kazoo-1.2.1/.gitignore

*.egg
*.egg-info
*.komodo*
*.kpf
*.log
*.pid
*.pyc
*.swp
*.*.swp
.*.*.swo
*~
bin/
build/
develop-eggs/
dist/
docs/_build
dropin.cache
eggs/
include
lib/
lib-python/
lib_pypy/
man/
parts/
share/
site-packages/
.coverage
.idea
.pip_cache
.project
.pydevproject
.tox
/.settings
/.metadata

kazoo-1.2.1/.travis.yml

language: python
python:
  - "2.7"
matrix:
  exclude:
    - python: "2.7"
  include:
    - python: "2.6"
      env: GEVENT_VERSION=0.13.8 ZOOKEEPER_VERSION=3.3.6
    - python: "2.6"
      env: GEVENT_VERSION=0.13.8 ZOOKEEPER_VERSION=3.4.5
    - python: "2.6"
      env: GEVENT_VERSION=1.0rc2 ZOOKEEPER_VERSION=3.3.6
    - python: "2.6"
      env: GEVENT_VERSION=1.0rc2 ZOOKEEPER_VERSION=3.4.5
    - python: "2.7"
      env: GEVENT_VERSION=0.13.8 ZOOKEEPER_VERSION=3.3.6
    - python: "2.7"
      env: GEVENT_VERSION=0.13.8 ZOOKEEPER_VERSION=3.4.5
    - python: "2.7"
      env: GEVENT_VERSION=1.0rc2 ZOOKEEPER_VERSION=3.3.6
    - python: "2.7"
      env: GEVENT_VERSION=1.0rc2 ZOOKEEPER_VERSION=3.4.5
    - python: "3.2"
      env: ZOOKEEPER_VERSION=3.3.6
    - python: "3.2"
      env: ZOOKEEPER_VERSION=3.4.5
    - python: "3.3"
      env: ZOOKEEPER_VERSION=3.3.6
    - python: "3.3"
      env: ZOOKEEPER_VERSION=3.4.5
    - python: "pypy"
      env: ZOOKEEPER_VERSION=3.3.6
    - python: "pypy"
      env: ZOOKEEPER_VERSION=3.4.5
notifications:
  email: false
before_install:
  - sudo apt-get install libevent-dev
install:
  - make
  - make zookeeper
script:
  - make test

kazoo-1.2.1/CHANGES.rst

Changelog
=========

1.2.1 (2013-08-01)
------------------

Bug Handling
************

- Issue #108: A circular import failure when importing
  kazoo.recipe.watchers directly has been resolved. Watchers and the
  partitioner now properly import KazooState from kazoo.protocol.states
  rather than kazoo.client.

- Issue #109: Partials can now be used properly as DataWatch callbacks.
  Callback functions are called with 3 args and fall back to 2 args if
  there's an argument error.

- Issue #106, #107: `client.create_async` didn't strip the chroot from the
  returned path.

1.2 (2013-07-24)
----------------

Features
********

- KazooClient can now be stopped more reliably even if it's in the middle of
  a long retry sleep. This utilizes the new interrupt feature of KazooRetry
  which lets the sleep be broken down into chunks and an interrupt function
  called to determine if the retry should fail early.

- Issue #62, #92, #89, #101, #102: Allow KazooRetry to have a max deadline,
  transition properly when the connection fails to LOST, and set up separate
  connection retry behavior from client command retry behavior. Patches by
  Mike Lundy.

- Issue #100: Make it easier to see exception context in the threading and
  connection modules.

- Issue #85: Increase information density of logs and don't prevent dynamic
  reconfiguration of log levels at runtime.

- Data-watchers for the same node are no longer 'stacked'. That is, if a get
  and an exists call occur for the same node with the same watch function,
  then it will be registered only once. This change results in Kazoo behaving
  per the Zookeeper client spec regarding repeat watch use.
Bug Handling ************ - Issue #53: Throw a warning upon starting if the chroot path doesn't exist so that it's more obvious when the chroot should be created before performing more operations. - Kazoo previously would let the same function be registered as a data-watch or child-watch multiple times, and then call it multiple times upon being triggered. This was non-compliant Zookeeper client behavior, the same watch can now only be registered once for the same znode path per Zookeeper client documentation. - Issue #105: Avoid rare import lock problems by moving module imports in client.py to the module scope. - Issue #103: Allow prefix-less sequential znodes. - Issue #98: Extend testing ZK harness to work with different file locations on some versions of Debian/Ubuntu. - Issue #97: Update some docstrings to reflect current state of handlers. - Issue #62, #92, #89, #101, #102: Allow KazooRetry to have a max deadline, transition properly when connection fails to LOST, and setup separate connection retry behavior from client command retry behavior. Patches by Mike Lundy. API Changes *********** - The `kazoo.testing.harness.KazooTestHarness` class directly inherits from `unittest.TestCase` and you need to ensure to call its `__init__` method. - DataWatch no longer takes any parameters besides for the optional function during instantiation. The additional options are now implicitly True, with the user being left to ignore events as they choose. See the DataWatch API docs for more information. - Issue #99: Better exception raised when the writer fails to close. A WriterNotClosedException that inherits from KazooException is now raised when the writer fails to close in time. 1.1 (2013-06-08) ---------------- Features ******** - Issue #93: Add timeout option to lock/semaphore acquire methods. - Issue #79 / #90: Add ability to pass the WatchedEvent to DataWatch and ChildWatch functions. - Respect large client timeout values when closing the connection. - Add a `max_leases` consistency check to the semaphore recipe. - Issue #76: Extend testing helpers to allow customization of the Java classpath by specifying the new `ZOOKEEPER_CLASSPATH` environment variable. - Issue #65: Allow non-blocking semaphore acquisition. Bug Handling ************ - Issue #96: Provide Windows compatibility in testing harness. - Issue #95: Handle errors deserializing connection response. - Issue #94: Clean up stray bytes in connection pipe. - Issue #87 / #88: Allow re-acquiring lock after cancel. - Issue #77: Use timeout in initial socket connection. - Issue #69: Only ensure path once in lock and semaphore recipes. - Issue #68: Closing the connection causes exceptions to be raised by watchers which assume the connection won't be closed when running commands. - Issue #66: Require ping reply before sending another ping, otherwise the connection will be considered dead and a ConnectionDropped will be raised to trigger a reconnect. - Issue #63: Watchers weren't reset on lost connection. - Issue #58: DataWatcher failed to re-register for changes after non-existent node was created then deleted. API Changes *********** - KazooClient.create_async now supports the makepath argument. - KazooClient.ensure_path now has an async version, ensure_path_async. 1.0 (2013-03-26) ---------------- Features ******** - Added a LockingQueue recipe. The queue first locks an item and removes it from the queue only after the consume() method is called. This enables other nodes to retake the item if an error occurs on the first node. 
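  A minimal usage sketch of the new recipe (the client setup, path, and
  comments below are illustrative assumptions, not part of this release
  note)::

      from kazoo.client import KazooClient
      from kazoo.recipe.queue import LockingQueue

      zk = KazooClient(hosts='127.0.0.1:2181')
      zk.start()

      queue = LockingQueue(zk, "/app/queue")
      queue.put(b"a task")

      item = queue.get()  # locks the item without removing it
      # ... process the item; if this node fails here, another node
      # can retake the still-queued item ...
      queue.consume()     # removes the item only after successful processing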
Bug Handling ************ - Issue #50: Avoid problems with sleep function in mixed gevent/threading setup. - Issue #56: Avoid issues with watch callbacks evaluating to false. 1.0b1 (2013-02-24) ------------------ Features ******** - Refactored the internal connection handler to use a single thread. It now uses a deque and pipe to signal the ZK thread that there's a new command to send, so that the ZK thread can send it, or retrieve a response. Processing ZK requests and responses serially in a single thread eliminates the need for a bunch of the locking, the peekable queue and two threads working on the same underlying socket. - Issue #48: Added documentation for the `retry` helper module. - Issue #55: Fix `os.pipe` file descriptor leak and introduce a `KazooClient.close` method. The method is particular useful in tests, where multiple KazooClients are created and closed in the same process. Bug Handling ************ - Issue #46: Avoid TypeError in GeneratorContextManager on process shutdown. - Issue #43: Let DataWatch return node data if allow_missing_node is used. 0.9 (2013-01-07) ---------------- API Changes *********** - When a retry operation ultimately fails, it now raises a `kazoo.retry.RetryFailedError` exception, instead of a general `Exception` instance. `RetryFailedError` also inherits from the base `KazooException`. Features ******** - Improvements to Debian packaging rules. Bug Handling ************ - Issue #39 / #41: Handle connection dropped errors during session writes. Ensure client connection is re-established to a new ZK node if available. - Issue #38: Set `CLOEXEC` flag on all sockets when available. - Issue #37 / #40: Handle timeout errors during `select` calls on sockets. - Issue #36: Correctly set `ConnectionHandler.writer_stopped` even if an exception is raised inside the writer, like a retry operation failing. 0.8 (2012-10-26) ---------------- API Changes *********** - The `KazooClient.__init__` took as `watcher` argument as its second keyword argument. The argument had no effect anymore since version 0.5 and was removed. Bug Handling ************ - Issue #35: `KazooClient.__init__` didn't pass on `retry_max_delay` to the retry helper. - Issue #34: Be more careful while handling socket connection errors. 0.7 (2012-10-15) ---------------- Features ******** - DataWatch now has a `allow_missing_node` setting that allows a watch to be set on a node that doesn't exist when the DataWatch is created. - Add new Queue recipe, with optional priority support. - Add new Counter recipe. - Added debian packaging rules. Bug Handling ************ - Issue #31 fixed: Only catch KazooExceptions in catch-all calls. - Issue #15 fixed again: Force sleep delay to be a float to appease gevent. - Issue #29 fixed: DataWatch and ChildrenWatch properly re-register their watches on server disconnect. 0.6 (2012-09-27) ---------------- API Changes *********** - Node paths are assumed to be Unicode objects. Under Python 2 pure-ascii strings will also be accepted. Node values are considered bytes. The byte type is an alias for `str` under Python 2. - New KeeperState.CONNECTED_RO state for Zookeeper servers connected in read-only mode. - New NotReadOnlyCallError exception when issuing a write change against a server thats currently read-only. Features ******** - Add support for Python 3.2, 3.3 and PyPy (only for the threading handler). - Handles connecting to Zookeeper 3.4+ read-only servers. - Automatic background scanning for a Read/Write server when connected to a server in read-only mode. 
- Add new Semaphore recipe. - Add a new `retry_max_delay` argument to the client and by default limit the retry delay to at most an hour regardless of exponential backoff settings. - Add new `randomize_hosts` argument to `KazooClient`, allowing one to disable host randomization. Bug Handling ************ - Fix bug with locks not handling intermediary lock contenders disappearing. - Fix bug with set_data type check failing to catch unicode values. - Fix bug with gevent 0.13.x backport of peekable queue. - Fix PatientChildrenWatch to use handler specific sleep function. 0.5 (2012-09-06) ---------------- Skipping a version to reflect the magnitude of the change. Kazoo is now a pure Python client with no C bindings. This release should run without a problem on alternate Python implementations such as PyPy and Jython. Porting to Python 3 in the future should also be much easier. Documentation ************* - Docs have been restructured to handle the new classes and locations of the methods from the pure Python refactor. Bug Handling ************ This change may introduce new bugs, however there is no longer the possibility of a complete Python segfault due to errors in the C library and/or the C binding. - Possible segfaults from the C lib are gone. - Password mangling due to the C lib is gone. - The party recipes didn't set their participating flag to False after leaving. Features ******** - New `client.command` and `client.server_version` API, exposing Zookeeper's four letter commands and giving access to structured version information. - Added 'include_data' option for get_children to include the node's Stat object. - Substantial increase in logging data with debug mode. All correspondence with the Zookeeper server can now be seen to help in debugging. API Changes *********** - The testing helpers have been moved from `testing.__init__` into a `testing.harness` module. The official API's of `KazooTestCase` and `KazooTestHarness` can still be directly imported from `testing`. - The kazoo.handlers.util module was removed. - Backwards compatible exception class aliases are provided for now in kazoo exceptions for the prior C exception names. - Unicode strings now work fine for node names and are properly converted to and from unicode objects. - The data value argument for the create and create_async methods of the client was made optional and defaults to an empty byte string. The data value must be a byte string. Unicode values are no longer allowed and will raise a TypeError. 0.3 (2012-08-23) ---------------- API Changes *********** - Handler interface now has an rlock_object for use by recipes. Bug Handling ************ - Fixed password bug with updated zc-zookeeper-static release, which retains null bytes in the password properly. - Fixed reconnect hammering, so that the reconnection follows retry jitter and retry backoff's. - Fixed possible bug with using a threading.Condition in the set partitioner. Set partitioner uses new rlock_object handler API to get an appropriate RLock for gevent. - Issue #17 fixed: Wrap timeout exceptions with staticmethod so they can be used directly as intended. Patch by Bob Van Zant. - Fixed bug with client reconnection looping indefinitely using an expired session id. 0.2 (2012-08-12) ---------------- Documentation ************* - Fixed doc references to start_async using an AsyncResult object, it uses an Event object. Bug Handling ************ - Issue #16 fixed: gevent zookeeper logging failed to handle a monkey patched logging setup. 
Logging is now setup such that a greenlet is used for logging messages under gevent, and the thread one is used otherwise. - Fixed bug similar to #14 for ChildrenWatch on the session listener. - Issue #14 fixed: DataWatch had inconsistent handling of the node it was watching not existing. DataWatch also properly spawns its _get_data function to avoid blocking session events. - Issue #15 fixed: sleep_func for SequentialGeventHandler was not set on the class appropriately leading to additional arguments being passed to gevent.sleep. - Issue #9 fixed: Threads/greenlets didn't gracefully shut down. Handler now has a start/stop that is used by the client when calling start and stop that shuts down the handler workers. This addresses errors and warnings that could be emitted upon process shutdown regarding a clean exit of the workers. - Issue #12 fixed: gevent 0.13 doesn't use the same start_new_thread as gevent 1.0 which resulted in a fully monkey-patched environment halting due to the wrong thread. Updated to use the older kazoo method of getting the real thread module object. API Changes *********** - The KazooClient handler is now officially exposed as KazooClient.handler so that the appropriate sync objects can be used by end-users. - Refactored ChildrenWatcher used by SetPartitioner into a publicly exposed PatientChildrenWatch under recipe.watchers. Deprecations ************ - connect/connect_async has been renamed to start/start_async to better match the stop to indicate connection handling. The prior names are aliased for the time being. Recipes ******* - Added Barrier and DoubleBarrier implementation. 0.2b1 (2012-07-27) ------------------ Bug Handling ************ - ZOOKEEPER-1318: SystemError is caught and rethrown as the proper invalid state exception in older zookeeper python bindings where this issue is still valid. - ZOOKEEPER-1431: Install the latest zc-zookeeper-static library or use the packaged ubuntu one for ubuntu 12.04 or later. - ZOOKEEPER-553: State handling isn't checked via this method, we track it in a simpler manner with the watcher to ensure we know the right state. Features ******** - Exponential backoff with jitter for retrying commands. - Gevent 0.13 and 1.0b support. - Lock, Party, SetPartitioner, and Election recipe implementations. - Data and Children watching API's. - State transition handling with listener registering to handle session state changes (choose to fatal the app on session expiration, etc.) - Zookeeper logging stream redirected into Python logging channel under the name 'Zookeeper'. - Base client library with handler support for threading and gevent async environments. kazoo-1.2.1/CONTRIBUTING.rst000066400000000000000000000043541217652145400153140ustar00rootroot00000000000000================= How to contribute ================= We gladly accept outside contributions. We use our `Github issue tracker `_ for both discussions and talking about new features or bugs. You can also fork the project and sent us a pull request. If you have a more general topic to discuss, the `user@zookeeper.apache.org `_ mailing list is a good place to do so. You can sometimes find us on IRC in the `#zookeeper channel on freenode `_. Development =========== If you want to work on the code and sent us a `pull request `_, first fork the repository on github to your own account. 
Then clone your new repository and run the build scripts:: git clone git@github.com:/kazoo.git cd kazoo make make zookeeper You need to have some supported version of Python installed and have it available as ``python`` in your shell. To run Zookeeper you also need a Java runtime (JRE or JDK) version 6 or 7. You can run all the tests by calling:: make test Or to run individual tests:: export ZOOKEEPER_PATH=//bin/zookeeper/ bin/nosetests -s -d kazoo.tests.test_client:TestClient.test_create The nose test runner allows you to filter by test module, class or individual test method. If you made changes to the documentation, you can build it locally:: make html And then open ``./docs/_build/html/index.html`` in a web browser to verify the correct rendering. Submitting changes ================== We appreciate getting changes sent as pull requests via github. We have travis-ci set up, which will run all tests on all supported version combinations for submitted pull requests, which makes it easy to see if new code breaks tests on some weird version combination. If you introduce new functionality, please also add documentation and a short entry in the top-level ``CHANGES.rst`` file. Legal ===== Currently we don't have any legal contributor agreement, so code ownership stays with the original authors. The project is licensed under the `Apache License Version 2 `_. kazoo-1.2.1/LICENSE000066400000000000000000000236371217652145400136650ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. 
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. kazoo-1.2.1/MANIFEST.in000066400000000000000000000003751217652145400144100ustar00rootroot00000000000000include CHANGES.rst include CONTRIBUTING.rst include README.rst include LICENSE include MANIFEST.in exclude .gitignore exclude .travis.yml exclude Makefile exclude run_failure.py recursive-include kazoo * recursive-exclude sw * global-exclude *pyc *pyo kazoo-1.2.1/Makefile000066400000000000000000000036731217652145400143160ustar00rootroot00000000000000HERE = $(shell pwd) BIN = $(HERE)/bin PYTHON = $(BIN)/python PIP_DOWNLOAD_CACHE ?= $(HERE)/.pip_cache INSTALL = $(BIN)/pip install INSTALL += --download-cache $(PIP_DOWNLOAD_CACHE) --use-mirrors BUILD_DIRS = bin build include lib lib64 man share GEVENT_VERSION ?= 1.0rc2 PYTHON_EXE = $(shell [ -f $(PYTHON) ] && echo $(PYTHON) || echo python) PYPY = $(shell $(PYTHON_EXE) -c "import sys; print(getattr(sys, 'pypy_version_info', False) and 'yes' or 'no')") TRAVIS ?= false TRAVIS_PYTHON_VERSION ?= $(shell $(PYTHON_EXE) -c "import sys; print('.'.join([str(s) for s in sys.version_info][:2]))") ZOOKEEPER = $(BIN)/zookeeper ZOOKEEPER_VERSION ?= 3.4.5 ZOOKEEPER_PATH ?= $(ZOOKEEPER) GEVENT_SUPPORTED = yes ifeq ($(findstring 3.,$(TRAVIS_PYTHON_VERSION)), 3.) GEVENT_SUPPORTED = no endif ifeq ($(PYPY),yes) GEVENT_SUPPORTED = no endif .PHONY: all build clean test zookeeper clean-zookeeper all: build $(PYTHON): python sw/virtualenv.py --distribute . 
rm distribute-0.6.*.tar.gz build: $(PYTHON) ifeq ($(GEVENT_SUPPORTED),yes) $(INSTALL) -U -r requirements_gevent.txt $(INSTALL) -f http://code.google.com/p/gevent/downloads/list gevent==$(GEVENT_VERSION) endif ifneq ($(TRAVIS), true) $(INSTALL) -U -r requirements_sphinx.txt endif $(INSTALL) -U -r requirements.txt $(PYTHON) setup.py develop $(INSTALL) kazoo[test] clean: rm -rf $(BUILD_DIRS) test: ZOOKEEPER_PATH=$(ZOOKEEPER_PATH) NOSE_LOGFORMAT='%(thread)d:%(filename)s: %(levelname)s: %(message)s' \ $(BIN)/nosetests -d -v --with-coverage kazoo.tests html: cd docs && \ make html $(ZOOKEEPER): @echo "Installing Zookeeper" mkdir -p bin cd bin && \ curl -C - http://apache.osuosl.org/zookeeper/zookeeper-$(ZOOKEEPER_VERSION)/zookeeper-$(ZOOKEEPER_VERSION).tar.gz | tar -zx mv bin/zookeeper-$(ZOOKEEPER_VERSION) bin/zookeeper cd bin/zookeeper chmod a+x bin/zookeeper/bin/zkServer.sh @echo "Finished installing Zookeeper" zookeeper: $(ZOOKEEPER) clean-zookeeper: rm -rf zookeeper bin/zookeeper kazoo-1.2.1/README.rst000066400000000000000000000014711217652145400143370ustar00rootroot00000000000000===== Kazoo ===== ``kazoo`` implements a higher level API to `Apache Zookeeper`_ for Python clients. See `the full docs`_ for more information. License ======= ``kazoo`` is offered under the Apache License 2.0. Authors ======= ``kazoo`` started under the `Nimbus Project`_ and through collaboration with the open-source community has been merged with code from `Mozilla`_ and the `Zope Corporation`_. It has seen further contributions from `reddit`_, `Quora`_ and `SageCloud`_ amongst others. .. _Apache Zookeeper: http://zookeeper.apache.org/ .. _the full docs: http://kazoo.rtfd.org/ .. _Nimbus Project: http://www.nimbusproject.org/ .. _Zope Corporation: http://zope.com/ .. _Mozilla: http://www.mozilla.org/ .. _reddit: http://www.reddit.com/ .. _Quora: https://www.quora.com/ .. _SageCloud: http://sagecloud.com/ kazoo-1.2.1/debian/000077500000000000000000000000001217652145400140675ustar00rootroot00000000000000kazoo-1.2.1/debian/changelog000066400000000000000000000002111217652145400157330ustar00rootroot00000000000000kazoo (0+git20130102) unstable; urgency=low * Initial package. -- Neil Williams Fri, 02 Jan 2013 23:20:03 -0800 kazoo-1.2.1/debian/clean000066400000000000000000000000211217652145400150650ustar00rootroot00000000000000kazoo.egg-info/* kazoo-1.2.1/debian/compat000066400000000000000000000000021217652145400152650ustar00rootroot000000000000008 kazoo-1.2.1/debian/control000066400000000000000000000037351217652145400155020ustar00rootroot00000000000000Source: kazoo Section: python Priority: optional Maintainer: Neil Williams Build-Depends: python-setuptools (>= 0.6b3), python-all (>= 2.6.6-3), debhelper (>= 8.0.0), python-repoze.sphinx.autointerface, python-sphinx (>= 1.0.7+dfsg) | python3-sphinx, Standards-Version: 3.9.3 Homepage: https://kazoo.readthedocs.org X-Python-Version: >= 2.6 Package: python-kazoo Architecture: all Depends: ${python:Depends}, ${misc:Depends} Description: higher level API to Apache Zookeeper for Python clients Kazoo features: . 
* Support for gevent 0.13 and gevent 1.0b * Unified asynchronous API for use with greenlets or threads * Lock, Party, Election, and Partitioner recipe implementations (more implementations are in development) * Data and Children Watchers * Integrated testing helpers for Zookeeper clusters * Simplified Zookeeper connection state tracking * Pure-Python based implementation of the wire protocol, avoiding all the memory leaks, lacking features, and debugging madness of the C library . Kazoo is heavily inspired by Netflix Curator simplifications and helpers. Package: python-kazoo-doc Architecture: all Section: doc Depends: ${misc:Depends}, ${sphinxdoc:Depends} Description: API to Apache Zookeeper for Python clients. - API documentation Kazoo features: . * Support for gevent 0.13 and gevent 1.0b * Unified asynchronous API for use with greenlets or threads * Lock, Party, Election, and Partitioner recipe implementations (more implementations are in development) * Data and Children Watchers * Integrated testing helpers for Zookeeper clusters * Simplified Zookeeper connection state tracking * Pure-Python based implementation of the wire protocol, avoiding all the memory leaks, lacking features, and debugging madness of the C library . Kazoo is heavily inspired by Netflix Curator simplifications and helpers. . This package contains the API documentation. kazoo-1.2.1/debian/copyright000066400000000000000000000005571217652145400160310ustar00rootroot00000000000000Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/ Upstream-Name: kazoo Source: https://github.com/python-zk/kazoo Files: * Copyright: 2012 Kazoo Team License: Apache-2.0 See /usr/share/common-licenses/Apache-2.0 Files: debian/* Copyright: 2012 Neil Williams License: Apache-2.0 See /usr/share/common-licenses/Apache-2.0 kazoo-1.2.1/debian/docs000066400000000000000000000000131217652145400147340ustar00rootroot00000000000000README.rst kazoo-1.2.1/debian/patches/000077500000000000000000000000001217652145400155165ustar00rootroot00000000000000kazoo-1.2.1/debian/patches/no-tag-build.patch000066400000000000000000000007101217652145400210170ustar00rootroot00000000000000Description: Unset build_tag so the installed module isn't "dev" suffixed. 
Author: Neil Williams Forwarded: not-needed Last-Update: 2012-12-24 Index: kazoo/setup.cfg =================================================================== --- kazoo.orig/setup.cfg 2012-12-21 19:25:50.649997478 -0800 +++ kazoo/setup.cfg 2012-12-23 22:43:45.557703554 -0800 @@ -1,5 +1,5 @@ [egg_info] -tag_build = dev +tag_build = [nosetests] where=kazoo kazoo-1.2.1/debian/patches/series000066400000000000000000000000231217652145400167260ustar00rootroot00000000000000no-tag-build.patch kazoo-1.2.1/debian/python-kazoo-doc.doc-base000066400000000000000000000005241217652145400206740ustar00rootroot00000000000000Document: python-kazoo-doc Title: Python Kazoo Documentation Author: Kazoo Team Section: Programming/Python Format: HTML Index: /usr/share/doc/python-kazoo-doc/html/index.html Files: /usr/share/doc/python-kazoo-doc/html/*.html /usr/share/doc/python-kazoo-doc/html/api/*.html /usr/share/doc/python-kazoo-doc/html/api/*/*.html kazoo-1.2.1/debian/python-kazoo-doc.docs000066400000000000000000000000131217652145400201400ustar00rootroot00000000000000build/html kazoo-1.2.1/debian/python-kazoo.install000066400000000000000000000000211217652145400201120ustar00rootroot00000000000000usr/lib/python2* kazoo-1.2.1/debian/rules000077500000000000000000000005611217652145400151510ustar00rootroot00000000000000#!/usr/bin/make -f %: dh $@ --with python2,sphinxdoc --buildsystem=python_distutils .PHONY: override_dh_installchangelogs override_dh_installchangelogs: dh_installchangelogs CHANGES.rst .PHONY: override_dh_auto_build override_dh_auto_build: sphinx-build -b html docs build/html dh_auto_build .PHONY: override_dh_clean override_dh_clean: rm -rf build dh_clean kazoo-1.2.1/debian/source/000077500000000000000000000000001217652145400153675ustar00rootroot00000000000000kazoo-1.2.1/debian/source/format000066400000000000000000000000141217652145400165750ustar00rootroot000000000000003.0 (quilt) kazoo-1.2.1/debian/watch000066400000000000000000000001121217652145400151120ustar00rootroot00000000000000version=3 https://github.com/python-zk/kazoo/tags .*/(\d[\d\.]+)\.tar\.gz kazoo-1.2.1/docs/000077500000000000000000000000001217652145400135755ustar00rootroot00000000000000kazoo-1.2.1/docs/Makefile000066400000000000000000000126771217652145400152520ustar00rootroot00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = ../bin/sphinx-build PAPER = BUILDDIR = _build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/kazoo.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/kazoo.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/kazoo" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/kazoo" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." 
man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." kazoo-1.2.1/docs/api.rst000066400000000000000000000011341217652145400150770ustar00rootroot00000000000000API Documentation ================= Comprehensive reference material for every public API exposed by `kazoo` is available within this chapter. The API documentation is organized alphabetically by module name. .. toctree:: :maxdepth: 1 api/client api/exceptions api/handlers/gevent api/handlers/threading api/handlers/utils api/interfaces api/protocol/states api/recipe/barrier api/recipe/counter api/recipe/election api/recipe/lock api/recipe/partitioner api/recipe/party api/recipe/queue api/recipe/watchers api/retry api/security api/testing kazoo-1.2.1/docs/api/000077500000000000000000000000001217652145400143465ustar00rootroot00000000000000kazoo-1.2.1/docs/api/client.rst000066400000000000000000000016121217652145400163560ustar00rootroot00000000000000.. _client_module: :mod:`kazoo.client` ---------------------------- .. automodule:: kazoo.client Public API ++++++++++ .. autoclass:: KazooClient() :members: :member-order: bysource .. automethod:: __init__ .. attribute:: handler The :class:`~kazoo.interfaces.IHandler` strategy used by this client. Gives access to appropriate synchronization objects. .. method:: retry(func, *args, **kwargs) Runs the given function with the provided arguments, retrying if it fails because the ZooKeeper connection is lost, see :ref:`retrying_commands`. .. attribute:: state A :class:`~kazoo.protocol.states.KazooState` attribute indicating the current higher-level connection state. .. autoclass:: TransactionRequest :members: :member-order: bysource kazoo-1.2.1/docs/api/exceptions.rst000066400000000000000000000024541217652145400172660ustar00rootroot00000000000000.. _exceptions_module: :mod:`kazoo.exceptions` ----------------------- .. automodule:: kazoo.exceptions Public API ++++++++++ .. autoexception:: KazooException .. autoexception:: ZookeeperError .. autoexception:: AuthFailedError .. autoexception:: BadVersionError .. autoexception:: ConfigurationError .. autoexception:: InvalidACLError .. autoexception:: LockTimeout .. autoexception:: NoChildrenForEphemeralsError .. autoexception:: NodeExistsError .. autoexception:: NoNodeError .. autoexception:: NotEmptyError Private API +++++++++++ .. 
autoexception:: APIError .. autoexception:: BadArgumentsError .. autoexception:: CancelledError .. autoexception:: ConnectionDropped .. autoexception:: ConnectionClosedError .. autoexception:: ConnectionLoss .. autoexception:: DataInconsistency .. autoexception:: MarshallingError .. autoexception:: NoAuthError .. autoexception:: NotReadOnlyCallError .. autoexception:: InvalidCallbackError .. autoexception:: OperationTimeoutError .. autoexception:: RolledBackError .. autoexception:: RuntimeInconsistency .. autoexception:: SessionExpiredError .. autoexception:: SessionMovedError .. autoexception:: SystemZookeeperError .. autoexception:: UnimplementedError .. autoexception:: WriterNotClosedException .. autoexception:: ZookeeperStoppedError kazoo-1.2.1/docs/api/handlers/000077500000000000000000000000001217652145400161465ustar00rootroot00000000000000kazoo-1.2.1/docs/api/handlers/gevent.rst000066400000000000000000000004301217652145400201650ustar00rootroot00000000000000.. _gevent_handler_module: :mod:`kazoo.handlers.gevent` ---------------------------- .. automodule:: kazoo.handlers.gevent Public API ++++++++++ .. autoclass:: SequentialGeventHandler :members: Private API +++++++++++ .. autoclass:: AsyncResult :members: kazoo-1.2.1/docs/api/handlers/threading.rst000066400000000000000000000005071217652145400206470ustar00rootroot00000000000000.. _thread_handler_module: :mod:`kazoo.handlers.threading` ------------------------------- .. automodule:: kazoo.handlers.threading Public API ++++++++++ .. autoclass:: SequentialThreadingHandler :members: Private API +++++++++++ .. autoclass:: AsyncResult :members: .. autoexception:: TimeoutError kazoo-1.2.1/docs/api/handlers/utils.rst000066400000000000000000000004551217652145400200440ustar00rootroot00000000000000.. _utils_module: :mod:`kazoo.handlers.utils` --------------------------- .. automodule:: kazoo.handlers.utils Public API ++++++++++ .. autofunction:: capture_exceptions .. autofunction:: wrap Private API +++++++++++ .. autofunction:: create_pipe .. autofunction:: create_tcp_socket kazoo-1.2.1/docs/api/interfaces.rst000066400000000000000000000017571217652145400172350ustar00rootroot00000000000000.. _interfaces_module: :mod:`kazoo.interfaces` ---------------------------- .. automodule:: kazoo.interfaces Public API ++++++++++ :class:`IHandler` implementations should be created by the developer to be passed into :class:`~kazoo.client.KazooClient` during instantiation for the preferred callback handling. If the developer needs to use objects implementing the :class:`IAsyncResult` interface, the :meth:`IHandler.async_result` method must be used instead of instantiating one directly. .. autointerface:: IHandler :members: Private API +++++++++++ The :class:`IAsyncResult` documents the proper implementation for providing a value that results from a Zookeeper completion callback. Since the :class:`~kazoo.client.KazooClient` returns an :class:`IAsyncResult` object instead of taking a completion callback for async functions, developers wishing to have their own callback called should use the :meth:`IAsyncResult.rawlink` method. .. autointerface:: IAsyncResult :members: kazoo-1.2.1/docs/api/protocol/000077500000000000000000000000001217652145400162075ustar00rootroot00000000000000kazoo-1.2.1/docs/api/protocol/states.rst000066400000000000000000000005361217652145400202500ustar00rootroot00000000000000.. _states_module: :mod:`kazoo.protocol.states` ---------------------------- .. automodule:: kazoo.protocol.states Public API ++++++++++ .. autoclass:: EventType .. 
autoclass:: KazooState .. autoclass:: KeeperState .. autoclass:: WatchedEvent .. autoclass:: ZnodeStat Private API +++++++++++ .. autoclass:: Callback kazoo-1.2.1/docs/api/recipe/000077500000000000000000000000001217652145400156155ustar00rootroot00000000000000kazoo-1.2.1/docs/api/recipe/barrier.rst000066400000000000000000000004611217652145400177760ustar00rootroot00000000000000.. _barrier_module: :mod:`kazoo.recipe.barrier` ---------------------------- .. automodule:: kazoo.recipe.barrier Public API ++++++++++ .. autoclass:: Barrier :members: .. automethod:: __init__ .. autoclass:: DoubleBarrier :members: .. automethod:: __init__ kazoo-1.2.1/docs/api/recipe/counter.rst000066400000000000000000000005101217652145400200220ustar00rootroot00000000000000.. _counter_module: :mod:`kazoo.recipe.counter` --------------------------- .. automodule:: kazoo.recipe.counter .. versionadded:: 0.7 The Counter class. Public API ++++++++++ .. autoclass:: Counter :members: .. automethod:: __init__ .. automethod:: __add__ .. automethod:: __sub__ kazoo-1.2.1/docs/api/recipe/election.rst000066400000000000000000000003371217652145400201540ustar00rootroot00000000000000.. _election_module: :mod:`kazoo.recipe.election` ---------------------------- .. automodule:: kazoo.recipe.election Public API ++++++++++ .. autoclass:: Election :members: .. automethod:: __init__ kazoo-1.2.1/docs/api/recipe/lock.rst000066400000000000000000000004411217652145400172760ustar00rootroot00000000000000.. _lock_module: :mod:`kazoo.recipe.lock` ---------------------------- .. automodule:: kazoo.recipe.lock Public API ++++++++++ .. autoclass:: Lock :members: .. automethod:: __init__ .. autoclass:: Semaphore :members: .. automethod:: __init__ kazoo-1.2.1/docs/api/recipe/partitioner.rst000066400000000000000000000004241217652145400207070ustar00rootroot00000000000000.. _partitioner_module: :mod:`kazoo.recipe.partitioner` ------------------------------- .. automodule:: kazoo.recipe.partitioner Public API ++++++++++ .. autoclass:: SetPartitioner :members: .. automethod:: __init__ .. autoclass:: PartitionState kazoo-1.2.1/docs/api/recipe/party.rst000066400000000000000000000007371217652145400175150ustar00rootroot00000000000000.. _party_module: :mod:`kazoo.recipe.party` ------------------------- .. automodule:: kazoo.recipe.party Public API ++++++++++ .. autoclass:: Party :members: :inherited-members: .. automethod:: __init__ .. automethod:: __iter__ .. automethod:: __len__ .. autoclass:: ShallowParty :members: :inherited-members: .. automethod:: __init__ .. automethod:: __iter__ .. automethod:: __len__ kazoo-1.2.1/docs/api/recipe/queue.rst000066400000000000000000000007741217652145400175030ustar00rootroot00000000000000.. _queue_module: :mod:`kazoo.recipe.queue` ------------------------- .. automodule:: kazoo.recipe.queue .. versionadded:: 0.6 The Queue class. .. versionadded:: 1.0 The LockingQueue class. Public API ++++++++++ .. autoclass:: Queue :members: :inherited-members: .. automethod:: __init__ .. automethod:: __len__ .. autoclass:: LockingQueue :members: :inherited-members: .. automethod:: __init__ .. automethod:: __len__ kazoo-1.2.1/docs/api/recipe/watchers.rst000066400000000000000000000007301217652145400201670ustar00rootroot00000000000000.. _watchers_module: :mod:`kazoo.recipe.watchers` ---------------------------- .. automodule:: kazoo.recipe.watchers Public API ++++++++++ .. autoclass:: DataWatch :members: .. automethod:: __init__ .. automethod:: __call__ .. autoclass:: ChildrenWatch :members: .. automethod:: __init__ .. automethod:: __call__ .. 
autoclass:: PatientChildrenWatch :members: .. automethod:: __init__ kazoo-1.2.1/docs/api/retry.rst000066400000000000000000000006061217652145400162470ustar00rootroot00000000000000.. _retry_module: :mod:`kazoo.retry` ---------------------------- .. automodule:: kazoo.retry Public API ++++++++++ .. autoclass:: KazooRetry :members: :member-order: bysource .. automethod:: __init__ .. automethod:: __call__ .. autoexception:: ForceRetryError .. autoexception:: RetryFailedError .. autoexception:: InterruptedError kazoo-1.2.1/docs/api/security.rst000066400000000000000000000005421217652145400167500ustar00rootroot00000000000000.. _security_module: :mod:`kazoo.security` ---------------------------- .. automodule:: kazoo.security Public API ++++++++++ .. autoclass:: ACL .. autoclass:: Id .. autofunction:: make_digest_acl Private API +++++++++++ .. autofunction:: make_acl .. autofunction:: make_digest_acl_credential .. autoclass: ACLPermission kazoo-1.2.1/docs/api/testing.rst000066400000000000000000000003331217652145400165540ustar00rootroot00000000000000.. _testing_harness_module: :mod:`kazoo.testing.harness` ---------------------------- .. automodule:: kazoo.testing.harness Public API ++++++++++ .. autoclass:: KazooTestHarness .. autoclass:: KazooTestCase kazoo-1.2.1/docs/async_usage.rst000066400000000000000000000070451217652145400166360ustar00rootroot00000000000000.. _async_usage: ================== Asynchronous Usage ================== The asynchronous Kazoo API relies on the :class:`~kazoo.interfaces.IAsyncResult` object which is returned by all the asynchronous methods. Callbacks can be added with the :meth:`~kazoo.interfaces.IAsyncResult.rawlink` method which works in a consistent manner whether threads or an asynchronous framework like gevent is used. Kazoo utilizes a pluggable :class:`~kazoo.interfaces.IHandler` interface which abstracts the callback system to ensure it works consistently. Connection Handling =================== Creating a connection: .. code-block:: python from kazoo.client import KazooClient from kazoo.handlers.gevent import SequentialGeventHandler zk = KazooClient(handler=SequentialGeventHandler()) # returns immediately event = zk.start_async() # Wait for 30 seconds and see if we're connected event.wait(timeout=30) if not zk.connected: # Not connected, stop trying to connect zk.stop() raise Exception("Unable to connect.") In this example, the `wait` method is used on the event object returned by the :meth:`~kazoo.client.KazooClient.start_async` method. A timeout is **always** used because its possible that we might never connect and that should be handled gracefully. The :class:`~kazoo.handlers.gevent.SequentialGeventHandler` is used when you want to use gevent. Kazoo doesn't rely on gevents monkey patching and requires that you pass in the appropriate handler, the default handler is :class:`~kazoo.handlers.threading.SequentialThreadingHandler`. Asynchronous Callbacks ====================== All kazoo `_async` methods except for :meth:`~kazoo.client.KazooClient.start_async` return an :class:`~kazoo.interfaces.IAsyncResult` instance. These instances allow you to see when a result is ready, or chain one or more callback functions to the result that will be called when it's ready. The callback function will be passed the :class:`~kazoo.interfaces.IAsyncResult` instance and should call the :meth:`~kazoo.interfaces.IAsyncResult.get` method on it to retrieve the value. This call could result in an exception being raised if the asynchronous function encountered an error. 
It should be caught and handled appropriately. Example: .. code-block:: python import sys from kazoo.exceptions import ConnectionLossException from kazoo.exceptions import NoAuthException def my_callback(async_obj): try: children = async_obj.get() do_something(children) except (ConnectionLossException, NoAuthException): sys.exit(1) # Both these statements return immediately, the second sets a callback # that will be run when get_children_async has its return value async_obj = zk.get_children_async("/some/node") async_obj.rawlink(my_callback) Zookeeper CRUD ============== The following CRUD methods all work the same as their synchronous counterparts except that they return an :class:`~kazoo.interfaces.IAsyncResult` object. Creating Method: * :meth:`~kazoo.client.KazooClient.create_async` Reading Methods: * :meth:`~kazoo.client.KazooClient.exists_async` * :meth:`~kazoo.client.KazooClient.get_async` * :meth:`~kazoo.client.KazooClient.get_children_async` Updating Methods: * :meth:`~kazoo.client.KazooClient.set_async` Deleting Methods: * :meth:`~kazoo.client.KazooClient.delete_async` The :meth:`~kazoo.client.KazooClient.ensure_path` has no asynchronous counterpart at the moment nor can the :meth:`~kazoo.client.KazooClient.delete_async` method do recursive deletes. kazoo-1.2.1/docs/basic_usage.rst000066400000000000000000000362221217652145400166010ustar00rootroot00000000000000.. _basic_usage: =========== Basic Usage =========== Connection Handling =================== To begin using Kazoo, a :class:`~kazoo.client.KazooClient` object must be created and a connection established: .. code-block:: python from kazoo.client import KazooClient zk = KazooClient(hosts='127.0.0.1:2181') zk.start() By default, the client will connect to a local Zookeeper server on the default port (2181). You should make sure Zookeeper is actually running there first, or the ``start`` command will be waiting until its default timeout. Once connected, the client will attempt to stay connected regardless of intermittent connection loss or Zookeeper session expiration. The client can be instructed to drop a connection by calling `stop`: .. code-block:: python zk.stop() Listening for Connection Events ------------------------------- It can be useful to know when the connection has been dropped, restored, or when the Zookeeper session has expired. To simplify this process Kazoo uses a state system and lets you register listener functions to be called when the state changes. .. code-block:: python from kazoo.client import KazooState def my_listener(state): if state == KazooState.LOST: # Register somewhere that the session was lost elif state == KazooState.SUSPENDED # Handle being disconnected from Zookeeper else: # Handle being connected/reconnected to Zookeeper zk.add_listener(my_listener) When using the :class:`kazoo.recipe.lock.Lock` or creating ephemeral nodes, its highly recommended to add a state listener so that your program can properly deal with connection interruptions or a Zookeeper session loss. Understanding Kazoo States -------------------------- The :class:`~kazoo.protocol.states.KazooState` object represents several states the client transitions through. The current state of the client can always be determined by viewing the :attr:`~kazoo.client.KazooClient.state` property. The possible states are: - LOST - CONNECTED - SUSPENDED When a :class:`~kazoo.client.KazooClient` instance is first created, it is in the `LOST` state. After a connection is established it transitions to the `CONNECTED` state. 
If any connection issues come up or if it needs to connect to a different Zookeeper cluster node, it will transition to `SUSPENDED` to let you know that commands cannot currently be run. The connection will also be lost if the Zookeeper node is no longer part of the quorum, resulting in a `SUSPENDED` state. Upon re-establishing a connection the client could transition to `LOST` if the session has expired, or `CONNECTED` if the session is still valid. .. note:: These states should be monitored using a listener as described previously so that the client behaves properly depending on the state of the connection. When a connection transitions to `SUSPENDED`, if the client is performing an action that requires agreement with other systems (using the Lock recipe for example), it should pause what it's doing. When the connection has been re-established the client can continue depending on if the state is `LOST` or transitions directly to `CONNECTED` again. When a connection transitions to `LOST`, any ephemeral nodes that have been created will be removed by Zookeeper. This affects all recipes that create ephemeral nodes, such as the Lock recipe. Lock's will need to be re-acquired after the state transitions to `CONNECTED` again. This transition occurs when a session expires or when you stop the clients connection. **Valid State Transitions** - *LOST -> CONNECTED* New connection, or previously lost one becoming connected. - *CONNECTED -> SUSPENDED* Connection loss to server occurred on a connection. - *CONNECTED -> LOST* Only occurs if invalid authentication credentials are provided after the connection was established. - *SUSPENDED -> LOST* Connection resumed to server, but then lost as the session was expired. - *SUSPENDED -> CONNECTED* Connection that was lost has been restored. Read-Only Connections --------------------- .. versionadded:: 0.6 Zookeeper 3.4 and above `supports a read-only mode `_. This mode must be turned on for the servers in the Zookeeper cluster for the client to utilize it. To use this mode with Kazoo, the :class:`~kazoo.client.KazooClient` should be called with the `read_only` option set to `True`. This will let the client connect to a Zookeeper node that has gone read-only, and the client will continue to scan for other nodes that are read-write. .. code-block:: python from kazoo.client import KazooClient zk = KazooClient(hosts='127.0.0.1:2181', read_only=True) zk.start() A new attribute on :class:`~kazoo.protocol.states.KeeperState` has been added, `CONNECTED_RO`. The connection states above are still valid, however upon `CONNECTED`, you will need to check the clients non- simplified state to see if the connection is `CONNECTED_RO`. For example: .. code-block:: python from kazoo.client import KazooState from kazoo.client import KeeperState @zk.add_listener def watch_for_ro(state): if state == KazooState.CONNECTED: if zk.client_state == KeeperState.CONNECTED_RO: print("Read only mode!") else: print("Read/Write mode!") It's important to note that a `KazooState` is passed in to the listener but the read-only information is only available by comparing the non-simplified client state to the `KeeperState` object. .. warning:: A client using read-only mode should not use any of the recipes. Zookeeper CRUD ============== Zookeeper includes several functions for creating, reading, updating, and deleting Zookeeper nodes (called znodes or nodes here). Kazoo adds several convenience methods and a more Pythonic API. 
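Taken together, and before each method is covered in detail below, a typical create/read/update/delete round trip looks roughly like this (the paths and values are purely illustrative):

.. code-block:: python

    # Create parent nodes as needed, then a node with data
    zk.ensure_path("/app")
    zk.create("/app/config", b"initial value")

    # Read the value and its metadata
    data, stat = zk.get("/app/config")

    # Update the value, then remove the node
    zk.set("/app/config", b"new value")
    zk.delete("/app/config")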
Creating Nodes -------------- Methods: * :meth:`~kazoo.client.KazooClient.ensure_path` * :meth:`~kazoo.client.KazooClient.create` :meth:`~kazoo.client.KazooClient.ensure_path` will recursively create the node and any nodes in the path necessary along the way, but can not set the data for the node, only the ACL. :meth:`~kazoo.client.KazooClient.create` creates a node and can set the data on the node along with a watch function. It requires the path to it to exist first, unless the `makepath` option is set to `True`. .. code-block:: python # Ensure a path, create if necessary zk.ensure_path("/my/favorite") # Create a node with data zk.create("/my/favorite/node", b"a value") Reading Data ------------ Methods: * :meth:`~kazoo.client.KazooClient.exists` * :meth:`~kazoo.client.KazooClient.get` * :meth:`~kazoo.client.KazooClient.get_children` :meth:`~kazoo.client.KazooClient.exists` checks to see if a node exists. :meth:`~kazoo.client.KazooClient.get` fetches the data of the node along with detailed node information in a :class:`~kazoo.protocol.states.ZnodeStat` structure. :meth:`~kazoo.client.KazooClient.get_children` gets a list of the children of a given node. .. code-block:: python # Determine if a node exists if zk.exists("/my/favorite"): # Do something # Print the version of a node and its data data, stat = zk.get("/my/favorite") print("Version: %s, data: %s" % (stat.version, data.decode("utf-8"))) # List the children children = zk.get_children("/my/favorite") print("There are %s children with names %s" % (len(children), children)) Updating Data ------------- Methods: * :meth:`~kazoo.client.KazooClient.set` :meth:`~kazoo.client.KazooClient.set` updates the data for a given node. A version for the node can be supplied, which will be required to match before updating the data, or a :exc:`~kazoo.exceptions.BadVersionError` will be raised instead of updating. .. code-block:: python zk.set("/my/favorite", b"some data") Deleting Nodes -------------- Methods: * :meth:`~kazoo.client.KazooClient.delete` :meth:`~kazoo.client.KazooClient.delete` deletes a node, and can optionally recursively delete all children of the node as well. A version can be supplied when deleting a node which will be required to match the version of the node before deleting it or a :exc:`~kazoo.exceptions.BadVersionError` will be raised instead of deleting. .. code-block:: python zk.delete("/my/favorite/node", recursive=True) .. _retrying_commands: Retrying Commands ================= Connections to Zookeeper may get interrupted if the Zookeeper server goes down or becomes unreachable. By default, kazoo does not retry commands, so these failures will result in an exception being raised. To assist with failures kazoo comes with a :meth:`~kazoo.client.KazooClient.retry` helper that will retry a function should one of the Zookeeper connection exceptions get raised. Example: .. code-block:: python result = zk.retry(zk.get, "/path/to/node") Some commands may have unique behavior that doesn't warrant automatic retries on a per command basis. For example, if one creates a node a connection might be lost before the command returns successfully but the node actually got created. This results in a :exc:`kazoo.exceptions.NodeExistsError` being raised when it runs again. A similar unique situation arises when a node is created with ephemeral and sequence options set, `documented here on the Zookeeper site `_. 
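One way to cope with this double-create case is to make the function passed to :meth:`~kazoo.client.KazooClient.retry` tolerant of the node already existing. This is only a minimal sketch (the path and value are illustrative); the lock snippet below shows the full pattern used by the built-in recipe:

.. code-block:: python

    from kazoo.exceptions import NodeExistsError

    def create_if_needed():
        # The connection may drop after the server created the node but
        # before the reply arrived, so a retry can see NodeExistsError.
        try:
            zk.create("/app/config", b"initial value", makepath=True)
        except NodeExistsError:
            pass  # an earlier attempt (or another client) already created it

    zk.retry(create_if_needed)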
Since the :meth:`~kazoo.client.KazooClient.retry` method takes a function to call and its arguments, a function that runs multiple Zookeeper commands could be passed to it so that the entire function will be retried if the connection is lost. This snippet from the lock implementation shows how it uses retry to re-run the function acquiring a lock, and checks to see if it was already created to handle this condition: .. code-block:: python # kazoo.recipe.lock snippet def acquire(self): """Acquire the mutex, blocking until it is obtained""" try: self.client.retry(self._inner_acquire) self.is_acquired = True except KazooException: # if we did ultimately fail, attempt to clean up self._best_effort_cleanup() self.cancelled = False raise def _inner_acquire(self): self.wake_event.clear() # make sure our election parent node exists if not self.assured_path: self.client.ensure_path(self.path) node = None if self.create_tried: node = self._find_node() else: self.create_tried = True if not node: node = self.client.create(self.create_path, self.data, ephemeral=True, sequence=True) # strip off path to node node = node[len(self.path) + 1:] `create_tried` records whether it has tried to create the node already in the event the connection is lost before the node name is returned. Custom Retries -------------- Sometimes you may wish to have specific retry policies for a command or set of commands that differs from the :meth:`~kazoo.client.KazooClient.retry` method. You can manually create a :class:`~kazoo.retry.KazooRetry` instance with the specific retry policy you prefer: .. code-block:: python from kazoo.retry import KazooRetry kr = KazooRetry(max_tries=3, ignore_expire=False) result = kr(client.get, "/some/path") This will retry the ``client.get`` command up to 3 times, and raise a session expiration if it occurs. You can also make an instance with the default behavior that ignores session expiration during a retry. Watchers ======== Kazoo can set watch functions on a node that can be triggered either when the node has changed or when the children of the node change. This change to the node or children can also be the node or its children being deleted. Watchers can be set in two different ways, the first is the style that Zookeeper supports by default for one-time watch events. These watch functions will be called once by kazoo, and do *not* receive session events, unlike the native Zookeeper watches. Using this style requires the watch function to be passed to one of these methods: * :meth:`~kazoo.client.KazooClient.get` * :meth:`~kazoo.client.KazooClient.get_children` * :meth:`~kazoo.client.KazooClient.exists` A watch function passed to :meth:`~kazoo.client.KazooClient.get` or :meth:`~kazoo.client.KazooClient.exists` will be called when the data on the node changes or the node itself is deleted. It will be passed a :class:`~kazoo.protocol.states.WatchedEvent` instance. .. code-block:: python def my_func(event): # check to see what the children are now # Call my_func when the children change children = zk.get_children("/my/favorite/node", watch=my_func) Kazoo includes a higher level API that watches for data and children modifications that's easier to use as it doesn't require re-setting the watch every time the event is triggered. It also passes in the data and :class:`~kazoo.protocol.states.ZnodeStat` when watching a node or the list of children when watching a nodes children. 
Watch functions registered with this API will be called immediately and every time there's a change, or until the function returns False. If `allow_session_lost` is set to `True`, then the function will no longer be called if the session is lost. The following methods provide this functionality: * :class:`~kazoo.recipe.watchers.ChildrenWatch` * :class:`~kazoo.recipe.watchers.DataWatch` These classes are available directly on the :class:`~kazoo.client.KazooClient` instance and don't require the client object to be passed in when used in this manner. The instance returned by instantiating either of the classes can be called directly allowing them to be used as decorators: .. code-block:: python @zk.ChildrenWatch("/my/favorite/node") def watch_children(children): print("Children are now: %s" % children) # Above function called immediately, and from then on @zk.DataWatch("/my/favorite") def watch_node(data, stat): print("Version: %s, data: %s" % (stat.version, data.decode("utf-8"))) Transactions ============ .. versionadded:: 0.6 Zookeeper 3.4 and above supports the sending of multiple commands at once that will be committed as a single atomic unit. Either they will all succeed or they will all fail. The result of a transaction will be a list of the success/failure results for each command in the transaction. .. code-block:: python transaction = zk.transaction() transaction.check('/node/a', version=3) transaction.create('/node/b', b"a value") results = transaction.commit() The :meth:`~kazoo.client.KazooClient.transaction` method returns a :class:`~kazoo.client.TransactionRequest` instance. It's methods may be called to queue commands to be completed in the transaction. When the transaction is ready to be sent, the :meth:`~kazoo.client.TransactionRequest.commit` method on it is called. In the example above, there's a command not available unless a transaction is being used, `check`. This can check nodes for a specific version, which could be used to make the transaction fail if a node doesn't match a version that it should be at. In this case the node `/node/a` must be at version 3 or `/node/b` will not be created. kazoo-1.2.1/docs/changelog.rst000066400000000000000000000000341217652145400162530ustar00rootroot00000000000000.. include:: ../CHANGES.rst kazoo-1.2.1/docs/conf.py000066400000000000000000000175351217652145400151070ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # kazoo documentation build configuration file, created by # sphinx-quickstart on Fri Nov 11 13:23:01 2011. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys class Mock(object): def __init__(self, *args): pass def __getattr__(self, name): return Mock MOCK_MODULES = ['zookeeper'] for mod_name in MOCK_MODULES: sys.modules[mod_name] = Mock() # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.insert(0, os.path.abspath('.')) # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. 
They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.viewcode', 'repoze.sphinx.autointerface', ] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'kazoo' copyright = u'2011-2013, Kazoo team' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = '1.2' # The full version, including alpha/beta/rc tags. release = '1.2.1' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. 
#html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'kazoodoc' # -- Options for LaTeX output -------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'kazoo.tex', u'kazoo Documentation', u'Various Authors', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'kazoo', u'kazoo Documentation', [u'Various Authors'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------------ # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'kazoo', u'kazoo Documentation', u'Various Authors', 'kazoo', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' kazoo-1.2.1/docs/glossary.rst000066400000000000000000000004341217652145400161730ustar00rootroot00000000000000.. _glossary: Glossary ======== .. glossary:: Zookeeper `Apache Zookeeper `_ is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. 
kazoo-1.2.1/docs/implementation.rst000066400000000000000000000036331217652145400173610ustar00rootroot00000000000000.. _implementation_details: ====================== Implementation Details ====================== Up to version 0.3 kazoo used the Python bindings to the Zookeeper C library. Unfortunately those bindings are fairly buggy and required a fair share of weird workarounds to interface with the native OS thread used in those bindings. Starting with version 0.4 kazoo implements the entire Zookeeper wire protocol itself in pure Python. Doing so removed the need for the workarounds and made it much easier to implement the features missing in the C bindings. Handlers ======== Both the Kazoo handlers run 3 separate queues to help alleviate deadlock issues and ensure consistent execution order regardless of environment. The :class:`~kazoo.handlers.gevent.SequentialGeventHandler` runs a separate greenlet for each queue that processes the callbacks queued in order. The :class:`~kazoo.handlers.threading.SequentialThreadingHandler` runs a separate thread for each queue that processes the callbacks queued in order (thus the naming scheme which notes they are sequential in anticipation that there could be handlers shipped in the future which don't make this guarantee). Callbacks are queued by type, the 3 types being: 1. Session events (State changes, registered listener functions) 2. Watch events (Watch callbacks, DataWatch, and ChildrenWatch functions) 3. Completion callbacks (Functions chained to :class:`~kazoo.interfaces.IAsyncResult` objects) This ensures that calls can be made to Zookeeper from any callback **except for a state listener** without worrying that critical session events will be blocked. .. warning:: Its important to remember that if you write code that blocks in one of these functions then no queued functions of that type will be executed until the code stops blocking. If your code might block, it should run itself in a separate greenlet/thread so that the other callbacks can run. kazoo-1.2.1/docs/index.rst000066400000000000000000000060571217652145400154460ustar00rootroot00000000000000===== kazoo ===== Kazoo is a Python library designed to make working with :term:`Zookeeper` a more hassle-free experience that is less prone to errors. Kazoo features: * A wide range of recipe implementations, like Lock, Election or Queue * Data and Children Watchers * Simplified Zookeeper connection state tracking * Unified asynchronous API for use with greenlets or threads * Support for gevent 0.13 and gevent 1.0 * Support for Zookeeper 3.3 and 3.4 servers * Integrated testing helpers for Zookeeper clusters * Pure-Python based implementation of the wire protocol, avoiding all the memory leaks, lacking features, and debugging madness of the C library Kazoo is heavily inspired by `Netflix Curator`_ simplifications and helpers. .. note:: You should be familiar with Zookeeper and have read the `Zookeeper Programmers Guide`_ before using `kazoo`. Reference Docs ============== .. toctree:: :maxdepth: 1 install basic_usage async_usage implementation testing api Changelog Why === Using :term:`Zookeeper` in a safe manner can be difficult due to the variety of edge-cases in :term:`Zookeeper` and other bugs that have been present in the Python C binding. Due to how the C library utilizes a separate C thread for :term:`Zookeeper` communication some libraries like `gevent`_ also don't work properly by default. 
By utilizing a pure Python implementation, Kazoo handles all of these cases and provides a new asynchronous API which is consistent when using threads or `gevent`_ greenlets. Source Code =========== All source code is available on `github under kazoo `_. Bugs/Support ============ Bugs and support issues should be reported on the `kazoo github issue tracker `_. The developers of ``kazoo`` can frequently be found on the Freenode IRC network in the #zookeeper channel. For general discussions, please use the `python-zk `_ mailing list hosted on Google Groups. Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`glossary` .. toctree:: :hidden: glossary License ======= ``kazoo`` is offered under the Apache License 2.0. Authors ======= ``kazoo`` started under the `Nimbus Project`_ and through collaboration with the open-source community has been merged with code from `Mozilla`_ and the `Zope Corporation`_. It has seen further contributions from `reddit`_, `Quora`_ and `SageCloud`_. .. _Apache Zookeeper: http://zookeeper.apache.org/ .. _Zookeeper Programmers Guide: http://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html .. _Zookeeper Recipes: http://zookeeper.apache.org/doc/current/recipes.html#sc_recoverableSharedLocks .. _Nimbus Project: http://www.nimbusproject.org/ .. _Zope Corporation: http://zope.com/ .. _Mozilla: http://www.mozilla.org/ .. _Netflix Curator: https://github.com/Netflix/curator .. _gevent: http://gevent.org/ .. _reddit: http://www.reddit.com/ .. _Quora: https://www.quora.com/ .. _SageCloud: http://sagecloud.com/ kazoo-1.2.1/docs/install.rst000066400000000000000000000004151217652145400157750ustar00rootroot00000000000000.. _install: ========== Installing ========== kazoo can be installed via ``pip`` or ``easy_install``: .. code-block:: bash $ pip install kazoo Kazoo implements the Zookeeper protocol in pure Python, so you don't need any Python Zookeeper C bindings installed. kazoo-1.2.1/docs/make.bat000066400000000000000000000117461217652145400152130ustar00rootroot00000000000000@ECHO OFF REM Command file for Sphinx documentation if "%SPHINXBUILD%" == "" ( set SPHINXBUILD=sphinx-build ) set BUILDDIR=_build set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . set I18NSPHINXOPTS=%SPHINXOPTS% . if NOT "%PAPER%" == "" ( set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% ) if "%1" == "" goto help if "%1" == "help" ( :help echo.Please use `make ^` where ^ is one of echo. html to make standalone HTML files echo. dirhtml to make HTML files named index.html in directories echo. singlehtml to make a single large HTML file echo. pickle to make pickle files echo. json to make JSON files echo. htmlhelp to make HTML files and a HTML help project echo. qthelp to make HTML files and a qthelp project echo. devhelp to make HTML files and a Devhelp project echo. epub to make an epub echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter echo. text to make text files echo. man to make manual pages echo. texinfo to make Texinfo files echo. gettext to make PO message catalogs echo. changes to make an overview over all changed/added/deprecated items echo. linkcheck to check all external links for integrity echo. 
doctest to run all doctests embedded in the documentation if enabled goto end ) if "%1" == "clean" ( for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i del /q /s %BUILDDIR%\* goto end ) if "%1" == "html" ( %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/html. goto end ) if "%1" == "dirhtml" ( %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. goto end ) if "%1" == "singlehtml" ( %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. goto end ) if "%1" == "pickle" ( %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the pickle files. goto end ) if "%1" == "json" ( %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the JSON files. goto end ) if "%1" == "htmlhelp" ( %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run HTML Help Workshop with the ^ .hhp project file in %BUILDDIR%/htmlhelp. goto end ) if "%1" == "qthelp" ( %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run "qcollectiongenerator" with the ^ .qhcp project file in %BUILDDIR%/qthelp, like this: echo.^> qcollectiongenerator %BUILDDIR%\qthelp\kazoo.qhcp echo.To view the help file: echo.^> assistant -collectionFile %BUILDDIR%\qthelp\kazoo.ghc goto end ) if "%1" == "devhelp" ( %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp if errorlevel 1 exit /b 1 echo. echo.Build finished. goto end ) if "%1" == "epub" ( %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub if errorlevel 1 exit /b 1 echo. echo.Build finished. The epub file is in %BUILDDIR%/epub. goto end ) if "%1" == "latex" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex if errorlevel 1 exit /b 1 echo. echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. goto end ) if "%1" == "text" ( %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text if errorlevel 1 exit /b 1 echo. echo.Build finished. The text files are in %BUILDDIR%/text. goto end ) if "%1" == "man" ( %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man if errorlevel 1 exit /b 1 echo. echo.Build finished. The manual pages are in %BUILDDIR%/man. goto end ) if "%1" == "texinfo" ( %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo if errorlevel 1 exit /b 1 echo. echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. goto end ) if "%1" == "gettext" ( %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale if errorlevel 1 exit /b 1 echo. echo.Build finished. The message catalogs are in %BUILDDIR%/locale. goto end ) if "%1" == "changes" ( %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes if errorlevel 1 exit /b 1 echo. echo.The overview file is in %BUILDDIR%/changes. goto end ) if "%1" == "linkcheck" ( %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck if errorlevel 1 exit /b 1 echo. echo.Link check complete; look for any errors in the above output ^ or in %BUILDDIR%/linkcheck/output.txt. goto end ) if "%1" == "doctest" ( %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest if errorlevel 1 exit /b 1 echo. 
echo.Testing of doctests in the sources finished, look at the ^ results in %BUILDDIR%/doctest/output.txt. goto end ) :end kazoo-1.2.1/docs/testing.rst000066400000000000000000000036621217652145400160130ustar00rootroot00000000000000.. _testing: ======= Testing ======= Kazoo has several test harnesses used internally for its own tests that are exposed as public API's for use in your own tests for common Zookeeper cluster management and session testing. They can be mixed in with your own `unittest` or `nose` tests along with a `mock` object that allows you to force specific `KazooClient` commands to fail in various ways. The test harness needs to be able to find the Zookeeper Java libraries. You need to specify an environment variable called `ZOOKEEPER_PATH` and point it to their location, for example `/usr/share/java`. The directory should contain a `zookeeper-*.jar` and a `lib` directory containing at least a `log4j-*.jar`. If your Java setup is complex, you may also override our classpath mechanism completely by specifying an environment variable called `ZOOKEEPER_CLASSPATH`. If provided, it will be used unmodified as the Java classpath for Zookeeper. Kazoo Test Harness ================== The :class:`~kazoo.testing.harness.KazooTestHarness` can be used directly or mixed in with your test code. Example: .. code-block:: python from kazoo.testing import KazooTestHarness class MyTest(KazooTestHarness): def setUp(self): self.setup_zookeeper() def tearDown(self): self.teardown_zookeeper() def testmycode(self): self.client.ensure_path('/test/path') result = self.client.get('/test/path') ... Kazoo Test Case =============== The :class:`~kazoo.testing.harness.KazooTestCase` is complete test case that is equivalent to the mixin setup of :class:`~kazoo.testing.harness.KazooTestHarness`. An equivalent test to the one above: .. code-block:: python from kazoo.testing import KazooTestCase class MyTest(KazooTestCase): def testmycode(self): self.client.ensure_path('/test/path') result = self.client.get('/test/path') ... 
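As noted above, the harness locates the Zookeeper Java libraries through the `ZOOKEEPER_PATH` (or `ZOOKEEPER_CLASSPATH`) environment variable. It can also be set from within the test process itself, provided this happens before the harness spins up the cluster; a minimal sketch, with purely illustrative paths:

.. code-block:: python

    import os

    # Point the harness at the directory containing zookeeper-*.jar and lib/
    os.environ.setdefault("ZOOKEEPER_PATH", "/usr/share/java")

    from kazoo.testing import KazooTestCase

    class MyEnvTest(KazooTestCase):
        def testroundtrip(self):
            self.client.create('/test/path', b"some data", makepath=True)
            data, stat = self.client.get('/test/path')
            assert data == b"some data"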
kazoo-1.2.1/kazoo/000077500000000000000000000000001217652145400137705ustar00rootroot00000000000000kazoo-1.2.1/kazoo/__init__.py000066400000000000000000000000021217652145400160710ustar00rootroot00000000000000# kazoo-1.2.1/kazoo/client.py000066400000000000000000001322661217652145400156320ustar00rootroot00000000000000"""Kazoo Zookeeper Client""" import inspect import logging import os import re import warnings from collections import defaultdict, deque from functools import partial from os.path import split from kazoo.exceptions import ( AuthFailedError, ConfigurationError, ConnectionClosedError, ConnectionLoss, NoNodeError, NodeExistsError, SessionExpiredError, WriterNotClosedException, ) from kazoo.handlers.threading import SequentialThreadingHandler from kazoo.handlers.utils import capture_exceptions, wrap from kazoo.hosts import collect_hosts from kazoo.protocol.connection import ConnectionHandler from kazoo.protocol.paths import normpath from kazoo.protocol.paths import _prefix_root from kazoo.protocol.serialization import ( Auth, CheckVersion, CloseInstance, Create, Delete, Exists, GetChildren, GetChildren2, GetACL, SetACL, GetData, SetData, Sync, Transaction ) from kazoo.protocol.states import KazooState from kazoo.protocol.states import KeeperState from kazoo.retry import KazooRetry from kazoo.security import ACL from kazoo.security import OPEN_ACL_UNSAFE # convenience API from kazoo.recipe.barrier import Barrier from kazoo.recipe.barrier import DoubleBarrier from kazoo.recipe.counter import Counter from kazoo.recipe.election import Election from kazoo.recipe.lock import Lock from kazoo.recipe.lock import Semaphore from kazoo.recipe.partitioner import SetPartitioner from kazoo.recipe.party import Party from kazoo.recipe.party import ShallowParty from kazoo.recipe.queue import Queue from kazoo.recipe.queue import LockingQueue from kazoo.recipe.watchers import ChildrenWatch from kazoo.recipe.watchers import DataWatch try: # pragma: nocover basestring except NameError: # pragma: nocover basestring = str LOST_STATES = (KeeperState.EXPIRED_SESSION, KeeperState.AUTH_FAILED, KeeperState.CLOSED) ENVI_VERSION = re.compile('[\w\s:.]*=([\d\.]*).*', re.DOTALL) log = logging.getLogger(__name__) _RETRY_COMPAT_DEFAULTS = dict( max_retries=None, retry_delay=0.1, retry_backoff=2, retry_jitter=0.8, retry_max_delay=3600, ) _RETRY_COMPAT_MAPPING = dict( max_retries='max_tries', retry_delay='delay', retry_backoff='backoff', retry_jitter='max_jitter', retry_max_delay='max_delay', ) class KazooClient(object): """An Apache Zookeeper Python client supporting alternate callback handlers and high-level functionality. Watch functions registered with this class will not get session events, unlike the default Zookeeper watches. They will also be called with a single argument, a :class:`~kazoo.protocol.states.WatchedEvent` instance. """ def __init__(self, hosts='127.0.0.1:2181', timeout=10.0, client_id=None, handler=None, default_acl=None, auth_data=None, read_only=None, randomize_hosts=True, connection_retry=None, command_retry=None, logger=None, **kwargs): """Create a :class:`KazooClient` instance. All time arguments are in seconds. :param hosts: Comma-separated list of hosts to connect to (e.g. 127.0.0.1:2181,127.0.0.1:2182). :param timeout: The longest to wait for a Zookeeper connection. :param client_id: A Zookeeper client id, used when re-establishing a prior session connection. :param handler: An instance of a class implementing the :class:`~kazoo.interfaces.IHandler` interface for callback handling. 
:param default_acl: A default ACL used on node creation. :param auth_data: A list of authentication credentials to use for the connection. Should be a list of (scheme, credential) tuples as :meth:`add_auth` takes. :param read_only: Allow connections to read only servers. :param randomize_hosts: By default randomize host selection. :param connection_retry: A :class:`kazoo.retry.KazooRetry` object to use for retrying the connection to Zookeeper. Also can be a dict of options which will be used for creating one. :param command_retry: A :class:`kazoo.retry.KazooRetry` object to use for the :meth:`KazooClient.retry` method. Also can be a dict of options which will be used for creating one. :param logger: A custom logger to use instead of the module global `log` instance. Basic Example: .. code-block:: python zk = KazooClient() zk.start() children = zk.get_children('/') zk.stop() As a convenience all recipe classes are available as attributes and get automatically bound to the client. For example:: zk = KazooClient() zk.start() lock = zk.Lock('/lock_path') .. versionadded:: 0.6 The read_only option. Requires Zookeeper 3.4+ .. versionadded:: 0.6 The retry_max_delay option. .. versionadded:: 0.6 The randomize_hosts option. .. versionchanged:: 0.8 Removed the unused watcher argument (was second argument). .. versionadded:: 1.2 The connection_retry, command_retry and logger options. """ self.logger = logger or log # Record the handler strategy used self.handler = handler if handler else SequentialThreadingHandler() if inspect.isclass(self.handler): raise ConfigurationError("Handler must be an instance of a class, " "not the class: %s" % self.handler) self.auth_data = auth_data if auth_data else set([]) self.default_acl = default_acl self.randomize_hosts = randomize_hosts self.hosts, chroot = collect_hosts(hosts, randomize_hosts) if chroot: self.chroot = normpath(chroot) else: self.chroot = '' # Curator like simplified state tracking, and listeners for # state transitions self._state = KeeperState.CLOSED self.state = KazooState.LOST self.state_listeners = set() self._reset() self.read_only = read_only if client_id: self._session_id = client_id[0] self._session_passwd = client_id[1] else: self._reset_session() # ZK uses milliseconds self._session_timeout = int(timeout * 1000) # We use events like twitter's client to track current and # desired state (connected, and whether to shutdown) self._live = self.handler.event_object() self._writer_stopped = self.handler.event_object() self._stopped = self.handler.event_object() self._stopped.set() self._writer_stopped.set() self.retry = self._conn_retry = None if connection_retry is not None: self._conn_retry = connection_retry if self.handler.sleep_func != self._conn_retry.sleep_func: raise ConfigurationError("Retry handler and event handler " " must use the same sleep func") if command_retry is not None: self.retry = command_retry if self.handler.sleep_func != self.comand_retry.sleep_func: raise ConfigurationError("Command retry handler and event handler " " must use the same sleep func") if self.retry is None or self._conn_retry is None: old_retry_keys = dict(_RETRY_COMPAT_DEFAULTS) for key in old_retry_keys: try: old_retry_keys[key] = kwargs.pop(key) warnings.warn('Passing retry configuration param %s to the' ' client directly is deprecated, please pass a' ' configured retry object (using param %s)' % ( key, _RETRY_COMPAT_MAPPING[key]), DeprecationWarning, stacklevel=2) except KeyError: pass retry_keys = {} for oldname, value in old_retry_keys.items(): 
retry_keys[_RETRY_COMPAT_MAPPING[oldname]] = value if self._conn_retry is None: self._conn_retry = KazooRetry( sleep_func=self.handler.sleep_func, **retry_keys) if self.retry is None: self.retry = KazooRetry( sleep_func=self.handler.sleep_func, **retry_keys) self._conn_retry.interrupt = lambda: self._stopped.is_set() self._connection = ConnectionHandler(self, self._conn_retry.copy(), logger=self.logger) self.Barrier = partial(Barrier, self) self.Counter = partial(Counter, self) self.DoubleBarrier = partial(DoubleBarrier, self) self.ChildrenWatch = partial(ChildrenWatch, self) self.DataWatch = partial(DataWatch, self) self.Election = partial(Election, self) self.Lock = partial(Lock, self) self.Party = partial(Party, self) self.Queue = partial(Queue, self) self.LockingQueue = partial(LockingQueue, self) self.SetPartitioner = partial(SetPartitioner, self) self.Semaphore = partial(Semaphore, self) self.ShallowParty = partial(ShallowParty, self) # If we got any unhandled keywords, complain like python would if kwargs: raise TypeError('__init__() got unexpected keyword arguments: %s' % (kwargs.keys(),)) def _reset(self): """Resets a variety of client states for a new connection.""" self._queue = deque() self._pending = deque() self._reset_watchers() self._reset_session() self.last_zxid = 0 def _reset_watchers(self): self._child_watchers = defaultdict(set) self._data_watchers = defaultdict(set) def _reset_session(self): self._session_id = None self._session_passwd = b'\x00' * 16 @property def client_state(self): """Returns the last Zookeeper client state This is the non-simplified state information and is generally not as useful as the simplified KazooState information. """ return self._state @property def client_id(self): """Returns the client id for this Zookeeper session if connected. :returns: client id which consists of the session id and password. :rtype: tuple """ if self._live.is_set(): return (self._session_id, self._session_passwd) return None @property def connected(self): """Returns whether the Zookeeper connection has been established.""" return self._live.is_set() def add_listener(self, listener): """Add a function to be called for connection state changes. This function will be called with a :class:`~kazoo.protocol.states.KazooState` instance indicating the new connection state on state transitions. .. warning:: This function must not block. If its at all likely that it might need data or a value that could result in blocking than the :meth:`~kazoo.interfaces.IHandler.spawn` method should be used so that the listener can return immediately. 
""" if not (listener and callable(listener)): raise ConfigurationError("listener must be callable") self.state_listeners.add(listener) def remove_listener(self, listener): """Remove a listener function""" self.state_listeners.discard(listener) def _make_state_change(self, state): # skip if state is current if self.state == state: return self.state = state # Create copy of listeners for iteration in case one needs to # remove itself for listener in list(self.state_listeners): try: remove = listener(state) if remove is True: self.remove_listener(listener) except Exception: self.logger.exception("Error in connection state listener") def _session_callback(self, state): if state == self._state: return # Note that we don't check self.state == LOST since that's also # the client's initial state dead_state = self._state in LOST_STATES self._state = state # If we were previously closed or had an expired session, and # are now connecting, don't bother with the rest of the # transitions since they only apply after # we've established a connection if dead_state and state == KeeperState.CONNECTING: self.logger.debug("Skipping state change") return if state in (KeeperState.CONNECTED, KeeperState.CONNECTED_RO): self.logger.info("Zookeeper connection established, state: %s", state) self._live.set() self._make_state_change(KazooState.CONNECTED) elif state in LOST_STATES: self.logger.info("Zookeeper session lost, state: %s", state) self._live.clear() self._make_state_change(KazooState.LOST) self._notify_pending(state) self._reset() else: self.logger.info("Zookeeper connection lost") # Connection lost self._live.clear() self._notify_pending(state) self._make_state_change(KazooState.SUSPENDED) self._reset_watchers() def _notify_pending(self, state): """Used to clear a pending response queue and request queue during connection drops.""" if state == KeeperState.AUTH_FAILED: exc = AuthFailedError() elif state == KeeperState.EXPIRED_SESSION: exc = SessionExpiredError() else: exc = ConnectionLoss() while True: try: request, async_object, xid = self._pending.popleft() if async_object: async_object.set_exception(exc) except IndexError: break while True: try: request, async_object = self._queue.popleft() if async_object: async_object.set_exception(exc) except IndexError: break def _safe_close(self): self.handler.stop() timeout = self._session_timeout // 1000 if timeout < 10: timeout = 10 if not self._connection.stop(timeout): raise WriterNotClosedException( "Writer still open from prior connection " "and wouldn't close after %s seconds" % timeout) def _call(self, request, async_object): """Ensure there's an active connection and put the request in the queue if there is.""" if self._state == KeeperState.AUTH_FAILED: async_object.set_exception(AuthFailedError()) return elif self._state == KeeperState.CLOSED: async_object.set_exception(ConnectionClosedError( "Connection has been closed")) return elif self._state in (KeeperState.EXPIRED_SESSION, KeeperState.CONNECTING): async_object.set_exception(SessionExpiredError()) return self._queue.append((request, async_object)) # wake the connection, guarding against a race with close() write_pipe = self._connection._write_pipe if write_pipe is None: async_object.set_exception(ConnectionClosedError( "Connection has been closed")) try: os.write(write_pipe, b'\0') except: async_object.set_exception(ConnectionClosedError( "Connection has been closed")) def start(self, timeout=15): """Initiate connection to ZK. :param timeout: Time in seconds to wait for connection to succeed. 
:raises: :attr:`~kazoo.interfaces.IHandler.timeout_exception` if the connection wasn't established within `timeout` seconds. """ event = self.start_async() event.wait(timeout=timeout) if not self.connected: # We time-out, ensure we are disconnected self.stop() raise self.handler.timeout_exception("Connection time-out") if self.chroot and not self.exists("/"): warnings.warn("No chroot path exists, the chroot path " "should be created before normal use.") def start_async(self): """Asynchronously initiate connection to ZK. :returns: An event object that can be checked to see if the connection is alive. :rtype: :class:`~threading.Event` compatible object. """ # If we're already connected, ignore if self._live.is_set(): return self._live # Make sure we're safely closed self._safe_close() # We've been asked to connect, clear the stop and our writer # thread indicator self._stopped.clear() self._writer_stopped.clear() # Start the handler self.handler.start() # Start the connection self._connection.start() return self._live def stop(self): """Gracefully stop this Zookeeper session. This method can be called while a reconnection attempt is in progress, which will then be halted. Once the connection is closed, its session becomes invalid. All the ephemeral nodes in the ZooKeeper server associated with the session will be removed. The watches left on those nodes (and on their parents) will be triggered. """ if self._stopped.is_set(): return self._stopped.set() self._queue.append((CloseInstance, None)) os.write(self._connection._write_pipe, b'\0') self._safe_close() def restart(self): """Stop and restart the Zookeeper session.""" self.stop() self.start() def close(self): """Free any resources held by the client. This method should be called on a stopped client before it is discarded. Not doing so may result in filehandles being leaked. .. versionadded:: 1.0 """ self._connection.close() def command(self, cmd=b'ruok'): """Sent a management command to the current ZK server. Examples are `ruok`, `envi` or `stat`. :returns: An unstructured textual response. :rtype: str :raises: :exc:`ConnectionLoss` if there is no connection open, or possibly a :exc:`socket.error` if there's a problem with the connection used just for this command. .. versionadded:: 0.5 """ if not self._live.is_set(): raise ConnectionLoss("No connection to server") sock = self.handler.socket() sock.settimeout(self._session_timeout) peer = self._connection._socket.getpeername() sock.connect(peer) sock.sendall(cmd) result = sock.recv(8192) sock.close() return result.decode('utf-8', 'replace') def server_version(self): """Get the version of the currently connected ZK server. :returns: The server version, for example (3, 4, 3). :rtype: tuple .. versionadded:: 0.5 """ data = self.command(b'envi') string = ENVI_VERSION.match(data).group(1) return tuple([int(i) for i in string.split('.')]) def add_auth(self, scheme, credential): """Send credentials to server. :param scheme: authentication scheme (default supported: "digest"). :param credential: the credential -- value depends on scheme. """ return self.add_auth_async(scheme, credential) def add_auth_async(self, scheme, credential): """Asynchronously send credentials to server. Takes the same arguments as :meth:`add_auth`. 
:rtype: :class:`~kazoo.interfaces.IAsyncResult` """ if not isinstance(scheme, basestring): raise TypeError("Invalid type for scheme") if not isinstance(credential, basestring): raise TypeError("Invalid type for credential") self._call(Auth(0, scheme, credential), None) return True def unchroot(self, path): """Strip the chroot if applicable from the path.""" if not self.chroot: return path if path.startswith(self.chroot): return path[len(self.chroot):] else: return path def sync_async(self, path): """Asynchronous sync. :rtype: :class:`~kazoo.interfaces.IAsyncResult` """ async_result = self.handler.async_result() self._call(Sync(_prefix_root(self.chroot, path)), async_result) return async_result def sync(self, path): """Sync, blocks until response is acknowledged. Flushes channel between process and leader. :param path: path of node. :returns: The node path that was synced. :raises: :exc:`~kazoo.exceptions.ZookeeperError` if the server returns a non-zero error code. .. versionadded:: 0.5 """ return self.sync_async(path).get() def create(self, path, value=b"", acl=None, ephemeral=False, sequence=False, makepath=False): """Create a node with the given value as its data. Optionally set an ACL on the node. The ephemeral and sequence arguments determine the type of the node. An ephemeral node will be automatically removed by ZooKeeper when the session associated with the creation of the node expires. A sequential node will be given the specified path plus a suffix `i` where i is the current sequential number of the node. The sequence number is always fixed length of 10 digits, 0 padded. Once such a node is created, the sequential number will be incremented by one. If a node with the same actual path already exists in ZooKeeper, a NodeExistsError will be raised. Note that since a different actual path is used for each invocation of creating sequential nodes with the same path argument, the call will never raise NodeExistsError. If the parent node does not exist in ZooKeeper, a NoNodeError will be raised. Setting the optional `makepath` argument to `True` will create all missing parent nodes instead. An ephemeral node cannot have children. If the parent node of the given path is ephemeral, a NoChildrenForEphemeralsError will be raised. This operation, if successful, will trigger all the watches left on the node of the given path by :meth:`exists` and :meth:`get` API calls, and the watches left on the parent node by :meth:`get_children` API calls. The maximum allowable size of the node value is 1 MB. Values larger than this will cause a ZookeeperError to be raised. :param path: Path of node. :param value: Initial bytes value of node. :param acl: :class:`~kazoo.security.ACL` list. :param ephemeral: Boolean indicating whether node is ephemeral (tied to this session). :param sequence: Boolean indicating whether path is suffixed with a unique index. :param makepath: Whether the path should be created if it doesn't exist. :returns: Real path of the new node. :rtype: str :raises: :exc:`~kazoo.exceptions.NodeExistsError` if the node already exists. :exc:`~kazoo.exceptions.NoNodeError` if parent nodes are missing. :exc:`~kazoo.exceptions.NoChildrenForEphemeralsError` if the parent node is an ephemeral node. :exc:`~kazoo.exceptions.ZookeeperError` if the provided value is too large. :exc:`~kazoo.exceptions.ZookeeperError` if the server returns a non-zero error code. 
""" return self.create_async(path, value, acl=acl, ephemeral=ephemeral, sequence=sequence, makepath=makepath).get() def create_async(self, path, value=b"", acl=None, ephemeral=False, sequence=False, makepath=False): """Asynchronously create a ZNode. Takes the same arguments as :meth:`create`. :rtype: :class:`~kazoo.interfaces.IAsyncResult` .. versionadded:: 1.1 The makepath option. """ if acl is None and self.default_acl: acl = self.default_acl if not isinstance(path, basestring): raise TypeError("path must be a string") if acl and (isinstance(acl, ACL) or not isinstance(acl, (tuple, list))): raise TypeError("acl must be a tuple/list of ACL's") if not isinstance(value, bytes): raise TypeError("value must be a byte string") if not isinstance(ephemeral, bool): raise TypeError("ephemeral must be a bool") if not isinstance(sequence, bool): raise TypeError("sequence must be a bool") if not isinstance(makepath, bool): raise TypeError("makepath must be a bool") flags = 0 if ephemeral: flags |= 1 if sequence: flags |= 2 if acl is None: acl = OPEN_ACL_UNSAFE async_result = self.handler.async_result() def do_create(): self._create_async_inner(path, value, acl, flags, trailing=sequence).rawlink(create_completion) @capture_exceptions(async_result) def retry_completion(result): result.get() do_create() @wrap(async_result) def create_completion(result): try: return self.unchroot(result.get()) except NoNodeError: if not makepath: raise if sequence and path.endswith('/'): parent = path.rstrip('/') else: parent, _ = split(path) self.ensure_path_async(parent, acl).rawlink(retry_completion) do_create() return async_result def _create_async_inner(self, path, value, acl, flags, trailing=False): async_result = self.handler.async_result() self._call(Create(_prefix_root(self.chroot, path, trailing=trailing), value, acl, flags), async_result) return async_result def ensure_path(self, path, acl=None): """Recursively create a path if it doesn't exist. :param path: Path of node. :param acl: Permissions for node. """ return self.ensure_path_async(path, acl).get() def ensure_path_async(self, path, acl=None): """Recursively create a path asynchronously if it doesn't exist. Takes the same arguments as :meth:`ensure_path`. :rtype: :class:`~kazoo.interfaces.IAsyncResult` .. versionadded:: 1.1 """ acl = acl or self.default_acl async_result = self.handler.async_result() @wrap(async_result) def create_completion(result): try: return result.get() except NodeExistsError: return True @capture_exceptions(async_result) def prepare_completion(next_path, result): result.get() self.create_async(next_path, acl=acl).rawlink(create_completion) @wrap(async_result) def exists_completion(path, result): if result.get(): return True parent, node = split(path) if node: self.ensure_path_async(parent, acl=acl).rawlink( partial(prepare_completion, path)) else: self.create_async(path, acl=acl).rawlink(create_completion) self.exists_async(path).rawlink(partial(exists_completion, path)) return async_result def exists(self, path, watch=None): """Check if a node exists. If a watch is provided, it will be left on the node with the given path. The watch will be triggered by a successful operation that creates/deletes the node or sets the data on the node. :param path: Path of node. :param watch: Optional watch callback to set for future changes to this path. :returns: ZnodeStat of the node if it exists, else None if the node does not exist. :rtype: :class:`~kazoo.protocol.states.ZnodeStat` or `None`. 
:raises: :exc:`~kazoo.exceptions.ZookeeperError` if the server returns a non-zero error code. """ return self.exists_async(path, watch).get() def exists_async(self, path, watch=None): """Asynchronously check if a node exists. Takes the same arguments as :meth:`exists`. :rtype: :class:`~kazoo.interfaces.IAsyncResult` """ if not isinstance(path, basestring): raise TypeError("path must be a string") if watch and not callable(watch): raise TypeError("watch must be a callable") async_result = self.handler.async_result() self._call(Exists(_prefix_root(self.chroot, path), watch), async_result) return async_result def get(self, path, watch=None): """Get the value of a node. If a watch is provided, it will be left on the node with the given path. The watch will be triggered by a successful operation that sets data on the node, or deletes the node. :param path: Path of node. :param watch: Optional watch callback to set for future changes to this path. :returns: Tuple (value, :class:`~kazoo.protocol.states.ZnodeStat`) of node. :rtype: tuple :raises: :exc:`~kazoo.exceptions.NoNodeError` if the node doesn't exist :exc:`~kazoo.exceptions.ZookeeperError` if the server returns a non-zero error code """ return self.get_async(path, watch).get() def get_async(self, path, watch=None): """Asynchronously get the value of a node. Takes the same arguments as :meth:`get`. :rtype: :class:`~kazoo.interfaces.IAsyncResult` """ if not isinstance(path, basestring): raise TypeError("path must be a string") if watch and not callable(watch): raise TypeError("watch must be a callable") async_result = self.handler.async_result() self._call(GetData(_prefix_root(self.chroot, path), watch), async_result) return async_result def get_children(self, path, watch=None, include_data=False): """Get a list of child nodes of a path. If a watch is provided it will be left on the node with the given path. The watch will be triggered by a successful operation that deletes the node of the given path or creates/deletes a child under the node. The list of children returned is not sorted and no guarantee is provided as to its natural or lexical order. :param path: Path of node to list. :param watch: Optional watch callback to set for future changes to this path. :param include_data: Include the :class:`~kazoo.protocol.states.ZnodeStat` of the node in addition to the children. This option changes the return value to be a tuple of (children, stat). :returns: List of child node names, or tuple if `include_data` is `True`. :rtype: list :raises: :exc:`~kazoo.exceptions.NoNodeError` if the node doesn't exist. :exc:`~kazoo.exceptions.ZookeeperError` if the server returns a non-zero error code. .. versionadded:: 0.5 The `include_data` option. """ return self.get_children_async(path, watch, include_data).get() def get_children_async(self, path, watch=None, include_data=False): """Asynchronously get a list of child nodes of a path. Takes the same arguments as :meth:`get_children`. 
:rtype: :class:`~kazoo.interfaces.IAsyncResult` """ if not isinstance(path, basestring): raise TypeError("path must be a string") if watch and not callable(watch): raise TypeError("watch must be a callable") if not isinstance(include_data, bool): raise TypeError("include_data must be a bool") async_result = self.handler.async_result() if include_data: req = GetChildren2(_prefix_root(self.chroot, path), watch) else: req = GetChildren(_prefix_root(self.chroot, path), watch) self._call(req, async_result) return async_result def get_acls(self, path): """Return the ACL and stat of the node of the given path. :param path: Path of the node. :returns: The ACL array of the given node and its :class:`~kazoo.protocol.states.ZnodeStat`. :rtype: tuple of (:class:`~kazoo.security.ACL` list, :class:`~kazoo.protocol.states.ZnodeStat`) :raises: :exc:`~kazoo.exceptions.NoNodeError` if the node doesn't exist. :exc:`~kazoo.exceptions.ZookeeperError` if the server returns a non-zero error code .. versionadded:: 0.5 """ return self.get_acls_async(path).get() def get_acls_async(self, path): """Return the ACL and stat of the node of the given path. Takes the same arguments as :meth:`get_acls`. :rtype: :class:`~kazoo.interfaces.IAsyncResult` """ if not isinstance(path, basestring): raise TypeError("path must be a string") async_result = self.handler.async_result() self._call(GetACL(_prefix_root(self.chroot, path)), async_result) return async_result def set_acls(self, path, acls, version=-1): """Set the ACL for the node of the given path. Set the ACL for the node of the given path if such a node exists and the given version matches the version of the node. :param path: Path for the node. :param acls: List of :class:`~kazoo.security.ACL` objects to set. :param version: The expected node version that must match. :returns: The stat of the node. :raises: :exc:`~kazoo.exceptions.BadVersionError` if version doesn't match. :exc:`~kazoo.exceptions.NoNodeError` if the node doesn't exist. :exc:`~kazoo.exceptions.InvalidACLError` if the ACL is invalid. :exc:`~kazoo.exceptions.ZookeeperError` if the server returns a non-zero error code. .. versionadded:: 0.5 """ return self.set_acls_async(path, acls, version).get() def set_acls_async(self, path, acls, version=-1): """Set the ACL for the node of the given path. Takes the same arguments as :meth:`set_acls`. :rtype: :class:`~kazoo.interfaces.IAsyncResult` """ if not isinstance(path, basestring): raise TypeError("path must be a string") if isinstance(acls, ACL) or not isinstance(acls, (tuple, list)): raise TypeError("acl must be a tuple/list of ACL's") if not isinstance(version, int): raise TypeError("version must be an int") async_result = self.handler.async_result() self._call(SetACL(_prefix_root(self.chroot, path), acls, version), async_result) return async_result def set(self, path, value, version=-1): """Set the value of a node. If the version of the node being updated is newer than the supplied version (and the supplied version is not -1), a BadVersionError will be raised. This operation, if successful, will trigger all the watches on the node of the given path left by :meth:`get` API calls. The maximum allowable size of the value is 1 MB. Values larger than this will cause a ZookeeperError to be raised. :param path: Path of node. :param value: New data value. :param version: Version of node being updated, or -1. :returns: Updated :class:`~kazoo.protocol.states.ZnodeStat` of the node. :raises: :exc:`~kazoo.exceptions.BadVersionError` if version doesn't match. 
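# --- Usage sketch (editorial addition, not part of the kazoo source) ----
# A version-checked read-modify-write using get()/set(), plus reading the
# node's ACL, as documented above.  Host and path are illustrative
# assumptions.
from kazoo.client import KazooClient
from kazoo.exceptions import BadVersionError

client = KazooClient(hosts="127.0.0.1:2181")
client.start()
client.create("/demo/counter", b"0", makepath=True)

data, stat = client.get("/demo/counter")
try:
    # Passing the observed version makes the update conditional; a
    # concurrent writer causes BadVersionError instead of a lost update.
    client.set("/demo/counter", b"1", version=stat.version)
except BadVersionError:
    print("someone else updated the node first")

acls, acl_stat = client.get_acls("/demo/counter")
print("ACLs: %r" % (acls,))

client.stop()
# -------------------------------------------------------------------------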
:exc:`~kazoo.exceptions.NoNodeError` if the node doesn't exist. :exc:`~kazoo.exceptions.ZookeeperError` if the provided value is too large. :exc:`~kazoo.exceptions.ZookeeperError` if the server returns a non-zero error code. """ return self.set_async(path, value, version).get() def set_async(self, path, value, version=-1): """Set the value of a node. Takes the same arguments as :meth:`set`. :rtype: :class:`~kazoo.interfaces.IAsyncResult` """ if not isinstance(path, basestring): raise TypeError("path must be a string") if not isinstance(value, bytes): raise TypeError("value must be a byte string") if not isinstance(version, int): raise TypeError("version must be an int") async_result = self.handler.async_result() self._call(SetData(_prefix_root(self.chroot, path), value, version), async_result) return async_result def transaction(self): """Create and return a :class:`TransactionRequest` object Creates a :class:`TransactionRequest` object. A Transaction can consist of multiple operations which can be committed as a single atomic unit. Either all of the operations will succeed or none of them. :returns: A TransactionRequest. :rtype: :class:`TransactionRequest` .. versionadded:: 0.6 Requires Zookeeper 3.4+ """ return TransactionRequest(self) def delete(self, path, version=-1, recursive=False): """Delete a node. The call will succeed if such a node exists, and the given version matches the node's version (if the given version is -1, the default, it matches any node's versions). This operation, if successful, will trigger all the watches on the node of the given path left by `exists` API calls, and the watches on the parent node left by `get_children` API calls. :param path: Path of node to delete. :param version: Version of node to delete, or -1 for any. :param recursive: Recursively delete node and all its children, defaults to False. :type recursive: bool :raises: :exc:`~kazoo.exceptions.BadVersionError` if version doesn't match. :exc:`~kazoo.exceptions.NoNodeError` if the node doesn't exist. :exc:`~kazoo.exceptions.NotEmptyError` if the node has children. :exc:`~kazoo.exceptions.ZookeeperError` if the server returns a non-zero error code. """ if not isinstance(recursive, bool): raise TypeError("recursive must be a bool") if recursive: return self._delete_recursive(path) else: return self.delete_async(path, version).get() def delete_async(self, path, version=-1): """Asynchronously delete a node. Takes the same arguments as :meth:`delete`, with the exception of `recursive`. :rtype: :class:`~kazoo.interfaces.IAsyncResult` """ if not isinstance(path, basestring): raise TypeError("path must be a string") if not isinstance(version, int): raise TypeError("version must be an int") async_result = self.handler.async_result() self._call(Delete(_prefix_root(self.chroot, path), version), async_result) return async_result def _delete_recursive(self, path): try: children = self.get_children(path) except NoNodeError: return True if children: for child in children: if path == "/": child_path = path + child else: child_path = path + "/" + child self._delete_recursive(child_path) try: self.delete(path) except NoNodeError: # pragma: nocover pass class TransactionRequest(object): """A Zookeeper Transaction Request A Transaction provides a builder object that can be used to construct and commit an atomic set of operations. The transaction must be committed before its sent. Transactions are not thread-safe and should not be accessed from multiple threads at once. .. 
versionadded:: 0.6 Requires Zookeeper 3.4+ """ def __init__(self, client): self.client = client self.operations = [] self.committed = False def create(self, path, value=b"", acl=None, ephemeral=False, sequence=False): """Add a create ZNode to the transaction. Takes the same arguments as :meth:`KazooClient.create`, with the exception of `makepath`. :returns: None """ if acl is None and self.client.default_acl: acl = self.client.default_acl if not isinstance(path, basestring): raise TypeError("path must be a string") if acl and not isinstance(acl, (tuple, list)): raise TypeError("acl must be a tuple/list of ACL's") if not isinstance(value, bytes): raise TypeError("value must be a byte string") if not isinstance(ephemeral, bool): raise TypeError("ephemeral must be a bool") if not isinstance(sequence, bool): raise TypeError("sequence must be a bool") flags = 0 if ephemeral: flags |= 1 if sequence: flags |= 2 if acl is None: acl = OPEN_ACL_UNSAFE self._add(Create(_prefix_root(self.client.chroot, path), value, acl, flags), None) def delete(self, path, version=-1): """Add a delete ZNode to the transaction. Takes the same arguments as :meth:`KazooClient.delete`, with the exception of `recursive`. """ if not isinstance(path, basestring): raise TypeError("path must be a string") if not isinstance(version, int): raise TypeError("version must be an int") self._add(Delete(_prefix_root(self.client.chroot, path), version)) def set_data(self, path, value, version=-1): """Add a set ZNode value to the transaction. Takes the same arguments as :meth:`KazooClient.set`. """ if not isinstance(path, basestring): raise TypeError("path must be a string") if not isinstance(value, bytes): raise TypeError("value must be a byte string") if not isinstance(version, int): raise TypeError("version must be an int") self._add(SetData(_prefix_root(self.client.chroot, path), value, version)) def check(self, path, version): """Add a Check Version to the transaction. This command will fail and abort a transaction if the path does not match the specified version. """ if not isinstance(path, basestring): raise TypeError("path must be a string") if not isinstance(version, int): raise TypeError("version must be an int") self._add(CheckVersion(_prefix_root(self.client.chroot, path), version)) def commit_async(self): """Commit the transaction asynchronously. :rtype: :class:`~kazoo.interfaces.IAsyncResult` """ self._check_tx_state() self.committed = True async_object = self.client.handler.async_result() self.client._call(Transaction(self.operations), async_object) return async_object def commit(self): """Commit the transaction. :returns: A list of the results for each operation in the transaction. 
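# --- Usage sketch (editorial addition, not part of the kazoo source) ----
# Builds and commits a transaction with the TransactionRequest methods
# defined above.  Requires Zookeeper 3.4+; the host string and paths are
# illustrative assumptions.
from kazoo.client import KazooClient

client = KazooClient(hosts="127.0.0.1:2181")
client.start()
client.ensure_path("/demo")
parent_stat = client.exists("/demo")

# Explicit commit(): returns one result per queued operation.
tx = client.transaction()
tx.create("/demo/tx-node", b"a")
tx.set_data("/demo/tx-node", b"b")
tx.check("/demo", parent_stat.version)
results = tx.commit()
print(results)

# Or rely on the context manager, which commits on a clean exit.
with client.transaction() as tx:
    tx.delete("/demo/tx-node")

client.stop()
# -------------------------------------------------------------------------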
""" return self.commit_async().get() def __enter__(self): return self def __exit__(self, exc_type, exc_value, exc_tb): """Commit and cleanup accumulated transaction data.""" if not exc_type: self.commit() def _check_tx_state(self): if self.committed: raise ValueError('Transaction already committed') def _add(self, request, post_processor=None): self._check_tx_state() self.client.logger.debug('Added %r to %r', request, self) self.operations.append(request) kazoo-1.2.1/kazoo/exceptions.py000066400000000000000000000100371217652145400165240ustar00rootroot00000000000000"""Kazoo Exceptions""" from collections import defaultdict class KazooException(Exception): """Base Kazoo exception that all other kazoo library exceptions inherit from""" class ZookeeperError(KazooException): """Base Zookeeper exception for errors originating from the Zookeeper server""" class CancelledError(KazooException): """Raised when a process is cancelled by another thread""" class ConfigurationError(KazooException): """Raised if the configuration arguments to an object are invalid""" class ZookeeperStoppedError(KazooException): """Raised when the kazoo client stopped (and thus not connected)""" class ConnectionDropped(KazooException): """Internal error for jumping out of loops""" class LockTimeout(KazooException): """Raised if failed to acquire a lock. .. versionadded:: 1.1 """ class WriterNotClosedException(KazooException): """Raised if the writer is unable to stop closing when requested. .. versionadded:: 1.2 """ def _invalid_error_code(): raise RuntimeError('Invalid error code') EXCEPTIONS = defaultdict(_invalid_error_code) def _zookeeper_exception(code): def decorator(klass): def create(*args, **kwargs): return klass(args, kwargs) EXCEPTIONS[code] = create klass.code = code return klass return decorator @_zookeeper_exception(0) class RolledBackError(ZookeeperError): pass @_zookeeper_exception(-1) class SystemZookeeperError(ZookeeperError): pass @_zookeeper_exception(-2) class RuntimeInconsistency(ZookeeperError): pass @_zookeeper_exception(-3) class DataInconsistency(ZookeeperError): pass @_zookeeper_exception(-4) class ConnectionLoss(ZookeeperError): pass @_zookeeper_exception(-5) class MarshallingError(ZookeeperError): pass @_zookeeper_exception(-6) class UnimplementedError(ZookeeperError): pass @_zookeeper_exception(-7) class OperationTimeoutError(ZookeeperError): pass @_zookeeper_exception(-8) class BadArgumentsError(ZookeeperError): pass @_zookeeper_exception(-100) class APIError(ZookeeperError): pass @_zookeeper_exception(-101) class NoNodeError(ZookeeperError): pass @_zookeeper_exception(-102) class NoAuthError(ZookeeperError): pass @_zookeeper_exception(-103) class BadVersionError(ZookeeperError): pass @_zookeeper_exception(-108) class NoChildrenForEphemeralsError(ZookeeperError): pass @_zookeeper_exception(-110) class NodeExistsError(ZookeeperError): pass @_zookeeper_exception(-111) class NotEmptyError(ZookeeperError): pass @_zookeeper_exception(-112) class SessionExpiredError(ZookeeperError): pass @_zookeeper_exception(-113) class InvalidCallbackError(ZookeeperError): pass @_zookeeper_exception(-114) class InvalidACLError(ZookeeperError): pass @_zookeeper_exception(-115) class AuthFailedError(ZookeeperError): pass @_zookeeper_exception(-118) class SessionMovedError(ZookeeperError): pass @_zookeeper_exception(-119) class NotReadOnlyCallError(ZookeeperError): """An API call that is not read-only was used while connected to a read-only server""" class ConnectionClosedError(SessionExpiredError): """Connection 
is closed""" # BW Compat aliases for C lib style exceptions ConnectionLossException = ConnectionLoss MarshallingErrorException = MarshallingError SystemErrorException = SystemZookeeperError RuntimeInconsistencyException = RuntimeInconsistency DataInconsistencyException = DataInconsistency UnimplementedException = UnimplementedError OperationTimeoutException = OperationTimeoutError BadArgumentsException = BadArgumentsError ApiErrorException = APIError NoNodeException = NoNodeError NoAuthException = NoAuthError BadVersionException = BadVersionError NoChildrenForEphemeralsException = NoChildrenForEphemeralsError NodeExistsException = NodeExistsError InvalidACLException = InvalidACLError AuthFailedException = AuthFailedError NotEmptyException = NotEmptyError SessionExpiredException = SessionExpiredError InvalidCallbackException = InvalidCallbackError kazoo-1.2.1/kazoo/handlers/000077500000000000000000000000001217652145400155705ustar00rootroot00000000000000kazoo-1.2.1/kazoo/handlers/__init__.py000066400000000000000000000000021217652145400176710ustar00rootroot00000000000000# kazoo-1.2.1/kazoo/handlers/gevent.py000066400000000000000000000112611217652145400174330ustar00rootroot00000000000000"""A gevent based handler.""" from __future__ import absolute_import import atexit import logging import gevent import gevent.coros import gevent.event import gevent.queue import gevent.select import gevent.thread from gevent.queue import Empty from gevent.queue import Queue from gevent import socket from zope.interface import implementer from kazoo.handlers.utils import create_tcp_socket from kazoo.interfaces import IAsyncResult from kazoo.interfaces import IHandler _using_libevent = gevent.__version__.startswith('0.') log = logging.getLogger(__name__) _STOP = object() AsyncResult = implementer(IAsyncResult)(gevent.event.AsyncResult) @implementer(IHandler) class SequentialGeventHandler(object): """Gevent handler for sequentially executing callbacks. This handler executes callbacks in a sequential manner. A queue is created for each of the callback events, so that each type of event has its callback type run sequentially. Each queue type has a greenlet worker that pulls the callback event off the queue and runs it in the order the client sees it. This split helps ensure that watch callbacks won't block session re-establishment should the connection be lost during a Zookeeper client call. Watch callbacks should avoid blocking behavior as the next callback of that type won't be run until it completes. If you need to block, spawn a new greenlet and return immediately so callbacks can proceed. 
""" name = "sequential_gevent_handler" sleep_func = staticmethod(gevent.sleep) def __init__(self): """Create a :class:`SequentialGeventHandler` instance""" self.callback_queue = Queue() self._running = False self._async = None self._state_change = gevent.coros.Semaphore() self._workers = [] atexit.register(self.stop) class timeout_exception(gevent.event.Timeout): def __init__(self, msg): gevent.event.Timeout.__init__(self, exception=msg) def _create_greenlet_worker(self, queue): def greenlet_worker(): while True: try: func = queue.get() if func is _STOP: break func() except Empty: continue except Exception as exc: log.warning("Exception in worker greenlet") log.exception(exc) return gevent.spawn(greenlet_worker) def start(self): """Start the greenlet workers.""" with self._state_change: if self._running: return self._running = True # Spawn our worker greenlets, we have # - A callback worker for watch events to be called for queue in (self.callback_queue,): w = self._create_greenlet_worker(queue) self._workers.append(w) def stop(self): """Stop the greenlet workers and empty all queues.""" with self._state_change: if not self._running: return self._running = False for queue in (self.callback_queue,): queue.put(_STOP) while self._workers: worker = self._workers.pop() worker.join() # Clear the queues self.callback_queue = Queue() # pragma: nocover def select(self, *args, **kwargs): return gevent.select.select(*args, **kwargs) def socket(self, *args, **kwargs): return create_tcp_socket(socket) def event_object(self): """Create an appropriate Event object""" return gevent.event.Event() def lock_object(self): """Create an appropriate Lock object""" return gevent.thread.allocate_lock() def rlock_object(self): """Create an appropriate RLock object""" return gevent.coros.RLock() def async_result(self): """Create a :class:`AsyncResult` instance The :class:`AsyncResult` instance will have its completion callbacks executed in the thread the :class:`SequentialGeventHandler` is created in (which should be the gevent/main thread). """ return AsyncResult() def spawn(self, func, *args, **kwargs): """Spawn a function to run asynchronously""" return gevent.spawn(func, *args, **kwargs) def dispatch_callback(self, callback): """Dispatch to the callback object The callback is put on separate queues to run depending on the type as documented for the :class:`SequentialGeventHandler`. """ self.callback_queue.put(lambda: callback.func(*callback.args)) kazoo-1.2.1/kazoo/handlers/threading.py000066400000000000000000000211421217652145400201070ustar00rootroot00000000000000"""A threading based handler. The :class:`SequentialThreadingHandler` is intended for regular Python environments that use threads. .. warning:: Do not use :class:`SequentialThreadingHandler` with applications using asynchronous event loops (like gevent). Use the :class:`~kazoo.handlers.gevent.SequentialGeventHandler` instead. 
""" from __future__ import absolute_import import atexit import logging import select import socket import threading import time try: import Queue except ImportError: # pragma: nocover import queue as Queue from zope.interface import implementer from kazoo.handlers.utils import create_tcp_socket from kazoo.interfaces import IAsyncResult from kazoo.interfaces import IHandler # sentinel objects _NONE = object() _STOP = object() log = logging.getLogger(__name__) class TimeoutError(Exception): pass @implementer(IAsyncResult) class AsyncResult(object): """A one-time event that stores a value or an exception""" def __init__(self, handler): self._handler = handler self.value = None self._exception = _NONE self._condition = threading.Condition() self._callbacks = [] def ready(self): """Return true if and only if it holds a value or an exception""" return self._exception is not _NONE def successful(self): """Return true if and only if it is ready and holds a value""" return self._exception is None @property def exception(self): if self._exception is not _NONE: return self._exception def set(self, value=None): """Store the value. Wake up the waiters.""" with self._condition: self.value = value self._exception = None for callback in self._callbacks: self._handler.completion_queue.put( lambda: callback(self) ) self._condition.notify_all() def set_exception(self, exception): """Store the exception. Wake up the waiters.""" with self._condition: self._exception = exception for callback in self._callbacks: self._handler.completion_queue.put( lambda: callback(self) ) self._condition.notify_all() def get(self, block=True, timeout=None): """Return the stored value or raise the exception. If there is no value raises TimeoutError. """ with self._condition: if self._exception is not _NONE: if self._exception is None: return self.value raise self._exception elif block: self._condition.wait(timeout) if self._exception is not _NONE: if self._exception is None: return self.value raise self._exception # if we get to this point we timeout raise TimeoutError() def get_nowait(self): """Return the value or raise the exception without blocking. If nothing is available, raises TimeoutError """ return self.get(block=False) def wait(self, timeout=None): """Block until the instance is ready.""" with self._condition: self._condition.wait(timeout) return self._exception is not _NONE def rawlink(self, callback): """Register a callback to call when a value or an exception is set""" with self._condition: # Are we already set? Dispatch it now if self.ready(): self._handler.completion_queue.put( lambda: callback(self) ) return if callback not in self._callbacks: self._callbacks.append(callback) def unlink(self, callback): """Remove the callback set by :meth:`rawlink`""" with self._condition: if self.ready(): # Already triggered, ignore return if callback in self._callbacks: self._callbacks.remove(callback) @implementer(IHandler) class SequentialThreadingHandler(object): """Threading handler for sequentially executing callbacks. This handler executes callbacks in a sequential manner. A queue is created for each of the callback events, so that each type of event has its callback type run sequentially. These are split into two queues, one for watch events and one for async result completion callbacks. Each queue type has a thread worker that pulls the callback event off the queue and runs it in the order the client sees it. 
This split helps ensure that watch callbacks won't block session re-establishment should the connection be lost during a Zookeeper client call. Watch and completion callbacks should avoid blocking behavior as the next callback of that type won't be run until it completes. If you need to block, spawn a new thread and return immediately so callbacks can proceed. .. note:: Completion callbacks can block to wait on Zookeeper calls, but no other completion callbacks will execute until the callback returns. """ name = "sequential_threading_handler" timeout_exception = TimeoutError sleep_func = staticmethod(time.sleep) def __init__(self): """Create a :class:`SequentialThreadingHandler` instance""" self.callback_queue = Queue.Queue() self.completion_queue = Queue.Queue() self._running = False self._state_change = threading.Lock() self._workers = [] atexit.register(self.stop) def _create_thread_worker(self, queue): def thread_worker(): # pragma: nocover while True: try: func = queue.get() try: if func is _STOP: break func() except Exception: log.exception("Exception in worker queue thread") finally: queue.task_done() except Queue.Empty: continue t = threading.Thread(target=thread_worker) # Even though these should be joined, it's possible stop might # not issue in time so we set them to daemon to let the program # exit anyways t.daemon = True t.start() return t def start(self): """Start the worker threads.""" with self._state_change: if self._running: return # Spawn our worker threads, we have # - A callback worker for watch events to be called # - A completion worker for completion events to be called for queue in (self.completion_queue, self.callback_queue): w = self._create_thread_worker(queue) self._workers.append(w) self._running = True def stop(self): """Stop the worker threads and empty all queues.""" with self._state_change: if not self._running: return self._running = False for queue in (self.completion_queue, self.callback_queue): queue.put(_STOP) self._workers.reverse() while self._workers: worker = self._workers.pop() worker.join() # Clear the queues self.callback_queue = Queue.Queue() self.completion_queue = Queue.Queue() def select(self, *args, **kwargs): return select.select(*args, **kwargs) def socket(self): return create_tcp_socket(socket) def event_object(self): """Create an appropriate Event object""" return threading.Event() def lock_object(self): """Create a lock object""" return threading.Lock() def rlock_object(self): """Create an appropriate RLock object""" return threading.RLock() def async_result(self): """Create a :class:`AsyncResult` instance""" return AsyncResult(self) def spawn(self, func, *args, **kwargs): t = threading.Thread(target=func, args=args, kwargs=kwargs) t.daemon = True t.start() return t def dispatch_callback(self, callback): """Dispatch to the callback object The callback is put on separate queues to run depending on the type as documented for the :class:`SequentialThreadingHandler`. """ self.callback_queue.put(lambda: callback.func(*callback.args)) kazoo-1.2.1/kazoo/handlers/utils.py000066400000000000000000000040671217652145400173110ustar00rootroot00000000000000"""Kazoo handler helpers""" HAS_FNCTL = True try: import fcntl except ImportError: # pragma: nocover HAS_FNCTL = False import functools import os def create_pipe(): """Create a non-blocking read/write pipe. 
""" r, w = os.pipe() if HAS_FNCTL: fcntl.fcntl(r, fcntl.F_SETFL, os.O_NONBLOCK) fcntl.fcntl(w, fcntl.F_SETFL, os.O_NONBLOCK) return r, w def create_tcp_socket(module): """Create a TCP socket with the CLOEXEC flag set. """ type_ = module.SOCK_STREAM if hasattr(module, 'SOCK_CLOEXEC'): # pragma: nocover # if available, set cloexec flag during socket creation type_ != module.SOCK_CLOEXEC sock = module.socket(module.AF_INET, type_) sock.setsockopt(module.IPPROTO_TCP, module.TCP_NODELAY, 1) if HAS_FNCTL: flags = fcntl.fcntl(sock, fcntl.F_GETFD) fcntl.fcntl(sock, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC) return sock def capture_exceptions(async_result): """Return a new decorated function that propagates the exceptions of the wrapped function to an async_result. :param async_result: An async result implementing :class:`IAsyncResult` """ def capture(function): @functools.wraps(function) def captured_function(*args, **kwargs): try: return function(*args, **kwargs) except Exception as exc: async_result.set_exception(exc) return captured_function return capture def wrap(async_result): """Return a new decorated function that propagates the return value or exception of wrapped function to an async_result. NOTE: Only propagates a non-None return value. :param async_result: An async result implementing :class:`IAsyncResult` """ def capture(function): @capture_exceptions(async_result) def captured_function(*args, **kwargs): value = function(*args, **kwargs) if value is not None: async_result.set(value) return value return captured_function return capture kazoo-1.2.1/kazoo/hosts.py000066400000000000000000000022471217652145400155070ustar00rootroot00000000000000import random class HostIterator(object): """An iterator that returns selected hosts in order. A host is guaranteed to not be selected twice unless there is only one host in the collection. """ def __init__(self, hosts): self.hosts = hosts def __iter__(self): for host in self.hosts[:]: yield host def __len__(self): return len(self.hosts) class RandomHostIterator(HostIterator): """An iterator that returns a randomly selected host.""" def __iter__(self): hostslist = self.hosts[:] random.shuffle(hostslist) for host in hostslist: yield host def collect_hosts(hosts, randomize=True): """Collect a set of hosts and an optional chroot from a string.""" host_ports, chroot = hosts.partition("/")[::2] chroot = "/" + chroot if chroot else None result = [] for host_port in host_ports.split(","): host, port = host_port.partition(":")[::2] port = int(port.strip()) if port else 2181 result.append((host.strip(), port)) if randomize: return (RandomHostIterator(result), chroot) return (HostIterator(result), chroot) kazoo-1.2.1/kazoo/interfaces.py000066400000000000000000000135731217652145400164760ustar00rootroot00000000000000"""Kazoo Interfaces""" from zope.interface import ( Attribute, Interface, ) # public API class IHandler(Interface): """A Callback Handler for Zookeeper completion and watch callbacks This object must implement several methods responsible for determining how completion / watch callbacks are handled as well as the method for calling :class:`IAsyncResult` callback functions. These functions are used to abstract differences between a Python threading environment and asynchronous single-threaded environments like gevent. The minimum functionality needed for Kazoo to handle these differences is encompassed in this interface. 
The Handler should document how callbacks are called for: * Zookeeper completion events * Zookeeper watch events """ name = Attribute( """Human readable name of the Handler interface""") timeout_exception = Attribute( """Exception class that should be thrown and captured if a result is not available within the given time""") sleep_func = Attribute( """Appropriate sleep function that can be called with a single argument and sleep.""") def start(): """Start the handler, used for setting up the handler.""" def stop(): """Stop the handler. Should block until the handler is safely stopped.""" def select(): """A select method that implements Python's select.select API""" def socket(): """A socket method that implements Python's socket.socket API""" def event_object(): """Return an appropriate object that implements Python's threading.Event API""" def lock_object(): """Return an appropriate object that implements Python's threading.Lock API""" def rlock_object(): """Return an appropriate object that implements Python's threading.RLock API""" def async_result(): """Return an instance that conforms to the :class:`~IAsyncResult` interface appropriate for this handler""" def spawn(func, *args, **kwargs): """Spawn a function to run asynchronously :param args: args to call the function with. :param kwargs: keyword args to call the function with. This method should return immediately and execute the function with the provided args and kwargs in an asynchronous manner. """ def dispatch_callback(callback): """Dispatch to the callback object :param callback: A :class:`~kazoo.protocol.states.Callback` object to be called. """ class IAsyncResult(Interface): """An Async Result object that can be queried for a value that has been set asynchronously This object is modeled on the ``gevent`` AsyncResult object. The implementation must account for the fact that the :meth:`set` and :meth:`set_exception` methods will be called from within the Zookeeper thread which may require extra care under asynchronous environments. """ value = Attribute( """Holds the value passed to :meth:`set` if :meth:`set` was called. Otherwise `None`""") exception = Attribute( """Holds the exception instance passed to :meth:`set_exception` if :meth:`set_exception` was called. Otherwise `None`""") def ready(): """Return `True` if and only if it holds a value or an exception""" def successful(): """Return `True` if and only if it is ready and holds a value""" def set(value=None): """Store the value. Wake up the waiters. :param value: Value to store as the result. Any waiters blocking on :meth:`get` or :meth:`wait` are woken up. Sequential calls to :meth:`wait` and :meth:`get` will not block at all.""" def set_exception(exception): """Store the exception. Wake up the waiters. :param exception: Exception to raise when fetching the value. Any waiters blocking on :meth:`get` or :meth:`wait` are woken up. Sequential calls to :meth:`wait` and :meth:`get` will not block at all.""" def get(block=True, timeout=None): """Return the stored value or raise the exception :param block: Whether this method should block or return immediately. :type block: bool :param timeout: How long to wait for a value when `block` is `True`. :type timeout: float If this instance already holds a value / an exception, return / raise it immediately. Otherwise, block until :meth:`set` or :meth:`set_exception` has been called or until the optional timeout occurs.""" def get_nowait(): """Return the value or raise the exception without blocking. 
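# --- Usage sketch (editorial addition, not part of the kazoo source) ----
# Consuming the IAsyncResult objects returned by the *_async client
# methods described by this interface: rawlink() callbacks receive the
# async result itself, and get() returns the value or re-raises the stored
# exception.  The host string and path are illustrative assumptions.
from kazoo.client import KazooClient
from kazoo.exceptions import NoNodeError

client = KazooClient(hosts="127.0.0.1:2181")
client.start()

async_result = client.get_async("/some/node")

def on_done(result):
    try:
        value, stat = result.get()
        print("fetched %d bytes" % len(value))
    except NoNodeError:
        print("node does not exist")

async_result.rawlink(on_done)

# The same object can also be waited on synchronously.
async_result.wait(timeout=5)
client.stop()
# -------------------------------------------------------------------------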
If nothing is available, raise the Timeout exception class on the associated :class:`IHandler` interface.""" def wait(timeout=None): """Block until the instance is ready. :param timeout: How long to wait for a value when `block` is `True`. :type timeout: float If this instance already holds a value / an exception, return / raise it immediately. Otherwise, block until :meth:`set` or :meth:`set_exception` has been called or until the optional timeout occurs.""" def rawlink(callback): """Register a callback to call when a value or an exception is set :param callback: A callback function to call after :meth:`set` or :meth:`set_exception` has been called. This function will be passed a single argument, this instance. :type callback: func """ def unlink(callback): """Remove the callback set by :meth:`rawlink` :param callback: A callback function to remove. :type callback: func """ kazoo-1.2.1/kazoo/protocol/000077500000000000000000000000001217652145400156315ustar00rootroot00000000000000kazoo-1.2.1/kazoo/protocol/__init__.py000066400000000000000000000000021217652145400177320ustar00rootroot00000000000000# kazoo-1.2.1/kazoo/protocol/connection.py000066400000000000000000000546541217652145400203600ustar00rootroot00000000000000"""Zookeeper Protocol Connection Handler""" import logging import itertools import os import random import select import socket import sys import time from binascii import hexlify from contextlib import contextmanager from kazoo.exceptions import ( AuthFailedError, ConnectionDropped, EXCEPTIONS, SessionExpiredError, NoNodeError ) from kazoo.handlers.utils import create_pipe from kazoo.protocol.serialization import ( Auth, Close, Connect, Exists, GetChildren, Ping, PingInstance, ReplyHeader, Transaction, Watch, int_struct ) from kazoo.protocol.states import ( Callback, KeeperState, WatchedEvent, EVENT_TYPE_MAP, ) from kazoo.retry import ( ForceRetryError, RetryFailedError ) log = logging.getLogger(__name__) # Special testing hook objects used to force a session expired error as # if it came from the server _SESSION_EXPIRED = object() _CONNECTION_DROP = object() STOP_CONNECTING = object() CREATED_EVENT = 1 DELETED_EVENT = 2 CHANGED_EVENT = 3 CHILD_EVENT = 4 WATCH_XID = -1 PING_XID = -2 AUTH_XID = -4 CLOSE_RESPONSE = Close.type if sys.version_info > (3, ): # pragma: nocover def buffer(obj, offset=0): return memoryview(obj)[offset:] advance_iterator = next else: # pragma: nocover def advance_iterator(it): return it.next() class RWPinger(object): """A Read/Write Server Pinger Iterable This object is initialized with the hosts iterator object and the socket creation function. Anytime `next` is called on its iterator it yields either False, or a host, port tuple if it found a r/w capable Zookeeper node. After the first run-through of hosts, an exponential back-off delay is added before the next run. This delay is tracked internally and the iterator will yield False if called too soon. 
""" def __init__(self, hosts, socket_func, socket_handling): self.hosts = hosts self.socket = socket_func self.last_attempt = None self.socket_handling = socket_handling def __iter__(self): if not self.last_attempt: self.last_attempt = time.time() delay = 0.5 while True: yield self._next_server(delay) def _next_server(self, delay): jitter = random.randint(0, 100) / 100.0 while time.time() < self.last_attempt + delay + jitter: # Skip rw ping checks if its too soon return False for host, port in self.hosts: sock = self.socket() log.debug("Pinging server for r/w: %s:%s", host, port) self.last_attempt = time.time() try: with self.socket_handling(): sock.connect((host, port)) sock.sendall(b"isro") result = sock.recv(8192) sock.close() if result == b'rw': return (host, port) else: return False except ConnectionDropped: return False # Add some jitter between host pings while time.time() < self.last_attempt + jitter: return False delay *= 2 class RWServerAvailable(Exception): """Thrown if a RW Server becomes available""" class ConnectionHandler(object): """Zookeeper connection handler""" def __init__(self, client, retry_sleeper, logger=None): self.client = client self.handler = client.handler self.retry_sleeper = retry_sleeper self.logger = logger or log # Our event objects self.connection_closed = client.handler.event_object() self.connection_closed.set() self.connection_stopped = client.handler.event_object() self.connection_stopped.set() self.ping_outstanding = client.handler.event_object() self._read_pipe = None self._write_pipe = None self._socket = None self._xid = None self._rw_server = None self._ro_mode = False self._connection_routine = None # This is instance specific to avoid odd thread bug issues in Python # during shutdown global cleanup @contextmanager def _socket_error_handling(self): try: yield except (socket.error, select.error) as e: err = getattr(e, 'strerror', e) raise ConnectionDropped("socket connection error: %s" % (err,)) def start(self): """Start the connection up""" if self.connection_closed.is_set(): self._read_pipe, self._write_pipe = create_pipe() self.connection_closed.clear() if self._connection_routine: raise Exception("Unable to start, connection routine already " "active.") self._connection_routine = self.handler.spawn(self.zk_loop) def stop(self, timeout=None): """Ensure the writer has stopped, wait to see if it does.""" self.connection_stopped.wait(timeout) if self._connection_routine: self._connection_routine.join() self._connection_routine = None return self.connection_stopped.is_set() def close(self): """Release resources held by the connection The connection can be restarted afterwards. 
""" if not self.connection_stopped.is_set(): raise Exception("Cannot close connection until it is stopped") self.connection_closed.set() wp, rp = self._write_pipe, self._read_pipe self._write_pipe = self._read_pipe = None os.close(wp) os.close(rp) def _server_pinger(self): """Returns a server pinger iterable, that will ping the next server in the list, and apply a back-off between attempts.""" return RWPinger(self.client.hosts, self.handler.socket, self._socket_error_handling) def _read_header(self, timeout): b = self._read(4, timeout) length = int_struct.unpack(b)[0] b = self._read(length, timeout) header, offset = ReplyHeader.deserialize(b, 0) return header, b, offset def _read(self, length, timeout): msgparts = [] remaining = length with self._socket_error_handling(): while remaining > 0: s = self.handler.select([self._socket], [], [], timeout)[0] if not s: # pragma: nocover # If the read list is empty, we got a timeout. We don't # have to check wlist and xlist as we don't set any raise self.handler.timeout_exception("socket time-out") chunk = self._socket.recv(remaining) if chunk == b'': raise ConnectionDropped('socket connection broken') msgparts.append(chunk) remaining -= len(chunk) return b"".join(msgparts) def _invoke(self, timeout, request, xid=None): """A special writer used during connection establishment only""" self._submit(request, timeout, xid) zxid = None if xid: header, buffer, offset = self._read_header(timeout) if header.xid != xid: raise RuntimeError('xids do not match, expected %r received %r', xid, header.xid) if header.zxid > 0: zxid = header.zxid if header.err: callback_exception = EXCEPTIONS[header.err]() self.logger.info('Received error(xid=%s) %r', xid, callback_exception) raise callback_exception return zxid msg = self._read(4, timeout) length = int_struct.unpack(msg)[0] msg = self._read(length, timeout) if hasattr(request, 'deserialize'): try: obj, _ = request.deserialize(msg, 0) except Exception: self.logger.exception("Exception raised during deserialization" " of request: %s", request) # raise ConnectionDropped so connect loop will retry raise ConnectionDropped('invalid server response') self.logger.debug('Read response %s', obj) return obj, zxid return zxid def _submit(self, request, timeout, xid=None): """Submit a request object with a timeout value and optional xid""" b = bytearray() if xid: b.extend(int_struct.pack(xid)) if request.type: b.extend(int_struct.pack(request.type)) b += request.serialize() self.logger.log((logging.DEBUG if isinstance(request, Ping) else logging.INFO), "Sending request(xid=%s): %s", xid, request) self._write(int_struct.pack(len(b)) + b, timeout) def _write(self, msg, timeout): """Write a raw msg to the socket""" sent = 0 msg_length = len(msg) with self._socket_error_handling(): while sent < msg_length: s = self.handler.select([], [self._socket], [], timeout)[1] if not s: # pragma: nocover # If the write list is empty, we got a timeout. 
We don't # have to check rlist and xlist as we don't set any raise self.handler.timeout_exception("socket time-out") msg_slice = buffer(msg, sent) bytes_sent = self._socket.send(msg_slice) if not bytes_sent: raise ConnectionDropped('socket connection broken') sent += bytes_sent def _read_watch_event(self, buffer, offset): client = self.client watch, offset = Watch.deserialize(buffer, offset) path = watch.path self.logger.info('Received EVENT: %s', watch) watchers = [] if watch.type in (CREATED_EVENT, CHANGED_EVENT): watchers.extend(client._data_watchers.pop(path, [])) elif watch.type == DELETED_EVENT: watchers.extend(client._data_watchers.pop(path, [])) watchers.extend(client._child_watchers.pop(path, [])) elif watch.type == CHILD_EVENT: watchers.extend(client._child_watchers.pop(path, [])) else: self.logger.warn('Received unknown event %r', watch.type) return # Strip the chroot if needed path = client.unchroot(path) ev = WatchedEvent(EVENT_TYPE_MAP[watch.type], client._state, path) # Last check to ignore watches if we've been stopped if client._stopped.is_set(): return # Dump the watchers to the watch thread for watch in watchers: client.handler.dispatch_callback(Callback('watch', watch, (ev,))) def _read_response(self, header, buffer, offset): client = self.client request, async_object, xid = client._pending.popleft() if header.zxid and header.zxid > 0: client.last_zxid = header.zxid if header.xid != xid: raise RuntimeError('xids do not match, expected %r ' 'received %r', xid, header.xid) # Determine if its an exists request and a no node error exists_error = (header.err == NoNodeError.code and request.type == Exists.type) # Set the exception if its not an exists error if header.err and not exists_error: callback_exception = EXCEPTIONS[header.err]() self.logger.info('Received error(xid=%s) %r', xid, callback_exception) if async_object: async_object.set_exception(callback_exception) elif request and async_object: if exists_error: # It's a NoNodeError, which is fine for an exists # request async_object.set(None) else: try: response = request.deserialize(buffer, offset) except Exception as exc: self.logger.exception("Exception raised during deserialization" " of request: %s", request) async_object.set_exception(exc) return self.logger.info('Received response(xid=%s): %r', xid, response) # We special case a Transaction as we have to unchroot things if request.type == Transaction.type: response = Transaction.unchroot(client, response) async_object.set(response) # Determine if watchers should be registered watcher = getattr(request, 'watcher', None) if not client._stopped.is_set() and watcher: if isinstance(request, GetChildren): client._child_watchers[request.path].add(watcher) else: client._data_watchers[request.path].add(watcher) if isinstance(request, Close): self.logger.debug('Read close response') return CLOSE_RESPONSE def _read_socket(self, read_timeout): """Called when there's something to read on the socket""" client = self.client header, buffer, offset = self._read_header(read_timeout) if header.xid == PING_XID: self.logger.debug('Received Ping') self.ping_outstanding.clear() elif header.xid == AUTH_XID: self.logger.debug('Received AUTH') if header.err: # We go ahead and fail out the connection, mainly because # thats what Zookeeper client docs think is appropriate # XXX TODO: Should we fail out? Or handle auth failure # differently here since the session id is actually valid! 
client._session_callback(KeeperState.AUTH_FAILED) elif header.xid == WATCH_XID: self._read_watch_event(buffer, offset) else: self.logger.debug('Reading for header %r', header) return self._read_response(header, buffer, offset) def _send_request(self, read_timeout, connect_timeout): """Called when we have something to send out on the socket""" client = self.client try: request, async_object = client._queue[0] except IndexError: # Not actually something on the queue, this can occur if # something happens to cancel the request such that we # don't clear the pipe below after sending try: # Clear possible inconsistence (no request in the queue # but have data in the read pipe), which causes cpu to spin. os.read(self._read_pipe, 1) except OSError: pass return # Special case for testing, if this is a _SessionExpire object # then throw a SessionExpiration error as if we were dropped if request is _SESSION_EXPIRED: raise SessionExpiredError("Session expired: Testing") if request is _CONNECTION_DROP: raise ConnectionDropped("Connection dropped: Testing") # Special case for auth packets if request.type == Auth.type: self._submit(request, connect_timeout, AUTH_XID) client._queue.popleft() os.read(self._read_pipe, 1) return self._xid += 1 self._submit(request, connect_timeout, self._xid) client._queue.popleft() os.read(self._read_pipe, 1) client._pending.append((request, async_object, self._xid)) def _send_ping(self, connect_timeout): self.ping_outstanding.set() self._submit(PingInstance, connect_timeout, PING_XID) # Determine if we need to check for a r/w server if self._ro_mode: result = advance_iterator(self._ro_mode) if result: self._rw_server = result raise RWServerAvailable() def zk_loop(self): """Main Zookeeper handling loop""" self.logger.debug('ZK loop started') self.connection_stopped.clear() retry = self.retry_sleeper.copy() try: hosts = itertools.cycle(self.client.hosts) while not self.client._stopped.is_set(): # If the connect_loop returns STOP_CONNECTING, stop retrying if retry(self._connect_loop, hosts, retry) is STOP_CONNECTING: break except RetryFailedError: self.logger.warning("Failed connecting to Zookeeper " "within the connection retry policy.") self.client._session_callback(KeeperState.CLOSED) finally: self.connection_stopped.set() self.logger.debug('Connection stopped') def _connect_loop(self, hosts, retry): # Iterate through the hosts a full cycle before starting over total_hosts = len(self.client.hosts) cur = 0 status = None while cur < total_hosts and status is not STOP_CONNECTING: if self.client._stopped.is_set(): status = STOP_CONNECTING break status = self._connect_attempt(hosts, retry) cur += 1 if status is STOP_CONNECTING: return STOP_CONNECTING else: raise ForceRetryError('Reconnecting') def _connect_attempt(self, hosts, retry): client = self.client TimeoutError = self.handler.timeout_exception close_connection = False host, port = advance_iterator(hosts) self._socket = self.handler.socket() # Were we given a r/w server? 
If so, use that instead if self._rw_server: self.logger.debug("Found r/w server to use, %s:%s", host, port) host, port = self._rw_server self._rw_server = None if client._state != KeeperState.CONNECTING: client._session_callback(KeeperState.CONNECTING) try: read_timeout, connect_timeout = self._connect(host, port) read_timeout = read_timeout / 1000.0 connect_timeout = connect_timeout / 1000.0 retry.reset() self._xid = 0 while not close_connection: # Watch for something to read or send timeout = read_timeout / 2.0 - random.randint(0, 40) / 100.0 s = self.handler.select([self._socket, self._read_pipe], [], [], timeout)[0] if not s: if self.ping_outstanding.is_set(): self.ping_outstanding.clear() raise ConnectionDropped( "outstanding heartbeat ping not received") self._send_ping(connect_timeout) elif s[0] == self._socket: response = self._read_socket(read_timeout) close_connection = response == CLOSE_RESPONSE else: self._send_request(read_timeout, connect_timeout) self.logger.info('Closing connection to %s:%s', host, port) client._session_callback(KeeperState.CLOSED) return STOP_CONNECTING except (ConnectionDropped, TimeoutError) as e: if isinstance(e, ConnectionDropped): self.logger.warning('Connection dropped: %s', e) else: self.logger.warning('Connection time-out') if client._state != KeeperState.CONNECTING: self.logger.warning("Transition to CONNECTING") client._session_callback(KeeperState.CONNECTING) except AuthFailedError: retry.reset() self.logger.warning('AUTH_FAILED closing') client._session_callback(KeeperState.AUTH_FAILED) return STOP_CONNECTING except SessionExpiredError: retry.reset() self.logger.warning('Session has expired') client._session_callback(KeeperState.EXPIRED_SESSION) except RWServerAvailable: retry.reset() self.logger.warning('Found a RW server, dropping connection') client._session_callback(KeeperState.CONNECTING) except Exception: self.logger.exception('Unhandled exception in connection loop') raise finally: self._socket.close() def _connect(self, host, port): client = self.client self.logger.info('Connecting to %s:%s', host, port) self.logger.debug(' Using session_id: %r session_passwd: %s', client._session_id, hexlify(client._session_passwd)) self._socket.settimeout(client._session_timeout) with self._socket_error_handling(): self._socket.connect((host, port)) self._socket.setblocking(0) connect = Connect(0, client.last_zxid, client._session_timeout, client._session_id or 0, client._session_passwd, client.read_only) connect_result, zxid = self._invoke(client._session_timeout, connect) if connect_result.time_out <= 0: raise SessionExpiredError("Session has expired") if zxid: client.last_zxid = zxid # Load return values client._session_id = connect_result.session_id negotiated_session_timeout = connect_result.time_out connect_timeout = negotiated_session_timeout / len(client.hosts) read_timeout = negotiated_session_timeout * 2.0 / 3.0 client._session_passwd = connect_result.passwd self.logger.debug('Session created, session_id: %r session_passwd: %s\n' ' negotiated session timeout: %s\n' ' connect timeout: %s\n' ' read timeout: %s', client._session_id, hexlify(client._session_passwd), negotiated_session_timeout, connect_timeout, read_timeout) if connect_result.read_only: client._session_callback(KeeperState.CONNECTED_RO) self._ro_mode = iter(self._server_pinger()) else: client._session_callback(KeeperState.CONNECTED) self._ro_mode = None for scheme, auth in client.auth_data: ap = Auth(0, scheme, auth) zxid = self._invoke(connect_timeout, ap, xid=AUTH_XID) if zxid: 
client.last_zxid = zxid return read_timeout, connect_timeout kazoo-1.2.1/kazoo/protocol/paths.py000066400000000000000000000025001217652145400173170ustar00rootroot00000000000000def normpath(path, trailing=False): """Normalize path, eliminating double slashes, etc.""" comps = path.split('/') new_comps = [] for comp in comps: if comp == '': continue if comp in ('.', '..'): raise ValueError('relative paths not allowed') new_comps.append(comp) new_path = '/'.join(new_comps) if trailing is True and path.endswith('/'): new_path += '/' if path.startswith('/'): return '/' + new_path return new_path def join(a, *p): """Join two or more pathname components, inserting '/' as needed. If any component is an absolute path, all previous path components will be discarded. """ path = a for b in p: if b.startswith('/'): path = b elif path == '' or path.endswith('/'): path += b else: path += '/' + b return path def isabs(s): """Test whether a path is absolute""" return s.startswith('/') def basename(p): """Returns the final component of a pathname""" i = p.rfind('/') + 1 return p[i:] def _prefix_root(root, path, trailing=False): """Prepend a root to a path. """ return normpath(join(_norm_root(root), path.lstrip('/')), trailing=trailing) def _norm_root(root): return normpath(join('/', root)) kazoo-1.2.1/kazoo/protocol/serialization.py000066400000000000000000000261771217652145400210750ustar00rootroot00000000000000"""Zookeeper Serializers, Deserializers, and NamedTuple objects""" from collections import namedtuple import struct from kazoo.exceptions import EXCEPTIONS from kazoo.protocol.states import ZnodeStat from kazoo.security import ACL from kazoo.security import Id # Struct objects with formats compiled bool_struct = struct.Struct('B') int_struct = struct.Struct('!i') int_int_struct = struct.Struct('!ii') int_int_long_struct = struct.Struct('!iiq') int_long_int_long_struct = struct.Struct('!iqiq') multiheader_struct = struct.Struct('!iBi') reply_header_struct = struct.Struct('!iqi') stat_struct = struct.Struct('!qqqqiiiqiiq') try: # pragma: nocover basestring except NameError: basestring = str def read_string(buffer, offset): """Reads an int specified buffer into a string and returns the string and the new offset in the buffer""" length = int_struct.unpack_from(buffer, offset)[0] offset += int_struct.size if length < 0: return None, offset else: index = offset offset += length return buffer[index:index + length].decode('utf-8'), offset def read_acl(bytes, offset): perms = int_struct.unpack_from(bytes, offset)[0] offset += int_struct.size scheme, offset = read_string(bytes, offset) id, offset = read_string(bytes, offset) return ACL(perms, Id(scheme, id)), offset def write_string(bytes): if not bytes: return int_struct.pack(-1) else: utf8_str = bytes.encode('utf-8') return int_struct.pack(len(utf8_str)) + utf8_str def write_buffer(bytes): if not bytes: return int_struct.pack(-1) else: return int_struct.pack(len(bytes)) + bytes def read_buffer(bytes, offset): length = int_struct.unpack_from(bytes, offset)[0] offset += int_struct.size if length < 0: return b'', offset else: index = offset offset += length return bytes[index:index + length], offset class Close(namedtuple('Close', '')): type = -11 @classmethod def serialize(cls): return b'' CloseInstance = Close() class Ping(namedtuple('Ping', '')): type = 11 @classmethod def serialize(cls): return b'' PingInstance = Ping() class Connect(namedtuple('Connect', 'protocol_version last_zxid_seen' ' time_out session_id passwd read_only')): type = None def 
serialize(self): b = bytearray() b.extend(int_long_int_long_struct.pack( self.protocol_version, self.last_zxid_seen, self.time_out, self.session_id)) b.extend(write_buffer(self.passwd)) b.extend([1 if self.read_only else 0]) return b @classmethod def deserialize(cls, bytes, offset): proto_version, timeout, session_id = int_int_long_struct.unpack_from( bytes, offset) offset += int_int_long_struct.size password, offset = read_buffer(bytes, offset) try: read_only = bool_struct.unpack_from(bytes, offset)[0] is 1 offset += bool_struct.size except struct.error: read_only = False return cls(proto_version, 0, timeout, session_id, password, read_only), offset class Create(namedtuple('Create', 'path data acl flags')): type = 1 def serialize(self): b = bytearray() b.extend(write_string(self.path)) b.extend(write_buffer(self.data)) b.extend(int_struct.pack(len(self.acl))) for acl in self.acl: b.extend(int_struct.pack(acl.perms) + write_string(acl.id.scheme) + write_string(acl.id.id)) b.extend(int_struct.pack(self.flags)) return b @classmethod def deserialize(cls, bytes, offset): return read_string(bytes, offset)[0] class Delete(namedtuple('Delete', 'path version')): type = 2 def serialize(self): b = bytearray() b.extend(write_string(self.path)) b.extend(int_struct.pack(self.version)) return b @classmethod def deserialize(self, bytes, offset): return True class Exists(namedtuple('Exists', 'path watcher')): type = 3 def serialize(self): b = bytearray() b.extend(write_string(self.path)) b.extend([1 if self.watcher else 0]) return b @classmethod def deserialize(cls, bytes, offset): stat = ZnodeStat._make(stat_struct.unpack_from(bytes, offset)) return stat if stat.czxid != -1 else None class GetData(namedtuple('GetData', 'path watcher')): type = 4 def serialize(self): b = bytearray() b.extend(write_string(self.path)) b.extend([1 if self.watcher else 0]) return b @classmethod def deserialize(cls, bytes, offset): data, offset = read_buffer(bytes, offset) stat = ZnodeStat._make(stat_struct.unpack_from(bytes, offset)) return data, stat class SetData(namedtuple('SetData', 'path data version')): type = 5 def serialize(self): b = bytearray() b.extend(write_string(self.path)) b.extend(write_buffer(self.data)) b.extend(int_struct.pack(self.version)) return b @classmethod def deserialize(cls, bytes, offset): return ZnodeStat._make(stat_struct.unpack_from(bytes, offset)) class GetACL(namedtuple('GetACL', 'path')): type = 6 def serialize(self): return bytearray(write_string(self.path)) @classmethod def deserialize(cls, bytes, offset): count = int_struct.unpack_from(bytes, offset)[0] offset += int_struct.size if count == -1: # pragma: nocover return [] acls = [] for c in range(count): acl, offset = read_acl(bytes, offset) acls.append(acl) stat = ZnodeStat._make(stat_struct.unpack_from(bytes, offset)) return acls, stat class SetACL(namedtuple('SetACL', 'path acls version')): type = 7 def serialize(self): b = bytearray() b.extend(write_string(self.path)) b.extend(int_struct.pack(len(self.acls))) for acl in self.acls: b.extend(int_struct.pack(acl.perms) + write_string(acl.id.scheme) + write_string(acl.id.id)) b.extend(int_struct.pack(self.version)) return b @classmethod def deserialize(cls, bytes, offset): return ZnodeStat._make(stat_struct.unpack_from(bytes, offset)) class GetChildren(namedtuple('GetChildren', 'path watcher')): type = 8 def serialize(self): b = bytearray() b.extend(write_string(self.path)) b.extend([1 if self.watcher else 0]) return b @classmethod def deserialize(cls, bytes, offset): count = 
int_struct.unpack_from(bytes, offset)[0] offset += int_struct.size if count == -1: # pragma: nocover return [] children = [] for c in range(count): child, offset = read_string(bytes, offset) children.append(child) return children class Sync(namedtuple('Sync', 'path')): type = 9 def serialize(self): return write_string(self.path) @classmethod def deserialize(cls, buffer, offset): return read_string(buffer, offset)[0] class GetChildren2(namedtuple('GetChildren2', 'path watcher')): type = 12 def serialize(self): b = bytearray() b.extend(write_string(self.path)) b.extend([1 if self.watcher else 0]) return b @classmethod def deserialize(cls, bytes, offset): count = int_struct.unpack_from(bytes, offset)[0] offset += int_struct.size if count == -1: # pragma: nocover return [] children = [] for c in range(count): child, offset = read_string(bytes, offset) children.append(child) stat = ZnodeStat._make(stat_struct.unpack_from(bytes, offset)) return children, stat class CheckVersion(namedtuple('CheckVersion', 'path version')): type = 13 def serialize(self): b = bytearray() b.extend(write_string(self.path)) b.extend(int_struct.pack(self.version)) return b class Transaction(namedtuple('Transaction', 'operations')): type = 14 def serialize(self): b = bytearray() for op in self.operations: b.extend(MultiHeader(op.type, False, -1).serialize() + op.serialize()) return b + multiheader_struct.pack(-1, True, -1) @classmethod def deserialize(cls, bytes, offset): header = MultiHeader(None, False, None) results = [] response = None while not header.done: if header.type == Create.type: response, offset = read_string(bytes, offset) elif header.type == Delete.type: response = True elif header.type == SetData.type: response = ZnodeStat._make( stat_struct.unpack_from(bytes, offset)) offset += stat_struct.size elif header.type == CheckVersion.type: response = True elif header.type == -1: err = int_struct.unpack_from(bytes, offset)[0] offset += int_struct.size response = EXCEPTIONS[err]() if response: results.append(response) header, offset = MultiHeader.deserialize(bytes, offset) return results @staticmethod def unchroot(client, response): resp = [] for result in response: if isinstance(result, basestring): resp.append(client.unchroot(result)) else: resp.append(result) return resp class Auth(namedtuple('Auth', 'auth_type scheme auth')): type = 100 def serialize(self): return (int_struct.pack(self.auth_type) + write_string(self.scheme) + write_string(self.auth)) class Watch(namedtuple('Watch', 'type state path')): @classmethod def deserialize(cls, bytes, offset): """Given bytes and the current bytes offset, return the type, state, path, and new offset""" type, state = int_int_struct.unpack_from(bytes, offset) offset += int_int_struct.size path, offset = read_string(bytes, offset) return cls(type, state, path), offset class ReplyHeader(namedtuple('ReplyHeader', 'xid, zxid, err')): @classmethod def deserialize(cls, bytes, offset): """Given bytes and the current bytes offset, return a :class:`ReplyHeader` instance and the new offset""" new_offset = offset + reply_header_struct.size return cls._make( reply_header_struct.unpack_from(bytes, offset)), new_offset class MultiHeader(namedtuple('MultiHeader', 'type done err')): def serialize(self): b = bytearray() b.extend(int_struct.pack(self.type)) b.extend([1 if self.done else 0]) b.extend(int_struct.pack(self.err)) return b @classmethod def deserialize(cls, bytes, offset): t, done, err = multiheader_struct.unpack_from(bytes, offset) offset += multiheader_struct.size return 
cls(t, done is 1, err), offset kazoo-1.2.1/kazoo/protocol/states.py000066400000000000000000000142401217652145400175070ustar00rootroot00000000000000"""Kazoo State and Event objects""" from collections import namedtuple class KazooState(object): """High level connection state values States inspired by Netflix Curator. .. attribute:: SUSPENDED The connection has been lost but may be recovered. We should operate in a "safe mode" until then. When the connection is resumed, it may be discovered that the session expired. A client should not assume that locks are valid during this time. .. attribute:: CONNECTED The connection is alive and well. .. attribute:: LOST The connection has been confirmed dead. Any ephemeral nodes will need to be recreated upon re-establishing a connection. If locks were acquired or recipes using ephemeral nodes are in use, they can be considered lost as well. """ SUSPENDED = "SUSPENDED" CONNECTED = "CONNECTED" LOST = "LOST" class KeeperState(object): """Zookeeper State Represents the Zookeeper state. Watch functions will receive a :class:`KeeperState` attribute as their state argument. .. attribute:: AUTH_FAILED Authentication has failed, this is an unrecoverable error. .. attribute:: CONNECTED Zookeeper is connected. .. attribute:: CONNECTED_RO Zookeeper is connected in read-only state. .. attribute:: CONNECTING Zookeeper is currently attempting to establish a connection. .. attribute:: EXPIRED_SESSION The prior session was invalid, all prior ephemeral nodes are gone. """ AUTH_FAILED = 'AUTH_FAILED' CONNECTED = 'CONNECTED' CONNECTED_RO = 'CONNECTED_RO' CONNECTING = 'CONNECTING' CLOSED = 'CLOSED' EXPIRED_SESSION = 'EXPIRED_SESSION' class EventType(object): """Zookeeper Event Represents a Zookeeper event. Events trigger watch functions which will receive a :class:`EventType` attribute as their event argument. .. attribute:: CREATED A node has been created. .. attribute:: DELETED A node has been deleted. .. attribute:: CHANGED The data for a node has changed. .. attribute:: CHILD The children under a node have changed (a child was added or removed). This event does not indicate the data for a child node has changed, which must have its own watch established. """ CREATED = 'CREATED' DELETED = 'DELETED' CHANGED = 'CHANGED' CHILD = 'CHILD' EVENT_TYPE_MAP = { 1: EventType.CREATED, 2: EventType.DELETED, 3: EventType.CHANGED, 4: EventType.CHILD } class WatchedEvent(namedtuple('WatchedEvent', ('type', 'state', 'path'))): """A change on ZooKeeper that a Watcher is able to respond to. The :class:`WatchedEvent` includes exactly what happened, the current state of ZooKeeper, and the path of the node that was involved in the event. An instance of :class:`WatchedEvent` will be passed to registered watch functions. .. attribute:: type A :class:`EventType` attribute indicating the event type. .. attribute:: state A :class:`KeeperState` attribute indicating the Zookeeper state. .. attribute:: path The path of the node for the watch event. 
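    A watch callback typically receives one of these instances and inspects
    its fields. An illustrative sketch (the path and the handling shown are
    only examples):

    .. code-block:: python

        from kazoo.protocol.states import EventType

        def my_watch(event):
            if event.type == EventType.DELETED:
                print("node %s was deleted" % event.path)

        client.exists("/some/path", watch=my_watch)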
""" class Callback(namedtuple('Callback', ('type', 'func', 'args'))): """A callback that is handed to a handler for dispatch :param type: Type of the callback, currently is only 'watch' :param func: Callback function :param args: Argument list for the callback function """ class ZnodeStat(namedtuple('ZnodeStat', 'czxid mzxid ctime mtime version' ' cversion aversion ephemeralOwner dataLength' ' numChildren pzxid')): """A ZnodeStat structure with convenience properties When getting the value of a node from Zookeeper, the properties for the node known as a "Stat structure" will be retrieved. The :class:`ZnodeStat` object provides access to the standard Stat properties and additional properties that are more readable and use Python time semantics (seconds since epoch instead of ms). .. note:: The original Zookeeper Stat name is in parens next to the name when it differs from the convenience attribute. These are **not functions**, just attributes. .. attribute:: creation_transaction_id (czxid) The transaction id of the change that caused this znode to be created. .. attribute:: last_modified_transaction_id (mzxid) The transaction id of the change that last modified this znode. .. attribute:: created (ctime) The time in seconds from epoch when this node was created. (ctime is in milliseconds) .. attribute:: last_modified (mtime) The time in seconds from epoch when this znode was last modified. (mtime is in milliseconds) .. attribute:: version The number of changes to the data of this znode. .. attribute:: acl_version (aversion) The number of changes to the ACL of this znode. .. attribute:: owner_session_id (ephemeralOwner) The session id of the owner of this znode if the znode is an ephemeral node. If it is not an ephemeral node, it will be `None`. (ephemeralOwner will be 0 if it is not ephemeral) .. attribute:: data_length (dataLength) The length of the data field of this znode. .. attribute:: children_count (numChildren) The number of children of this znode. """ @property def acl_version(self): return self.aversion @property def children_version(self): return self.cversion @property def created(self): return self.ctime / 1000.0 @property def last_modified(self): return self.mtime / 1000.0 @property def owner_session_id(self): return self.ephemeralOwner or None @property def creation_transaction_id(self): return self.czxid @property def last_modified_transaction_id(self): return self.mzxid @property def data_length(self): return self.dataLength @property def children_count(self): return self.numChildren kazoo-1.2.1/kazoo/recipe/000077500000000000000000000000001217652145400152375ustar00rootroot00000000000000kazoo-1.2.1/kazoo/recipe/__init__.py000066400000000000000000000000021217652145400173400ustar00rootroot00000000000000# kazoo-1.2.1/kazoo/recipe/barrier.py000066400000000000000000000144051217652145400172430ustar00rootroot00000000000000"""Zookeeper Barriers""" import os import socket import uuid from kazoo.protocol.states import EventType from kazoo.exceptions import KazooException from kazoo.exceptions import NoNodeError from kazoo.exceptions import NodeExistsError class Barrier(object): """Kazoo Barrier Implements a barrier to block processing of a set of nodes until a condition is met at which point the nodes will be allowed to proceed. The barrier is in place if its node exists. .. warning:: The :meth:`wait` function does not handle connection loss and may raise :exc:`~kazoo.exceptions.ConnectionLossException` if the connection is lost while waiting. 
""" def __init__(self, client, path): """Create a Kazoo Barrier :param client: A :class:`~kazoo.client.KazooClient` instance. :param path: The barrier path to use. """ self.client = client self.path = path def create(self): """Establish the barrier if it doesn't exist already""" self.client.retry(self.client.ensure_path, self.path) def remove(self): """Remove the barrier :returns: Whether the barrier actually needed to be removed. :rtype: bool """ try: self.client.retry(self.client.delete, self.path) return True except NoNodeError: return False def wait(self, timeout=None): """Wait on the barrier to be cleared :returns: True if the barrier has been cleared, otherwise False. :rtype: bool """ cleared = self.client.handler.event_object() def wait_for_clear(event): if event.type == EventType.DELETED: cleared.set() exists = self.client.exists(self.path, watch=wait_for_clear) if not exists: return True cleared.wait(timeout) return cleared.is_set() class DoubleBarrier(object): """Kazoo Double Barrier Double barriers are used to synchronize the beginning and end of a distributed task. The barrier blocks when entering it until all the members have joined, and blocks when leaving until all the members have left. .. note:: You should register a listener for session loss as the process will no longer be part of the barrier once the session is gone. Connection losses will be retried with the default retry policy. """ def __init__(self, client, path, num_clients, identifier=None): """Create a Double Barrier :param client: A :class:`~kazoo.client.KazooClient` instance. :param path: The barrier path to use. :param num_clients: How many clients must enter the barrier to proceed. :type num_clients: int :param identifier: An identifier to use for this member of the barrier when participating. Defaults to the hostname + process id. 
""" self.client = client self.path = path self.num_clients = num_clients self._identifier = identifier or '%s-%s' % ( socket.getfqdn(), os.getpid()) self.participating = False self.assured_path = False self.node_name = uuid.uuid4().hex self.create_path = self.path + "/" + self.node_name def enter(self): """Enter the barrier, blocks until all nodes have entered""" try: self.client.retry(self._inner_enter) self.participating = True except KazooException: # We failed to enter, best effort cleanup self._best_effort_cleanup() self.participating = False def _inner_enter(self): # make sure our barrier parent node exists if not self.assured_path: self.client.ensure_path(self.path) self.assured_path = True ready = self.client.handler.event_object() try: self.client.create(self.create_path, self._identifier.encode('utf-8'), ephemeral=True) except NodeExistsError: pass def created(event): if event.type == EventType.CREATED: ready.set() self.client.exists(self.path + '/' + 'ready', watch=created) children = self.client.get_children(self.path) if len(children) < self.num_clients: ready.wait() else: self.client.ensure_path(self.path + '/ready') return True def leave(self): """Leave the barrier, blocks until all nodes have left""" try: self.client.retry(self._inner_leave) except KazooException: # pragma: nocover # Failed to cleanly leave self._best_effort_cleanup() self.participating = False def _inner_leave(self): # Delete the ready node if its around try: self.client.delete(self.path + '/ready') except NoNodeError: pass while True: children = self.client.get_children(self.path) if not children: return True if len(children) == 1 and children[0] == self.node_name: self.client.delete(self.create_path) return True children.sort() ready = self.client.handler.event_object() def deleted(event): if event.type == EventType.DELETED: ready.set() if self.node_name == children[0]: # We're first, wait on the highest to leave if not self.client.exists(self.path + '/' + children[-1], watch=deleted): continue ready.wait() continue # Delete our node self.client.delete(self.create_path) # Wait on the first if not self.client.exists(self.path + '/' + children[0], watch=deleted): continue # Wait for the lowest to be deleted ready.wait() def _best_effort_cleanup(self): try: self.client.retry(self.client.delete, self.create_path) except NoNodeError: pass kazoo-1.2.1/kazoo/recipe/counter.py000066400000000000000000000051141217652145400172710ustar00rootroot00000000000000"""Zookeeper Counter""" from kazoo.exceptions import BadVersionError from kazoo.retry import ForceRetryError class Counter(object): """Kazoo Counter A shared counter of either int or float values. Changes to the counter are done atomically. The general retry policy is used to retry operations if concurrent changes are detected. The data is marshaled using `repr(value)` and converted back using `type(counter.default)(value)` both using an ascii encoding. As such other data types might be used for the counter value. Counter changes can raise :class:`~kazoo.exceptions.BadVersionError` if the retry policy wasn't able to apply a change. Example usage: .. code-block:: python zk = KazooClient() counter = zk.Counter("/int") counter += 2 counter -= 1 counter.value == 1 counter = zk.Counter("/float", default=1.0) counter += 2.0 counter.value == 3.0 """ def __init__(self, client, path, default=0): """Create a Kazoo Counter :param client: A :class:`~kazoo.client.KazooClient` instance. :param path: The counter path to use. :param default: The default value. 
""" self.client = client self.path = path self.default = default self.default_type = type(default) self._ensured_path = False def _ensure_node(self): if not self._ensured_path: # make sure our node exists self.client.ensure_path(self.path) self._ensured_path = True def _value(self): self._ensure_node() old, stat = self.client.get(self.path) old = old.decode('ascii') if old != b'' else self.default version = stat.version data = self.default_type(old) return data, version @property def value(self): return self._value()[0] def _change(self, value): if not isinstance(value, self.default_type): raise TypeError('invalid type for value change') self.client.retry(self._inner_change, value) return self def _inner_change(self, value): data, version = self._value() data = repr(data + value).encode('ascii') try: self.client.set(self.path, data, version=version) except BadVersionError: # pragma: nocover raise ForceRetryError() def __add__(self, value): """Add value to counter.""" return self._change(value) def __sub__(self, value): """Subtract value from counter.""" return self._change(-value) kazoo-1.2.1/kazoo/recipe/election.py000066400000000000000000000042101217652145400174100ustar00rootroot00000000000000"""ZooKeeper Leader Elections""" from kazoo.exceptions import CancelledError class Election(object): """Kazoo Basic Leader Election Example usage with a :class:`~kazoo.client.KazooClient` instance:: zk = KazooClient() election = zk.Election("/electionpath", "my-identifier") # blocks until the election is won, then calls # my_leader_function() election.run(my_leader_function) """ def __init__(self, client, path, identifier=None): """Create a Kazoo Leader Election :param client: A :class:`~kazoo.client.KazooClient` instance. :param path: The election path to use. :param identifier: Name to use for this lock contender. This can be useful for querying to see who the current lock contenders are. """ self.lock = client.Lock(path, identifier) def run(self, func, *args, **kwargs): """Contend for the leadership This call will block until either this contender is cancelled or this contender wins the election and the provided leadership function subsequently returns or fails. :param func: A function to be called if/when the election is won. :param args: Arguments to leadership function. :param kwargs: Keyword arguments to leadership function. """ if not callable(func): raise ValueError("leader function is not callable") try: with self.lock: func(*args, **kwargs) except CancelledError: pass def cancel(self): """Cancel participation in the election .. note:: If this contender has already been elected leader, this method will not interrupt the leadership function. """ self.lock.cancel() def contenders(self): """Return an ordered list of the current contenders in the election .. note:: If the contenders did not set an identifier, it will appear as a blank string. """ return self.lock.contenders() kazoo-1.2.1/kazoo/recipe/lock.py000066400000000000000000000404041217652145400165430ustar00rootroot00000000000000"""Zookeeper Locking Implementations Error Handling ============== It's highly recommended to add a state listener with :meth:`~KazooClient.add_listener` and watch for :attr:`~KazooState.LOST` and :attr:`~KazooState.SUSPENDED` state changes and re-act appropriately. In the event that a :attr:`~KazooState.LOST` state occurs, its certain that the lock and/or the lease has been lost. 
""" import uuid from kazoo.retry import ( KazooRetry, RetryFailedError, ForceRetryError ) from kazoo.exceptions import CancelledError from kazoo.exceptions import KazooException from kazoo.exceptions import LockTimeout from kazoo.exceptions import NoNodeError from kazoo.protocol.states import KazooState class Lock(object): """Kazoo Lock Example usage with a :class:`~kazoo.client.KazooClient` instance: .. code-block:: python zk = KazooClient() lock = zk.Lock("/lockpath", "my-identifier") with lock: # blocks waiting for lock acquisition # do something with the lock """ _NODE_NAME = '__lock__' def __init__(self, client, path, identifier=None): """Create a Kazoo lock. :param client: A :class:`~kazoo.client.KazooClient` instance. :param path: The lock path to use. :param identifier: Name to use for this lock contender. This can be useful for querying to see who the current lock contenders are. """ self.client = client self.path = path # some data is written to the node. this can be queried via # contenders() to see who is contending for the lock self.data = str(identifier or "").encode('utf-8') self.wake_event = client.handler.event_object() # props to Netflix Curator for this trick. It is possible for our # create request to succeed on the server, but for a failure to # prevent us from getting back the full path name. We prefix our # lock name with a uuid and can check for its presence on retry. self.prefix = uuid.uuid4().hex + self._NODE_NAME self.create_path = self.path + "/" + self.prefix self.create_tried = False self.is_acquired = False self.assured_path = False self.cancelled = False self._retry = KazooRetry(max_tries=None) def _ensure_path(self): self.client.ensure_path(self.path) self.assured_path = True def cancel(self): """Cancel a pending lock acquire.""" self.cancelled = True self.wake_event.set() def acquire(self, blocking=True, timeout=None): """ Acquire the lock. By defaults blocks and waits forever. :param blocking: Block until lock is obtained or return immediately. :type blocking: bool :param timeout: Don't wait forever to acquire the lock. :type timeout: float or None :returns: Was the lock acquired? :rtype: bool :raises: :exc:`~kazoo.exceptions.LockTimeout` if the lock wasn't acquired within `timeout` seconds. .. versionadded:: 1.1 The timeout option. 
""" try: retry = self._retry.copy() retry.deadline = timeout self.is_acquired = retry(self._inner_acquire, blocking=blocking, timeout=timeout) except KazooException: # if we did ultimately fail, attempt to clean up self._best_effort_cleanup() self.cancelled = False raise except RetryFailedError: self._best_effort_cleanup() if not self.is_acquired: self._delete_node(self.node) return self.is_acquired def _inner_acquire(self, blocking, timeout): # make sure our election parent node exists if not self.assured_path: self._ensure_path() node = None if self.create_tried: node = self._find_node() else: self.create_tried = True if not node: node = self.client.create(self.create_path, self.data, ephemeral=True, sequence=True) # strip off path to node node = node[len(self.path) + 1:] self.node = node while True: self.wake_event.clear() # bail out with an exception if cancellation has been requested if self.cancelled: raise CancelledError() children = self._get_sorted_children() try: our_index = children.index(node) except ValueError: # pragma: nocover # somehow we aren't in the children -- probably we are # recovering from a session failure and our ephemeral # node was removed raise ForceRetryError() if self.acquired_lock(children, our_index): return True if not blocking: return False # otherwise we are in the mix. watch predecessor and bide our time predecessor = self.path + "/" + children[our_index - 1] if self.client.exists(predecessor, self._watch_predecessor): self.wake_event.wait(timeout) if not self.wake_event.isSet(): raise LockTimeout("Failed to acquire lock on %s after %s " "seconds" % (self.path, timeout)) def acquired_lock(self, children, index): return index == 0 def _watch_predecessor(self, event): self.wake_event.set() def _get_sorted_children(self): children = self.client.get_children(self.path) # can't just sort directly: the node names are prefixed by uuids lockname = self._NODE_NAME children.sort(key=lambda c: c[c.find(lockname) + len(lockname):]) return children def _find_node(self): children = self.client.get_children(self.path) for child in children: if child.startswith(self.prefix): return child return None def _delete_node(self, node): self.client.delete(self.path + "/" + node) def _best_effort_cleanup(self): try: node = self._find_node() if node: self._delete_node(node) except KazooException: # pragma: nocover pass def release(self): """Release the lock immediately.""" return self.client.retry(self._inner_release) def _inner_release(self): if not self.is_acquired: return False try: self._delete_node(self.node) except NoNodeError: # pragma: nocover pass self.is_acquired = False self.node = None return True def contenders(self): """Return an ordered list of the current contenders for the lock. .. note:: If the contenders did not set an identifier, it will appear as a blank string. """ # make sure our election parent node exists if not self.assured_path: self._ensure_path() children = self._get_sorted_children() contenders = [] for child in children: try: data, stat = self.client.get(self.path + "/" + child) contenders.append(data.decode('utf-8')) except NoNodeError: # pragma: nocover pass return contenders def __enter__(self): self.acquire() def __exit__(self, exc_type, exc_value, traceback): self.release() class Semaphore(object): """A Zookeeper-based Semaphore This synchronization primitive operates in the same manner as the Python threading version only uses the concept of leases to indicate how many available leases are available for the lock rather than counting. 
Example: .. code-block:: python zk = KazooClient() semaphore = zk.Semaphore("/leasepath", "my-identifier") with semaphore: # blocks waiting for lock acquisition # do something with the semaphore .. warning:: This class stores the allowed max_leases as the data on the top-level semaphore node. The stored value is checked once against the max_leases of each instance. This check is performed when acquire is called the first time. The semaphore node needs to be deleted to change the allowed leases. .. versionadded:: 0.6 The Semaphore class. .. versionadded:: 1.1 The max_leases check. """ def __init__(self, client, path, identifier=None, max_leases=1): """Create a Kazoo Lock :param client: A :class:`~kazoo.client.KazooClient` instance. :param path: The semaphore path to use. :param identifier: Name to use for this lock contender. This can be useful for querying to see who the current lock contenders are. :param max_leases: The maximum amount of leases available for the semaphore. """ # Implementation notes about how excessive thundering herd # and watches are avoided # - A node (lease pool) holds children for each lease in use # - A lock is acquired for a process attempting to acquire a # lease. If a lease is available, the ephemeral node is # created in the lease pool and the lock is released. # - Only the lock holder watches for children changes in the # lease pool self.client = client self.path = path # some data is written to the node. this can be queried via # contenders() to see who is contending for the lock self.data = str(identifier or "").encode('utf-8') self.max_leases = max_leases self.wake_event = client.handler.event_object() self.create_path = self.path + "/" + uuid.uuid4().hex self.lock_path = path + '-' + '__lock__' self.is_acquired = False self.assured_path = False self.cancelled = False self._session_expired = False def _ensure_path(self): result = self.client.ensure_path(self.path) self.assured_path = True if result is True: # node did already exist data, _ = self.client.get(self.path) try: leases = int(data.decode('utf-8')) except (ValueError, TypeError): # ignore non-numeric data, maybe the node data is used # for other purposes pass else: if leases != self.max_leases: raise ValueError( "Inconsistent max leases: %s, expected: %s" % (leases, self.max_leases) ) else: self.client.set(self.path, str(self.max_leases).encode('utf-8')) def cancel(self): """Cancel a pending semaphore acquire.""" self.cancelled = True self.wake_event.set() def acquire(self, blocking=True, timeout=None): """Acquire the semaphore. By defaults blocks and waits forever. :param blocking: Block until semaphore is obtained or return immediately. :type blocking: bool :param timeout: Don't wait forever to acquire the semaphore. :type timeout: float or None :returns: Was the semaphore acquired? :rtype: bool :raises: ValueError if the max_leases value doesn't match the stored value. :exc:`~kazoo.exceptions.LockTimeout` if the semaphore wasn't acquired within `timeout` seconds. .. versionadded:: 1.1 The blocking, timeout arguments and the max_leases check. """ # If the semaphore had previously been canceled, make sure to # reset that state. 
self.cancelled = False try: self.is_acquired = self.client.retry( self._inner_acquire, blocking=blocking, timeout=timeout) except KazooException: # if we did ultimately fail, attempt to clean up self._best_effort_cleanup() self.cancelled = False raise return self.is_acquired def _inner_acquire(self, blocking, timeout=None): """Inner loop that runs from the top anytime a command hits a retryable Zookeeper exception.""" self._session_expired = False self.client.add_listener(self._watch_session) if not self.assured_path: self._ensure_path() # Do we already have a lease? if self.client.exists(self.create_path): return True with self.client.Lock(self.lock_path, self.data): while True: self.wake_event.clear() # Attempt to grab our lease... if self._get_lease(): return True if blocking: # If blocking, wait until self._watch_lease_change() is # called before returning self.wake_event.wait(timeout) if not self.wake_event.isSet(): raise LockTimeout( "Failed to acquire semaphore on %s " "after %s seconds" % (self.path, timeout)) else: # If not blocking, register another watch that will trigger # self._get_lease() as soon as the children change again. self.client.get_children(self.path, self._get_lease) return False def _watch_lease_change(self, event): self.wake_event.set() def _get_lease(self, data=None): # Make sure the session is still valid if self._session_expired: raise ForceRetryError("Retry on session loss at top") # Make sure that the request hasn't been canceled if self.cancelled: raise CancelledError("Semaphore cancelled") # Get a list of the current potential lock holders. If they change, # notify our wake_event object. This is used to unblock a blocking # self._inner_acquire call. children = self.client.get_children(self.path, self._watch_lease_change) # If there are leases available, acquire one if len(children) < self.max_leases: self.client.create(self.create_path, self.data, ephemeral=True) # Check if our acquisition was successful or not. Update our state. if self.client.exists(self.create_path): self.is_acquired = True else: self.is_acquired = False # Return current state return self.is_acquired def _watch_session(self, state): if state == KazooState.LOST: self._session_expired = True self.wake_event.set() # Return true to de-register return True def _best_effort_cleanup(self): try: self.client.delete(self.create_path) except KazooException: # pragma: nocover pass def release(self): """Release the lease immediately.""" return self.client.retry(self._inner_release) def _inner_release(self): if not self.is_acquired: return False try: self.client.delete(self.create_path) except NoNodeError: # pragma: nocover pass self.is_acquired = False return True def lease_holders(self): """Return an unordered list of the current lease holders. .. note:: If the lease holder did not set an identifier, it will appear as a blank string. 
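        For instance (the identifiers shown are purely illustrative):

        .. code-block:: python

            semaphore.lease_holders()  # e.g. ['worker-1', 'worker-2']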
""" if not self.client.exists(self.path): return [] children = self.client.get_children(self.path) lease_holders = [] for child in children: try: data, stat = self.client.get(self.path + "/" + child) lease_holders.append(data.decode('utf-8')) except NoNodeError: # pragma: nocover pass return lease_holders def __enter__(self): self.acquire() def __exit__(self, exc_type, exc_value, traceback): self.release() kazoo-1.2.1/kazoo/recipe/partitioner.py000066400000000000000000000275451217652145400201660ustar00rootroot00000000000000"""Zookeeper Partitioner Implementation :class:`SetPartitioner` implements a partitioning scheme using Zookeeper for dividing up resources amongst members of a party. This is useful when there is a set of resources that should only be accessed by a single process at a time that multiple processes across a cluster might want to divide up. Example Use-Case ---------------- - Multiple workers across a cluster need to divide up a list of queues so that no two workers own the same queue. """ import logging import os import socket from functools import partial from kazoo.exceptions import KazooException from kazoo.protocol.states import KazooState from kazoo.recipe.watchers import PatientChildrenWatch log = logging.getLogger(__name__) class PartitionState(object): """High level partition state values .. attribute:: ALLOCATING The set needs to be partitioned, and may require an existing partition set to be released before acquiring a new partition of the set. .. attribute:: ACQUIRED The set has been partitioned and acquired. .. attribute:: RELEASE The set needs to be repartitioned, and the current partitions must be released before a new allocation can be made. .. attribute:: FAILURE The set partition has failed. This occurs when the maximum time to partition the set is exceeded or the Zookeeper session is lost. The partitioner is unusable after this state and must be recreated. """ ALLOCATING = "ALLOCATING" ACQUIRED = "ACQUIRED" RELEASE = "RELEASE" FAILURE = "FAILURE" class SetPartitioner(object): """Partitions a set amongst members of a party This class will partition a set amongst members of a party such that each member will be given zero or more items of the set and each set item will be given to a single member. When new members enter or leave the party, the set will be re-partitioned amongst the members. When the :class:`SetPartitioner` enters the :attr:`~PartitionState.FAILURE` state, it is unrecoverable and a new :class:`SetPartitioner` should be created. Example: .. code-block:: python from kazoo.client import KazooClient client = KazooClient() qp = client.SetPartitioner( path='/work_queues', set=('queue-1', 'queue-2', 'queue-3')) while 1: if qp.failed: raise Exception("Lost or unable to acquire partition") elif qp.release: qp.release_set() elif qp.acquired: for partition in qp: # Do something with each partition elif qp.allocating: qp.wait_for_acquire() **State Transitions** When created, the :class:`SetPartitioner` enters the :attr:`PartitionState.ALLOCATING` state. :attr:`~PartitionState.ALLOCATING` -> :attr:`~PartitionState.ACQUIRED` Set was partitioned successfully, the partition list assigned is accessible via list/iter methods or calling list() on the :class:`SetPartitioner` instance. :attr:`~PartitionState.ALLOCATING` -> :attr:`~PartitionState.FAILURE` Allocating the set failed either due to a Zookeeper session expiration, or failure to acquire the items of the set within the timeout period. 
:attr:`~PartitionState.ACQUIRED` -> :attr:`~PartitionState.RELEASE` The members of the party have changed, and the set needs to be repartitioned. :meth:`SetPartitioner.release` should be called as soon as possible. :attr:`~PartitionState.ACQUIRED` -> :attr:`~PartitionState.FAILURE` The current partition was lost due to a Zookeeper session expiration. :attr:`~PartitionState.RELEASE` -> :attr:`~PartitionState.ALLOCATING` The current partition was released and is being re-allocated. """ def __init__(self, client, path, set, partition_func=None, identifier=None, time_boundary=30): """Create a :class:`~SetPartitioner` instance :param client: A :class:`~kazoo.client.KazooClient` instance. :param path: The partition path to use. :param set: The set of items to partition. :param partition_func: A function to use to decide how to partition the set. :param identifier: An identifier to use for this member of the party when participating. Defaults to the hostname + process id. :param time_boundary: How long the party members must be stable before allocation can complete. """ self.state = PartitionState.ALLOCATING self._client = client self._path = path self._set = set self._partition_set = [] self._partition_func = partition_func or self._partitioner self._identifier = identifier or '%s-%s' % ( socket.getfqdn(), os.getpid()) self._locks = [] self._lock_path = '/'.join([path, 'locks']) self._party_path = '/'.join([path, 'party']) self._time_boundary = time_boundary self._acquire_event = client.handler.event_object() # Create basic path nodes client.ensure_path(path) client.ensure_path(self._lock_path) client.ensure_path(self._party_path) # Join the party self._party = client.ShallowParty(self._party_path, identifier=self._identifier) self._party.join() self._state_change = client.handler.rlock_object() client.add_listener(self._establish_sessionwatch) # Now watch the party and set the callback on the async result # so we know when we're ready self._children_updated = False self._child_watching(self._allocate_transition, async=True) def __iter__(self): """Return the partitions in this partition set""" for partition in self._partition_set: yield partition @property def failed(self): """Corresponds to the :attr:`PartitionState.FAILURE` state""" return self.state == PartitionState.FAILURE @property def release(self): """Corresponds to the :attr:`PartitionState.RELEASE` state""" return self.state == PartitionState.RELEASE @property def allocating(self): """Corresponds to the :attr:`PartitionState.ALLOCATING` state""" return self.state == PartitionState.ALLOCATING @property def acquired(self): """Corresponds to the :attr:`PartitionState.ACQUIRED` state""" return self.state == PartitionState.ACQUIRED def wait_for_acquire(self, timeout=30): """Wait for the set to be partitioned and acquired :param timeout: How long to wait before returning. :type timeout: int """ self._acquire_event.wait(timeout) def release_set(self): """Call to release the set This method begins the step of allocating once the set has been released. 
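        A custom `partition_func` receives this member's identifier, the
        current party members, and the full set, and returns the items this
        member should own. A sketch mirroring the default round-robin
        behaviour (the path and set values are only examples):

        .. code-block:: python

            def partition_func(identifier, members, partitions):
                # deterministic ordering so every member computes the same split
                sorted_parts = sorted(partitions)
                workers = sorted(members)
                i = workers.index(identifier)
                return sorted_parts[i::len(workers)]

            partitioner = client.SetPartitioner(
                path='/work_queues',
                set=('queue-1', 'queue-2', 'queue-3'),
                partition_func=partition_func)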
""" self._release_locks() if self._locks: # pragma: nocover # This shouldn't happen, it means we couldn't release our # locks, abort self._fail_out() return else: with self._state_change: if self.failed: return self.state = PartitionState.ALLOCATING self._child_watching(self._allocate_transition, async=True) def finish(self): """Call to release the set and leave the party""" self._release_locks() self._fail_out() def _fail_out(self): with self._state_change: self.state = PartitionState.FAILURE if self._party.participating: try: self._party.leave() except KazooException: # pragma: nocover pass def _allocate_transition(self, result): """Called when in allocating mode, and the children settled""" # Did we get an exception waiting for children to settle? if result.exception: # pragma: nocover self._fail_out() return children, async_result = result.get() self._children_updated = False # Add a callback when children change on the async_result def updated(result): with self._state_change: if self.acquired: self.state = PartitionState.RELEASE self._children_updated = True async_result.rawlink(updated) # Split up the set self._partition_set = self._partition_func( self._identifier, list(self._party), self._set) # Proceed to acquire locks for the working set as needed for member in self._partition_set: if self._children_updated or self.failed: # Still haven't settled down, release locks acquired # so far and go back return self._abort_lock_acquisition() lock = self._client.Lock(self._lock_path + '/' + str(member)) try: lock.acquire() except KazooException: # pragma: nocover return self.finish() self._locks.append(lock) # All locks acquired! Time for state transition, make sure # we didn't inadvertently get lost thus far with self._state_change: if self.failed: # pragma: nocover return self.finish() self.state = PartitionState.ACQUIRED self._acquire_event.set() def _release_locks(self): """Attempt to completely remove all the locks""" self._acquire_event.clear() for lock in self._locks[:]: try: lock.release() except KazooException: # pragma: nocover # We proceed to remove as many as possible, and leave # the ones we couldn't remove pass else: self._locks.remove(lock) def _abort_lock_acquisition(self): """Called during lock acquisition if a party change occurs""" self._partition_set = [] self._release_locks() if self._locks: # This shouldn't happen, it means we couldn't release our # locks, abort self._fail_out() return return self._child_watching(self._allocate_transition) def _child_watching(self, func=None, async=False): """Called when children are being watched to stabilize This actually returns immediately, child watcher spins up a new thread/greenlet and waits for it to stabilize before any callbacks might run. 
""" watcher = PatientChildrenWatch(self._client, self._party_path, self._time_boundary) asy = watcher.start() if func is not None: # We spin up the function in a separate thread/greenlet # to ensure that the rawlink's it might use won't be # blocked if async: func = partial(self._client.handler.spawn, func) asy.rawlink(func) return asy def _establish_sessionwatch(self, state): """Register ourself to listen for session events, we shut down if we become lost""" if state == KazooState.LOST: self._client.handler.spawn(self._fail_out) return True def _partitioner(self, identifier, members, partitions): # Ensure consistent order of partitions/members all_partitions = sorted(partitions) workers = sorted(members) i = workers.index(identifier) # Now return the partition list starting at our location and # skipping the other workers return all_partitions[i::len(workers)] kazoo-1.2.1/kazoo/recipe/party.py000066400000000000000000000073561217652145400167630ustar00rootroot00000000000000"""Party A Zookeeper pool of party members. The :class:`Party` object can be used for determining members of a party. """ import uuid from kazoo.exceptions import NodeExistsError, NoNodeError class BaseParty(object): """Base implementation of a party.""" def __init__(self, client, path, identifier=None): """ :param client: A :class:`~kazoo.client.KazooClient` instance. :param path: The party path to use. :param identifier: An identifier to use for this member of the party when participating. """ self.client = client self.path = path self.data = str(identifier or "").encode('utf-8') self.ensured_path = False self.participating = False def _ensure_parent(self): if not self.ensured_path: # make sure our parent node exists self.client.ensure_path(self.path) self.ensured_path = True def join(self): """Join the party""" return self.client.retry(self._inner_join) def _inner_join(self): self._ensure_parent() try: self.client.create(self.create_path, self.data, ephemeral=True) self.participating = True except NodeExistsError: # node was already created, perhaps we are recovering from a # suspended connection self.participating = True def leave(self): """Leave the party""" self.participating = False return self.client.retry(self._inner_leave) def _inner_leave(self): try: self.client.delete(self.create_path) except NoNodeError: return False return True def __len__(self): """Return a count of participating clients""" self._ensure_parent() return len(self._get_children()) def _get_children(self): return self.client.retry(self.client.get_children, self.path) class Party(BaseParty): """Simple pool of participating processes""" _NODE_NAME = "__party__" def __init__(self, client, path, identifier=None): BaseParty.__init__(self, client, path, identifier=identifier) self.node = uuid.uuid4().hex + self._NODE_NAME self.create_path = self.path + "/" + self.node def __iter__(self): """Get a list of participating clients' data values""" self._ensure_parent() children = self._get_children() for child in children: try: d, _ = self.client.retry(self.client.get, self.path + "/" + child) yield d.decode('utf-8') except NoNodeError: # pragma: nocover pass def _get_children(self): children = BaseParty._get_children(self) return [c for c in children if self._NODE_NAME in c] class ShallowParty(BaseParty): """Simple shallow pool of participating processes This differs from the :class:`Party` as the identifier is used in the name of the party node itself, rather than the data. 
This places some restrictions on the length as it must be a valid Zookeeper node (an alphanumeric string), but reduces the overhead of getting a list of participants to a single Zookeeper call. """ def __init__(self, client, path, identifier=None): BaseParty.__init__(self, client, path, identifier=identifier) self.node = '-'.join([uuid.uuid4().hex, self.data.decode('utf-8')]) self.create_path = self.path + "/" + self.node def __iter__(self): """Get a list of participating clients' identifiers""" self._ensure_parent() children = self._get_children() for child in children: yield child[child.find('-') + 1:] kazoo-1.2.1/kazoo/recipe/queue.py000066400000000000000000000255371217652145400167510ustar00rootroot00000000000000""" Zookeeper based queue implementations. """ import uuid from kazoo.exceptions import NoNodeError, NodeExistsError from kazoo.retry import ForceRetryError from kazoo.protocol.states import EventType class BaseQueue(object): """A common base class for queue implementations.""" def __init__(self, client, path): """ :param client: A :class:`~kazoo.client.KazooClient` instance. :param path: The queue path to use in ZooKeeper. """ self.client = client self.path = path self._entries_path = path self.structure_paths = (self.path, ) self.ensured_path = False def _check_put_arguments(self, value, priority=100): if not isinstance(value, bytes): raise TypeError("value must be a byte string") if not isinstance(priority, int): raise TypeError("priority must be an int") elif priority < 0 or priority > 999: raise ValueError("priority must be between 0 and 999") def _ensure_paths(self): if not self.ensured_path: # make sure our parent / internal structure nodes exists for path in self.structure_paths: self.client.ensure_path(path) self.ensured_path = True def __len__(self): self._ensure_paths() _, stat = self.client.retry(self.client.get, self._entries_path) return stat.children_count class Queue(BaseQueue): """A distributed queue with optional priority support. This queue does not offer reliable consumption. An entry is removed from the queue prior to being processed. So if an error occurs, the consumer has to re-queue the item or it will be lost. """ prefix = "entry-" def __len__(self): """Return queue size.""" return super(Queue, self).__len__() def get(self): """ Get item data and remove an item from the queue. :returns: Item data or None. :rtype: bytes """ self._ensure_paths() children = self.client.retry(self.client.get_children, self.path) children = list(sorted(children)) return self.client.retry(self._inner_get, children) def _inner_get(self, children): if not children: return None name = children.pop(0) try: data, stat = self.client.get(self.path + "/" + name) except NoNodeError: # pragma: nocover # the first node has vanished in the meantime, try to # get another one raise ForceRetryError() try: self.client.delete(self.path + "/" + name) except NoNodeError: # pragma: nocover # we were able to get the data but someone else has removed # the node in the meantime. consider the item as processed # by the other process raise ForceRetryError() return data def put(self, value, priority=100): """Put an item into the queue. :param value: Byte string to put into the queue. :param priority: An optional priority as an integer with at most 3 digits. Lower values signify higher priority. 
""" self._check_put_arguments(value, priority) self._ensure_paths() path = '{path}/{prefix}{priority:03d}-'.format( path=self.path, prefix=self.prefix, priority=priority) self.client.create(path, value, sequence=True) class LockingQueue(BaseQueue): """A distributed queue with priority and locking support. Upon retrieving an entry from the queue, the entry gets locked with an ephemeral node (instead of deleted). If an error occurs, this lock gets released so that others could retake the entry. This adds a little penalty as compared to :class:`Queue` implementation. The user should call the :meth:`LockingQueue.get` method first to lock and retrieve the next entry. When finished processing the entry, a user should call the :meth:`LockingQueue.consume` method that will remove the entry from the queue. This queue will not track connection status with ZooKeeper. If a node locks an element, then loses connection with ZooKeeper and later reconnects, the lock will probably be removed by Zookeeper in the meantime, but a node would still think that it holds a lock. The user should check the connection status with Zookeeper or call :meth:`LockingQueue.holds_lock` method that will check if a node still holds the lock. """ lock = "/taken" entries = "/entries" entry = "entry" def __init__(self, client, path): """ :param client: A :class:`~kazoo.client.KazooClient` instance. :param path: The queue path to use in ZooKeeper. """ super(LockingQueue, self).__init__(client, path) self.id = uuid.uuid4().hex.encode() self.processing_element = None self._lock_path = self.path + self.lock self._entries_path = self.path + self.entries self.structure_paths = (self._lock_path, self._entries_path) def __len__(self): """Returns the current length of the queue. :returns: queue size (includes locked entries count). """ return super(LockingQueue, self).__len__() def put(self, value, priority=100): """Put an entry into the queue. :param value: Byte string to put into the queue. :param priority: An optional priority as an integer with at most 3 digits. Lower values signify higher priority. """ self._check_put_arguments(value, priority) self._ensure_paths() self.client.create( "{path}/{prefix}-{priority:03d}-".format( path=self._entries_path, prefix=self.entry, priority=priority), value, sequence=True) def put_all(self, values, priority=100): """Put several entries into the queue. The action only succeeds if all entries where put into the queue. :param values: A list of values to put into the queue. :param priority: An optional priority as an integer with at most 3 digits. Lower values signify higher priority. """ if not isinstance(values, list): raise TypeError("values must be a list of byte strings") if not isinstance(priority, int): raise TypeError("priority must be an int") elif priority < 0 or priority > 999: raise ValueError("priority must be between 0 and 999") self._ensure_paths() with self.client.transaction() as transaction: for value in values: if not isinstance(value, bytes): raise TypeError("value must be a byte string") transaction.create( "{path}/{prefix}-{priority:03d}-".format( path=self._entries_path, prefix=self.entry, priority=priority), value, sequence=True) def get(self, timeout=None): """Locks and gets an entry from the queue. If a previously got entry was not consumed, this method will return that entry. :param timeout: Maximum waiting time in seconds. If None then it will wait untill an entry appears in the queue. :returns: A locked entry value or None if the timeout was reached. 
:rtype: bytes """ self._ensure_paths() if not self.processing_element is None: return self.processing_element[1] else: return self._inner_get(timeout) def holds_lock(self): """Checks if a node still holds the lock. :returns: True if a node still holds the lock, False otherwise. :rtype: bool """ if self.processing_element is None: return False lock_id, _ = self.processing_element lock_path = "{path}/{id}".format(path=self._lock_path, id=lock_id) self.client.sync(lock_path) value, stat = self.client.retry(self.client.get, lock_path) return value == self.id def consume(self): """Removes a currently processing entry from the queue. :returns: True if element was removed successfully, False otherwise. :rtype: bool """ if not self.processing_element is None and self.holds_lock: id_, value = self.processing_element with self.client.transaction() as transaction: transaction.delete("{path}/{id}".format( path=self._entries_path, id=id_)) transaction.delete("{path}/{id}".format( path=self._lock_path, id=id_)) self.processing_element = None return True else: return False def _inner_get(self, timeout): flag = self.client.handler.event_object() lock = self.client.handler.lock_object() canceled = False value = [] def check_for_updates(event): if not event is None and event.type != EventType.CHILD: return with lock: if canceled or flag.isSet(): return values = self.client.retry(self.client.get_children, self._entries_path, check_for_updates) taken = self.client.retry(self.client.get_children, self._lock_path, check_for_updates) available = self._filter_locked(values, taken) if len(available) > 0: ret = self._take(available[0]) if not ret is None: # By this time, no one took the task value.append(ret) flag.set() check_for_updates(None) retVal = None flag.wait(timeout) with lock: canceled = True if len(value) > 0: # We successfully locked an entry self.processing_element = value[0] retVal = value[0][1] return retVal def _filter_locked(self, values, taken): taken = set(taken) available = sorted(values) return (available if len(taken) == 0 else [x for x in available if x not in taken]) def _take(self, id_): try: self.client.create( "{path}/{id}".format( path=self._lock_path, id=id_), self.id, ephemeral=True) value, stat = self.client.retry(self.client.get, "{path}/{id}".format(path=self._entries_path, id=id_)) except (NoNodeError, NodeExistsError): # Item is already consumed or locked return None return (id_, value) kazoo-1.2.1/kazoo/recipe/watchers.py000066400000000000000000000322041217652145400174320ustar00rootroot00000000000000"""Higher level child and data watching API's. """ import logging import time import warnings from functools import partial, wraps from kazoo.retry import KazooRetry from kazoo.exceptions import ConnectionClosedError, NoNodeError from kazoo.protocol.states import KazooState log = logging.getLogger(__name__) _STOP_WATCHING = object() def _ignore_closed(func): @wraps(func) def wrapper(*args, **kwargs): try: return func(*args, **kwargs) except ConnectionClosedError: pass return wrapper class DataWatch(object): """Watches a node for data updates and calls the specified function each time it changes The function will also be called the very first time its registered to get the data. Returning `False` from the registered function will disable future data change calls. If the client connection is closed (using the close command), the DataWatch will no longer get updates. If the function supplied takes three arguments, then the third one will be a :class:`~kazoo.protocol.states.WatchedEvent`. 
It will only be set if the change to the data occurs as a result of the server notifying the watch that there has been a change. Events like reconnection or the first call will not include an event. If the node does not exist, then the function will be called with ``None`` for all values. Example with client: .. code-block:: python @client.DataWatch('/path/to/watch') def my_func(data, stat): print("Data is %s" % data) print("Version is %s" % stat.version) # Above function is called immediately and prints # Or if you want the event object @client.DataWatch('/path/to/watch') def my_func(data, stat, event): print("Data is %s" % data) print("Version is %s" % stat.version) print("Event is %s" % event) .. versionchanged:: 1.2 DataWatch now ignores additional arguments that were previously passed to it and warns that they are no longer respected. """ def __init__(self, client, path, func=None, *args, **kwargs): """Create a data watcher for a path :param client: A zookeeper client. :type client: :class:`~kazoo.client.KazooClient` :param path: The path to watch for data changes on. :type path: str :param func: Function to call initially and every time the node changes. `func` will be called with a tuple, the value of the node and a :class:`~kazoo.client.ZnodeStat` instance. :type func: callable """ self._client = client self._path = path self._func = func self._stopped = False self._run_lock = client.handler.lock_object() self._version = None self._retry = KazooRetry(max_tries=None, sleep_func=client.handler.sleep_func) self._include_event = None self._ever_called = False if args or kwargs: warnings.warn('Passing additional arguments to DataWatch is' ' deprecated. ignore_missing_node is now assumed ' ' to be True by default, and the event will be ' ' sent if the function can handle receiving it', DeprecationWarning, stacklevel=2) # Register our session listener if we're going to resume # across session losses if func is not None: self._client.add_listener(self._session_watcher) self._get_data() def __call__(self, func): """Callable version for use as a decorator :param func: Function to call initially and every time the data changes. `func` will be called with a tuple, the value of the node and a :class:`~kazoo.client.ZnodeStat` instance. :type func: callable """ self._func = func self._client.add_listener(self._session_watcher) self._get_data() return func def _log_func_exception(self, data, stat, event=None): try: # For backwards compatibility, don't send event to the # callback unless the send_event is set in constructor if not self._ever_called: self._ever_called = True try: result = self._func(data, stat, event) except TypeError: result = self._func(data, stat) if result is False: self._stopped = True self._client.remove_listener(self._session_watcher) except Exception as exc: log.exception(exc) raise @_ignore_closed def _get_data(self, event=None): # Ensure this runs one at a time, possible because the session # watcher may trigger a run with self._run_lock: if self._stopped: return initial_version = self._version try: data, stat = self._retry(self._client.get, self._path, self._watcher) except NoNodeError: data = None # This will set 'stat' to None if the node does not yet # exist. 
stat = self._retry(self._client.exists, self._path, self._watcher) if stat: self._client.handler.spawn(self._get_data) return # No node data, clear out version if stat is None: self._version = None else: self._version = stat.mzxid # Call our function if its the first time ever, or if the # version has changed if initial_version != self._version or not self._ever_called: self._log_func_exception(data, stat, event) def _watcher(self, event): self._get_data(event=event) def _set_watch(self, state): with self._run_lock: self._watch_established = state def _session_watcher(self, state): if state == KazooState.CONNECTED: self._client.handler.spawn(self._get_data) class ChildrenWatch(object): """Watches a node for children updates and calls the specified function each time it changes The function will also be called the very first time its registered to get children. Returning `False` from the registered function will disable future children change calls. If the client connection is closed (using the close command), the ChildrenWatch will no longer get updates. if send_event=True in __init__, then the function will always be called with second parameter, ``event``. Upon initial call or when recovering a lost session the ``event`` is always ``None``. Otherwise it's a :class:`~kazoo.prototype.state.WatchedEvent` instance. Example with client: .. code-block:: python @client.ChildrenWatch('/path/to/watch') def my_func(children): print "Children are %s" % children # Above function is called immediately and prints children """ def __init__(self, client, path, func=None, allow_session_lost=True, send_event=False): """Create a children watcher for a path :param client: A zookeeper client. :type client: :class:`~kazoo.client.KazooClient` :param path: The path to watch for children on. :type path: str :param func: Function to call initially and every time the children change. `func` will be called with a single argument, the list of children. :type func: callable :param allow_session_lost: Whether the watch should be re-registered if the zookeeper session is lost. :type allow_session_lost: bool :type send_event: bool :param send_event: Whether the function should be passed the event sent by ZooKeeper or None upon initialization (see class documentation) The path must already exist for the children watcher to run. """ self._client = client self._path = path self._func = func self._send_event = send_event self._stopped = False self._watch_established = False self._allow_session_lost = allow_session_lost self._run_lock = client.handler.lock_object() self._prior_children = None # Register our session listener if we're going to resume # across session losses if func is not None: if allow_session_lost: self._client.add_listener(self._session_watcher) self._get_children() def __call__(self, func): """Callable version for use as a decorator :param func: Function to call initially and every time the children change. `func` will be called with a single argument, the list of children. 
:type func: callable """ self._func = func if self._allow_session_lost: self._client.add_listener(self._session_watcher) self._get_children() return func @_ignore_closed def _get_children(self, event=None): with self._run_lock: # Ensure this runs one at a time if self._stopped: return children = self._client.retry(self._client.get_children, self._path, self._watcher) if not self._watch_established: self._watch_established = True if self._prior_children is not None and \ self._prior_children == children: return self._prior_children = children try: if self._send_event: result = self._func(children, event) else: result = self._func(children) if result is False: self._stopped = True except Exception as exc: log.exception(exc) raise def _watcher(self, event): self._get_children(event) def _session_watcher(self, state): if state in (KazooState.LOST, KazooState.SUSPENDED): self._watch_established = False elif state == KazooState.CONNECTED and \ not self._watch_established and not self._stopped: self._client.handler.spawn(self._get_children) class PatientChildrenWatch(object): """Patient Children Watch that returns values after the children of a node don't change for a period of time A separate watcher for the children of a node, that ignores changes within a boundary time and sets the result only when the boundary time has elapsed with no children changes. Example:: watcher = PatientChildrenWatch(client, '/some/path', time_boundary=5) async_object = watcher.start() # Blocks until the children have not changed for time boundary # (5 in this case) seconds, returns children list and an # async_result that will be set if the children change in the # future children, child_async = async_object.get() .. note:: This Watch is different from :class:`DataWatch` and :class:`ChildrenWatch` as it only returns once, does not take a function that is called, and provides an :class:`~kazoo.interfaces.IAsyncResult` object that can be checked to see if the children have changed later. """ def __init__(self, client, path, time_boundary=30): self.client = client self.path = path self.children = [] self.time_boundary = time_boundary self.children_changed = client.handler.event_object() def start(self): """Begin the watching process asynchronously :returns: An :class:`~kazoo.interfaces.IAsyncResult` instance that will be set when no change has occurred to the children for time boundary seconds. 
""" self.asy = asy = self.client.handler.async_result() self.client.handler.spawn(self._inner_start) return asy def _inner_start(self): try: while True: async_result = self.client.handler.async_result() self.children = self.client.retry( self.client.get_children, self.path, partial(self._children_watcher, async_result)) self.client.handler.sleep_func(self.time_boundary) if self.children_changed.is_set(): self.children_changed.clear() else: break self.asy.set((self.children, async_result)) except Exception as exc: self.asy.set_exception(exc) def _children_watcher(self, async, event): self.children_changed.set() async.set(time.time()) kazoo-1.2.1/kazoo/retry.py000066400000000000000000000122551217652145400155140ustar00rootroot00000000000000import logging import random import time from kazoo.exceptions import ( ConnectionClosedError, ConnectionLoss, KazooException, OperationTimeoutError, SessionExpiredError, ) log = logging.getLogger(__name__) class ForceRetryError(Exception): """Raised when some recipe logic wants to force a retry.""" class RetryFailedError(KazooException): """Raised when retrying an operation ultimately failed, after retrying the maximum number of attempts. """ class InterruptedError(RetryFailedError): """Raised when the retry is forcibly interrupted by the interrupt function""" class KazooRetry(object): """Helper for retrying a method in the face of retry-able exceptions""" RETRY_EXCEPTIONS = ( ConnectionLoss, OperationTimeoutError, ForceRetryError ) EXPIRED_EXCEPTIONS = ( SessionExpiredError, ) def __init__(self, max_tries=1, delay=0.1, backoff=2, max_jitter=0.8, max_delay=3600, ignore_expire=True, sleep_func=time.sleep, deadline=None, interrupt=None): """Create a :class:`KazooRetry` instance for retrying function calls :param max_tries: How many times to retry the command. :param delay: Initial delay between retry attempts. :param backoff: Backoff multiplier between retry attempts. Defaults to 2 for exponential backoff. :param max_jitter: Additional max jitter period to wait between retry attempts to avoid slamming the server. :param max_delay: Maximum delay in seconds, regardless of other backoff settings. Defaults to one hour. :param ignore_expire: Whether a session expiration should be ignored and treated as a retry-able command. :param interrupt: Function that will be called with no args that may return True if the retry should be ceased immediately. This will be called no more than every 0.1 seconds during a wait between retries. 
""" self.max_tries = max_tries self.delay = delay self.backoff = backoff self.max_jitter = int(max_jitter * 100) self.max_delay = float(max_delay) self._attempts = 0 self._cur_delay = delay self.deadline = deadline self._cur_stoptime = None self.sleep_func = sleep_func self.retry_exceptions = self.RETRY_EXCEPTIONS self.interrupt = interrupt if ignore_expire: self.retry_exceptions += self.EXPIRED_EXCEPTIONS def reset(self): """Reset the attempt counter""" self._attempts = 0 self._cur_delay = self.delay self._cur_stoptime = None def copy(self): """Return a clone of this retry manager""" obj = KazooRetry(self.max_tries, self.delay, self.backoff, self.max_jitter / 100.0, self.max_delay, self.sleep_func, deadline=self.deadline, interrupt=self.interrupt) obj.retry_exceptions = self.retry_exceptions return obj def __call__(self, func, *args, **kwargs): """Call a function with arguments until it completes without throwing a Kazoo exception :param func: Function to call :param args: Positional arguments to call the function with :params kwargs: Keyword arguments to call the function with The function will be called until it doesn't throw one of the retryable exceptions (ConnectionLoss, OperationTimeout, or ForceRetryError), and optionally retrying on session expiration. """ self.reset() while True: try: if self.deadline is not None and self._cur_stoptime is None: self._cur_stoptime = time.time() + self.deadline return func(*args, **kwargs) except ConnectionClosedError: raise except self.retry_exceptions: if self._attempts == self.max_tries: raise RetryFailedError("Too many retry attempts") self._attempts += 1 sleeptime = self._cur_delay + (random.randint(0, self.max_jitter) / 100.0) if self._cur_stoptime is not None and time.time() + sleeptime >= self._cur_stoptime: raise RetryFailedError("Exceeded retry deadline") if self.interrupt: while sleeptime > 0: # Break the time period down and sleep for no longer than # 0.1 before calling the interrupt if sleeptime < 0.1: self.sleep_func(sleeptime) sleeptime -= sleeptime else: self.sleep_func(0.1) sleeptime -= 0.1 if self.interrupt(): raise InterruptedError() else: self.sleep_func(sleeptime) self._cur_delay = min(self._cur_delay * self.backoff, self.max_delay) kazoo-1.2.1/kazoo/security.py000066400000000000000000000104151217652145400162120ustar00rootroot00000000000000"""Kazoo Security""" from base64 import b64encode from collections import namedtuple import hashlib # Represents a Zookeeper ID and ACL object Id = namedtuple('Id', 'scheme id') class ACL(namedtuple('ACL', 'perms id')): """An ACL for a Zookeeper Node An ACL object is created by using an :class:`Id` object along with a :class:`Permissions` setting. For convenience, :meth:`make_digest_acl` should be used to create an ACL object with the desired scheme, id, and permissions. 
""" @property def acl_list(self): perms = [] if self.perms & Permissions.ALL == Permissions.ALL: perms.append('ALL') return perms if self.perms & Permissions.READ == Permissions.READ: perms.append('READ') if self.perms & Permissions.WRITE == Permissions.WRITE: perms.append('WRITE') if self.perms & Permissions.CREATE == Permissions.CREATE: perms.append('CREATE') if self.perms & Permissions.DELETE == Permissions.DELETE: perms.append('DELETE') if self.perms & Permissions.ADMIN == Permissions.ADMIN: perms.append('ADMIN') return perms def __repr__(self): return 'ACL(perms=%r, acl_list=%s, id=%r)' % ( self.perms, self.acl_list, self.id) class Permissions(object): READ = 1 WRITE = 2 CREATE = 4 DELETE = 8 ADMIN = 16 ALL = 31 # Shortcuts for common Ids ANYONE_ID_UNSAFE = Id('world', 'anyone') AUTH_IDS = Id('auth', '') # Shortcuts for common ACLs OPEN_ACL_UNSAFE = [ACL(Permissions.ALL, ANYONE_ID_UNSAFE)] CREATOR_ALL_ACL = [ACL(Permissions.ALL, AUTH_IDS)] READ_ACL_UNSAFE = [ACL(Permissions.READ, ANYONE_ID_UNSAFE)] def make_digest_acl_credential(username, password): """Create a SHA1 digest credential""" credential = username.encode('utf-8') + b":" + password.encode('utf-8') cred_hash = b64encode(hashlib.sha1(credential).digest()).strip() return username + ":" + cred_hash.decode('utf-8') def make_acl(scheme, credential, read=False, write=False, create=False, delete=False, admin=False, all=False): """Given a scheme and credential, return an :class:`ACL` object appropriate for use with Kazoo. :param scheme: The scheme to use. I.e. `digest`. :param credential: A colon separated username, password. The password should be hashed with the `scheme` specified. The :meth:`make_digest_acl_credential` method will create and return a credential appropriate for use with the `digest` scheme. :param write: Write permission. :type write: bool :param create: Create permission. :type create: bool :param delete: Delete permission. :type delete: bool :param admin: Admin permission. :type admin: bool :param all: All permissions. :type all: bool :rtype: :class:`ACL` """ if all: permissions = Permissions.ALL else: permissions = 0 if read: permissions |= Permissions.READ if write: permissions |= Permissions.WRITE if create: permissions |= Permissions.CREATE if delete: permissions |= Permissions.DELETE if admin: permissions |= Permissions.ADMIN return ACL(permissions, Id(scheme, credential)) def make_digest_acl(username, password, read=False, write=False, create=False, delete=False, admin=False, all=False): """Create a digest ACL for Zookeeper with the given permissions This method combines :meth:`make_digest_acl_credential` and :meth:`make_acl` to create an :class:`ACL` object appropriate for use with Kazoo's ACL methods. :param username: Username to use for the ACL. :param password: A plain-text password to hash. :param write: Write permission. :type write: bool :param create: Create permission. :type create: bool :param delete: Delete permission. :type delete: bool :param admin: Admin permission. :type admin: bool :param all: All permissions. 
:type all: bool :rtype: :class:`ACL` """ cred = make_digest_acl_credential(username, password) return make_acl("digest", cred, read=read, write=write, create=create, delete=delete, admin=admin, all=all) kazoo-1.2.1/kazoo/testing/000077500000000000000000000000001217652145400154455ustar00rootroot00000000000000kazoo-1.2.1/kazoo/testing/__init__.py000066400000000000000000000002271217652145400175570ustar00rootroot00000000000000from kazoo.testing.harness import KazooTestCase from kazoo.testing.harness import KazooTestHarness __all__ = ('KazooTestHarness', 'KazooTestCase', ) kazoo-1.2.1/kazoo/testing/common.py000066400000000000000000000222151217652145400173110ustar00rootroot00000000000000# # Copyright (C) 2010-2011, 2011 Canonical Ltd. All Rights Reserved # # This file was originally taken from txzookeeper and modified later. # # Authors: # Kapil Thangavelu and the Kazoo team # # txzookeeper is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # txzookeeper is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with txzookeeper. If not, see . import code import os import os.path import shutil import signal import subprocess import tempfile import traceback from itertools import chain from collections import namedtuple from glob import glob def debug(sig, frame): """Interrupt running process, and provide a python prompt for interactive debugging.""" d = {'_frame': frame} # Allow access to frame object. d.update(frame.f_globals) # Unless shadowed by global d.update(frame.f_locals) i = code.InteractiveConsole(d) message = "Signal recieved : entering python shell.\nTraceback:\n" message += ''.join(traceback.format_stack(frame)) i.interact(message) def listen(): if os.name != 'nt': # SIGUSR1 is not supported on Windows signal.signal(signal.SIGUSR1, debug) # Register handler listen() def to_java_compatible_path(path): if os.name == 'nt': path = path.replace('\\', '/') return path ServerInfo = namedtuple( "ServerInfo", "server_id client_port election_port leader_port") class ManagedZooKeeper(object): """Class to manage the running of a ZooKeeper instance for testing. Note: no attempt is made to probe the ZooKeeper instance is actually available, or that the selected port is free. In the future, we may want to do that, especially when run in a Hudson/Buildbot context, to ensure more test robustness.""" def __init__(self, software_path, server_info, peers=(), classpath=None): """Define the ZooKeeper test instance. @param install_path: The path to the install for ZK @param port: The port to run the managed ZK instance """ self.install_path = software_path self._classpath = classpath self.server_info = server_info self.host = "127.0.0.1" self.peers = peers self.working_path = tempfile.mkdtemp() self._running = False def run(self): """Run the ZooKeeper instance under a temporary directory. Writes ZK log messages to zookeeper.log in the current directory. 
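        This is normally driven through :class:`ZookeeperCluster` rather
        than called directly; a rough sketch, assuming ``/path/to/zookeeper``
        points at a local ZooKeeper distribution:

        .. code-block:: python

            cluster = ZookeeperCluster('/path/to/zookeeper', size=1)
            cluster.start()            # calls run() on each managed server
            print(cluster[0].address)  # e.g. "127.0.0.1:20000"
            cluster.terminate()        # stop the servers and remove temp state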
""" if self.running: return config_path = os.path.join(self.working_path, "zoo.cfg") log_path = os.path.join(self.working_path, "log") log4j_path = os.path.join(self.working_path, "log4j.properties") data_path = os.path.join(self.working_path, "data") # various setup steps if not os.path.exists(self.working_path): os.mkdir(self.working_path) if not os.path.exists(log_path): os.mkdir(log_path) if not os.path.exists(data_path): os.mkdir(data_path) with open(config_path, "w") as config: config.write(""" tickTime=2000 dataDir=%s clientPort=%s maxClientCnxns=0 """ % (to_java_compatible_path(data_path), self.server_info.client_port)) # setup a replicated setup if peers are specified if self.peers: servers_cfg = [] for p in chain((self.server_info,), self.peers): servers_cfg.append("server.%s=localhost:%s:%s" % ( p.server_id, p.leader_port, p.election_port)) with open(config_path, "a") as config: config.write(""" initLimit=4 syncLimit=2 %s """ % ("\n".join(servers_cfg))) # Write server ids into datadir with open(os.path.join(data_path, "myid"), "w") as myid_file: myid_file.write(str(self.server_info.server_id)) with open(log4j_path, "w") as log4j: log4j.write(""" # DEFAULT: console appender only log4j.rootLogger=INFO, ROLLINGFILE log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender log4j.appender.ROLLINGFILE.Threshold=DEBUG log4j.appender.ROLLINGFILE.File=""" + to_java_compatible_path( self.working_path + os.sep + "zookeeper.log\n")) self.process = subprocess.Popen( args=["java", "-cp", self.classpath, "-Dreadonlymode.enabled=true", "-Dzookeeper.log.dir=%s" % log_path, "-Dzookeeper.root.logger=INFO,CONSOLE", "-Dlog4j.configuration=file:%s" % log4j_path, # "-Dlog4j.debug", "org.apache.zookeeper.server.quorum.QuorumPeerMain", config_path]) self._running = True @property def classpath(self): """Get the classpath necessary to run ZooKeeper.""" if self._classpath: return self._classpath # Two possibilities, as seen in zkEnv.sh: # Check for a release - top-level zookeeper-*.jar? 
jars = glob((os.path.join( self.install_path, 'zookeeper-*.jar'))) if jars: # Release build (`ant package`) jars.extend(glob(os.path.join( self.install_path, "lib/*.jar"))) # support for different file locations on Debian/Ubuntu jars.extend(glob(os.path.join( self.install_path, "log4j-*.jar"))) jars.extend(glob(os.path.join( self.install_path, "slf4j-*.jar"))) else: # Development build (plain `ant`) jars = glob((os.path.join( self.install_path, 'build/zookeeper-*.jar'))) jars.extend(glob(os.path.join( self.install_path, "build/lib/*.jar"))) return os.pathsep.join(jars) @property def address(self): """Get the address of the ZooKeeper instance.""" return "%s:%s" % (self.host, self.client_port) @property def running(self): return self._running @property def client_port(self): return self.server_info.client_port def reset(self): """Stop the zookeeper instance, cleaning out its on disk-data.""" self.stop() shutil.rmtree(os.path.join(self.working_path, "data")) os.mkdir(os.path.join(self.working_path, "data")) with open(os.path.join(self.working_path, "data", "myid"), "w") as fh: fh.write(str(self.server_info.server_id)) def stop(self): """Stop the Zookeeper instance, retaining on disk state.""" if not self.running: return self.process.terminate() self.process.wait() self._running = False def destroy(self): """Stop the ZooKeeper instance and destroy its on disk-state""" # called by at exit handler, reimport to avoid cleanup race. import shutil self.stop() shutil.rmtree(self.working_path) class ZookeeperCluster(object): def __init__(self, install_path=None, classpath=None, size=3, port_offset=20000): self._install_path = install_path self._classpath = classpath self._servers = [] # Calculate ports and peer group port = port_offset peers = [] for i in range(size): info = ServerInfo(i + 1, port, port + 1, port + 2) peers.append(info) port += 10 # Instantiate Managed ZK Servers for i in range(size): server_peers = list(peers) server_info = server_peers.pop(i) self._servers.append( ManagedZooKeeper( self._install_path, server_info, server_peers, classpath=self._classpath)) def __getitem__(self, k): return self._servers[k] def __iter__(self): return iter(self._servers) def start(self): # Zookeeper client expresses a preference for either lower ports or # lexicographical ordering of hosts, to ensure that all servers have a # chance to startup, start them in reverse order. for server in reversed(list(self)): server.run() # Giving the servers a moment to start, decreases the overall time # required for a client to successfully connect (2s vs. 4s without # the sleep). 
import time time.sleep(2) def stop(self): for server in self: server.stop() self._servers = [] def terminate(self): for server in self: server.destroy() def reset(self): for server in self: server.reset() kazoo-1.2.1/kazoo/testing/harness.py000066400000000000000000000117551217652145400174730ustar00rootroot00000000000000"""Kazoo testing harnesses""" import atexit import logging import os import uuid import threading import unittest from kazoo.client import KazooClient from kazoo.exceptions import NotEmptyError from kazoo.protocol.states import ( KazooState ) from kazoo.testing.common import ZookeeperCluster from kazoo.protocol.connection import _SESSION_EXPIRED log = logging.getLogger(__name__) CLUSTER = None def get_global_cluster(): global CLUSTER if CLUSTER is None: ZK_HOME = os.environ.get("ZOOKEEPER_PATH") ZK_CLASSPATH = os.environ.get("ZOOKEEPER_CLASSPATH") assert ZK_HOME or ZK_CLASSPATH, ( "either ZOOKEEPER_PATH or ZOOKEEPER_CLASSPATH environment variable " "must be defined.\n" "For deb package installations this is /usr/share/java") CLUSTER = ZookeeperCluster(ZK_HOME, classpath=ZK_CLASSPATH) atexit.register(lambda cluster: cluster.terminate(), CLUSTER) return CLUSTER class KazooTestHarness(unittest.TestCase): """Harness for testing code that uses Kazoo This object can be used directly or as a mixin. It supports starting and stopping a complete ZooKeeper cluster locally and provides an API for simulating errors and expiring sessions. Example:: class MyTestCase(KazooTestHarness): def setUp(self): self.setup_zookeeper() # additional test setup def tearDown(self): self.teardown_zookeeper() def test_something(self): something_that_needs_a_kazoo_client(self.client) def test_something_else(self): something_that_needs_zk_servers(self.servers) """ def __init__(self, *args, **kw): super(KazooTestHarness, self).__init__(*args, **kw) self.client = None self._clients = [] @property def cluster(self): return get_global_cluster() @property def servers(self): return ",".join([s.address for s in self.cluster]) def _get_nonchroot_client(self): return KazooClient(self.servers) def _get_client(self, **kwargs): kwargs['retry_max_delay'] = 2 kwargs['max_retries'] = 35 c = KazooClient(self.hosts, **kwargs) try: self._clients.append(c) except AttributeError: self._client = [c] return c def expire_session(self, client_id=None): """Force ZK to expire a client session :param client_id: id of client to expire. If unspecified, the id of self.client will be used. """ client_id = client_id or self.client.client_id lost = threading.Event() safe = threading.Event() def watch_loss(state): if state == KazooState.LOST: lost.set() if lost.is_set() and state == KazooState.CONNECTED: safe.set() return True self.client.add_listener(watch_loss) self.client._call(_SESSION_EXPIRED, None) lost.wait(5) if not lost.isSet(): raise Exception("Failed to get notified of session loss") # Wait for the reconnect now safe.wait(15) if not safe.isSet(): raise Exception("Failed to see client reconnect") self.client.retry(self.client.get_async, '/') def setup_zookeeper(self, **client_options): """Create a ZK cluster and chrooted :class:`KazooClient` The cluster will only be created on the first invocation and won't be fully torn down until exit. 
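        Keyword arguments are forwarded to the
        :class:`~kazoo.client.KazooClient` constructor, so tests can tune
        the client; a brief sketch:

        .. code-block:: python

            class MyTestCase(KazooTestHarness):
                def setUp(self):
                    # 'timeout' defaults to 0.8 seconds when not supplied
                    self.setup_zookeeper(timeout=1.5)

                def tearDown(self):
                    self.teardown_zookeeper()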
""" if not self.cluster[0].running: self.cluster.start() namespace = "/kazootests" + uuid.uuid4().hex self.hosts = self.servers + namespace if 'timeout' not in client_options: client_options['timeout'] = 0.8 self.client = self._get_client(**client_options) self.client.start() self.client.ensure_path("/") def teardown_zookeeper(self): """Clean up any ZNodes created during the test """ if not self.cluster[0].running: self.cluster.start() tries = 0 if self.client and self.client.connected: while tries < 3: try: self.client.retry(self.client.delete, '/', recursive=True) break except NotEmptyError: pass tries += 1 self.client.stop() self.client.close() del self.client else: client = self._get_client() client.start() client.retry(client.delete, '/', recursive=True) client.stop() client.close() del client for client in self._clients: client.stop() del client self._clients = None class KazooTestCase(KazooTestHarness): def setUp(self): self.setup_zookeeper() def tearDown(self): self.teardown_zookeeper() kazoo-1.2.1/kazoo/tests/000077500000000000000000000000001217652145400151325ustar00rootroot00000000000000kazoo-1.2.1/kazoo/tests/__init__.py000066400000000000000000000000001217652145400172310ustar00rootroot00000000000000kazoo-1.2.1/kazoo/tests/test_barrier.py000066400000000000000000000100671217652145400201750ustar00rootroot00000000000000import threading from nose.tools import eq_ from kazoo.testing import KazooTestCase class KazooBarrierTests(KazooTestCase): def test_barrier_not_exist(self): b = self.client.Barrier("/some/path") eq_(b.wait(), True) def test_barrier_exists(self): b = self.client.Barrier("/some/path") b.create() eq_(b.wait(0), False) b.remove() eq_(b.wait(), True) def test_remove_nonexistent_barrier(self): b = self.client.Barrier("/some/path") eq_(b.remove(), False) class KazooDoubleBarrierTests(KazooTestCase): def test_basic_barrier(self): b = self.client.DoubleBarrier("/some/path", 1) eq_(b.participating, False) b.enter() eq_(b.participating, True) b.leave() eq_(b.participating, False) def test_two_barrier(self): av = threading.Event() ev = threading.Event() bv = threading.Event() release_all = threading.Event() b1 = self.client.DoubleBarrier("/some/path", 2) b2 = self.client.DoubleBarrier("/some/path", 2) def make_barrier_one(): b1.enter() ev.set() release_all.wait() b1.leave() ev.set() def make_barrier_two(): bv.wait() b2.enter() av.set() release_all.wait() b2.leave() av.set() # Spin up both of them t1 = threading.Thread(target=make_barrier_one) t1.start() t2 = threading.Thread(target=make_barrier_two) t2.start() eq_(b1.participating, False) eq_(b2.participating, False) bv.set() av.wait() ev.wait() eq_(b1.participating, True) eq_(b2.participating, True) av.clear() ev.clear() release_all.set() av.wait() ev.wait() eq_(b1.participating, False) eq_(b2.participating, False) t1.join() t2.join() def test_three_barrier(self): av = threading.Event() ev = threading.Event() bv = threading.Event() release_all = threading.Event() b1 = self.client.DoubleBarrier("/some/path", 3) b2 = self.client.DoubleBarrier("/some/path", 3) b3 = self.client.DoubleBarrier("/some/path", 3) def make_barrier_one(): b1.enter() ev.set() release_all.wait() b1.leave() ev.set() def make_barrier_two(): bv.wait() b2.enter() av.set() release_all.wait() b2.leave() av.set() # Spin up both of them t1 = threading.Thread(target=make_barrier_one) t1.start() t2 = threading.Thread(target=make_barrier_two) t2.start() eq_(b1.participating, False) eq_(b2.participating, False) bv.set() eq_(b1.participating, False) 
eq_(b2.participating, False) b3.enter() ev.wait() av.wait() eq_(b1.participating, True) eq_(b2.participating, True) eq_(b3.participating, True) av.clear() ev.clear() release_all.set() b3.leave() av.wait() ev.wait() eq_(b1.participating, False) eq_(b2.participating, False) eq_(b3.participating, False) t1.join() t2.join() def test_barrier_existing_parent_node(self): b = self.client.DoubleBarrier('/some/path', 1) self.assertFalse(b.participating) self.client.create('/some', ephemeral=True) # the barrier cannot create children under an ephemeral node b.enter() self.assertFalse(b.participating) def test_barrier_existing_node(self): b = self.client.DoubleBarrier('/some', 1) self.assertFalse(b.participating) self.client.ensure_path(b.path) self.client.create(b.create_path, ephemeral=True) # the barrier will re-use an existing node b.enter() self.assertTrue(b.participating) b.leave() kazoo-1.2.1/kazoo/tests/test_build.py000066400000000000000000000015661217652145400176520ustar00rootroot00000000000000import os from nose import SkipTest from kazoo.testing import KazooTestCase class TestBuildEnvironment(KazooTestCase): def setUp(self): KazooTestCase.setUp(self) if not os.environ.get('TRAVIS'): raise SkipTest('Only run build config tests on Travis.') def test_gevent_version(self): try: import gevent except ImportError: raise SkipTest('gevent not available.') env_version = os.environ.get('GEVENT_VERSION') if env_version: self.assertEqual(env_version, gevent.__version__) def test_zookeeper_version(self): server_version = self.client.server_version() server_version = '.'.join([str(i) for i in server_version]) env_version = os.environ.get('ZOOKEEPER_VERSION') if env_version: self.assertEqual(env_version, server_version) kazoo-1.2.1/kazoo/tests/test_client.py000066400000000000000000000741431217652145400200320ustar00rootroot00000000000000import sys import threading import time import uuid import unittest from mock import patch from nose import SkipTest from nose.tools import eq_ from nose.tools import raises from kazoo.testing import KazooTestCase from kazoo.exceptions import ( BadArgumentsError, ConfigurationError, ConnectionClosedError, ConnectionLoss, InvalidACLError, NoAuthError, NoNodeError, NodeExistsError, ) if sys.version_info > (3, ): # pragma: nocover def u(s): return s else: # pragma: nocover def u(s): return unicode(s, "unicode_escape") class TestClientTransitions(KazooTestCase): def test_connection_and_disconnection(self): from kazoo.client import KazooState states = [] rc = threading.Event() @self.client.add_listener def listener(state): states.append(state) if state == KazooState.CONNECTED: rc.set() self.client.stop() eq_(states, [KazooState.LOST]) states.pop() self.client.start() rc.wait(2) eq_(states, [KazooState.CONNECTED]) rc.clear() states.pop() self.expire_session() rc.wait(2) req_states = [KazooState.LOST, KazooState.CONNECTED] eq_(states, req_states) class TestClientConstructor(unittest.TestCase): def _makeOne(self, *args, **kw): from kazoo.client import KazooClient return KazooClient(*args, **kw) def test_invalid_handler(self): from kazoo.handlers.threading import SequentialThreadingHandler self.assertRaises(ConfigurationError, self._makeOne, handler=SequentialThreadingHandler) def test_chroot(self): self.assertEqual(self._makeOne(hosts='127.0.0.1:2181/').chroot, '') self.assertEqual(self._makeOne(hosts='127.0.0.1:2181/a').chroot, '/a') self.assertEqual(self._makeOne(hosts='127.0.0.1/a').chroot, '/a') self.assertEqual(self._makeOne(hosts='127.0.0.1/a/b').chroot, '/a/b') 
self.assertEqual(self._makeOne( hosts='127.0.0.1:2181,127.0.0.1:2182/a/b').chroot, '/a/b') def test_connection_timeout(self): from kazoo.handlers.threading import TimeoutError client = self._makeOne(hosts='127.0.0.1:9') self.assertTrue(client.handler.timeout_exception is TimeoutError) self.assertRaises(TimeoutError, client.start, 0.1) def test_ordered_host_selection(self): client = self._makeOne(hosts='127.0.0.1:9,127.0.0.2:9/a', randomize_hosts=False) hosts = [h for h in client.hosts] eq_(hosts, [('127.0.0.1', 9), ('127.0.0.2', 9)]) def test_invalid_hostname(self): client = self._makeOne(hosts='nosuchhost/a') timeout = client.handler.timeout_exception self.assertRaises(timeout, client.start, 0.1) class TestConnection(KazooTestCase): def _makeAuth(self, *args, **kwargs): from kazoo.security import make_digest_acl return make_digest_acl(*args, **kwargs) def test_chroot_warning(self): k = self._get_nonchroot_client() k.chroot = 'abba' try: with patch('warnings.warn') as mock_func: k.start() assert mock_func.called finally: k.stop() def test_auth(self): username = uuid.uuid4().hex password = uuid.uuid4().hex digest_auth = "%s:%s" % (username, password) acl = self._makeAuth(username, password, all=True) self.client.add_auth("digest", digest_auth) self.client.default_acl = (acl,) try: self.client.create("/1") self.client.create("/1/2") self.client.ensure_path("/1/2/3") eve = self._get_client() eve.start() self.assertRaises(NoAuthError, eve.get, "/1/2") # try again with the wrong auth token eve.add_auth("digest", "badbad:bad") self.assertRaises(NoAuthError, eve.get, "/1/2") finally: # Ensure we remove the ACL protected nodes self.client.delete("/1", recursive=True) eve.stop() eve.close() def test_connect_auth(self): username = uuid.uuid4().hex password = uuid.uuid4().hex digest_auth = "%s:%s" % (username, password) acl = self._makeAuth(username, password, all=True) client = self._get_client(auth_data=[('digest', digest_auth)]) client.start() try: client.create('/1', acl=(acl,)) # give ZK a chance to copy data to other node time.sleep(0.1) self.assertRaises(NoAuthError, self.client.get, "/1") finally: client.delete('/1') client.stop() client.close() def test_unicode_auth(self): username = u("xe4/\hm") password = u("/\xe4hm") digest_auth = "%s:%s" % (username, password) acl = self._makeAuth(username, password, all=True) self.client.add_auth("digest", digest_auth) self.client.default_acl = (acl,) try: self.client.create("/1") self.client.ensure_path("/1/2/3") eve = self._get_client() eve.start() self.assertRaises(NoAuthError, eve.get, "/1/2") # try again with the wrong auth token eve.add_auth("digest", "badbad:bad") self.assertRaises(NoAuthError, eve.get, "/1/2") finally: # Ensure we remove the ACL protected nodes self.client.delete("/1", recursive=True) eve.stop() eve.close() def test_invalid_auth(self): self.assertRaises(TypeError, self.client.add_auth, 'digest', ('user', 'pass')) self.assertRaises(TypeError, self.client.add_auth, None, ('user', 'pass')) def test_session_expire(self): from kazoo.protocol.states import KazooState cv = threading.Event() def watch_events(event): if event == KazooState.LOST: cv.set() self.client.add_listener(watch_events) self.expire_session() cv.wait(3) assert cv.is_set() def test_bad_session_expire(self): from kazoo.protocol.states import KazooState cv = threading.Event() ab = threading.Event() def watch_events(event): if event == KazooState.LOST: ab.set() raise Exception("oops") cv.set() self.client.add_listener(watch_events) self.expire_session() ab.wait(0.5) 
assert ab.is_set() cv.wait(0.5) assert not cv.is_set() def test_state_listener(self): from kazoo.protocol.states import KazooState states = [] condition = threading.Condition() def listener(state): with condition: states.append(state) condition.notify_all() self.client.stop() eq_(self.client.state, KazooState.LOST) self.client.add_listener(listener) self.client.start(5) with condition: if not states: condition.wait(5) eq_(len(states), 1) eq_(states[0], KazooState.CONNECTED) def test_invalid_listener(self): self.assertRaises(ConfigurationError, self.client.add_listener, 15) def test_listener_only_called_on_real_state_change(self): from kazoo.protocol.states import KazooState self.assertTrue(self.client.state, KazooState.CONNECTED) called = [False] condition = threading.Event() def listener(state): called[0] = True condition.set() self.client.add_listener(listener) self.client._make_state_change(KazooState.CONNECTED) condition.wait(3) self.assertFalse(called[0]) def test_no_connection(self): from kazoo.exceptions import SessionExpiredError client = self.client client.stop() self.assertFalse(client.connected) self.assertTrue(client.client_id is None) self.assertRaises(SessionExpiredError, client.exists, '/') def test_double_start(self): self.assertTrue(self.client.connected) self.client.start() self.assertTrue(self.client.connected) def test_double_stop(self): self.client.stop() self.assertFalse(self.client.connected) self.client.stop() self.assertFalse(self.client.connected) def test_restart(self): self.assertTrue(self.client.connected) self.client.restart() self.assertTrue(self.client.connected) def test_closed(self): client = self.client client.stop() write_pipe = client._connection._write_pipe # close the connection to free the pipe client.close() eq_(client._connection._write_pipe, None) from kazoo.protocol.states import KeeperState # sneak in and patch client to simulate race between a thread # calling stop(); close() and one running a command oldstate = client._state client._state = KeeperState.CONNECTED client._connection._write_pipe = write_pipe try: # simulate call made after write pipe is closed self.assertRaises(ConnectionClosedError, client.exists, '/') # simualte call made after write pipe is set to None client._connection._write_pipe = None self.assertRaises(ConnectionClosedError, client.exists, '/') finally: # reset for teardown client._state = oldstate client._connection._write_pipe = None class TestClient(KazooTestCase): def _getKazooState(self): from kazoo.protocol.states import KazooState return KazooState def test_client_id(self): client_id = self.client.client_id self.assertEqual(type(client_id), tuple) # make sure password is of correct length self.assertEqual(len(client_id[1]), 16) def test_connected(self): client = self.client self.assertTrue(client.connected) def test_create(self): client = self.client path = client.create("/1") eq_(path, "/1") self.assertTrue(client.exists("/1")) def test_create_unicode_path(self): client = self.client path = client.create(u("/ascii")) eq_(path, u("/ascii")) path = client.create(u("/\xe4hm")) eq_(path, u("/\xe4hm")) def test_create_async_returns_unchrooted_path(self): client = self.client path = client.create_async('/1').get() eq_(path, "/1") def test_create_invalid_path(self): client = self.client self.assertRaises(TypeError, client.create, ('a', )) self.assertRaises(ValueError, client.create, ".") self.assertRaises(ValueError, client.create, "/a/../b") self.assertRaises(BadArgumentsError, client.create, "/b\x00") 
self.assertRaises(BadArgumentsError, client.create, "/b\x1e") def test_create_invalid_arguments(self): from kazoo.security import OPEN_ACL_UNSAFE single_acl = OPEN_ACL_UNSAFE[0] client = self.client self.assertRaises(TypeError, client.create, 'a', acl='all') self.assertRaises(TypeError, client.create, 'a', acl=single_acl) self.assertRaises(TypeError, client.create, 'a', value=['a']) self.assertRaises(TypeError, client.create, 'a', ephemeral='yes') self.assertRaises(TypeError, client.create, 'a', sequence='yes') self.assertRaises(TypeError, client.create, 'a', makepath='yes') def test_create_value(self): client = self.client client.create("/1", b"bytes") data, stat = client.get("/1") eq_(data, b"bytes") def test_create_unicode_value(self): client = self.client self.assertRaises(TypeError, client.create, "/1", u("\xe4hm")) def test_create_large_value(self): client = self.client kb_512 = b"a" * (512 * 1024) client.create("/1", kb_512) self.assertTrue(client.exists("/1")) mb_2 = b"a" * (2 * 1024 * 1024) self.assertRaises(ConnectionLoss, client.create, "/2", mb_2) def test_create_acl_duplicate(self): from kazoo.security import OPEN_ACL_UNSAFE single_acl = OPEN_ACL_UNSAFE[0] client = self.client client.create("/1", acl=[single_acl, single_acl]) acls, stat = client.get_acls("/1") # ZK >3.4 removes duplicate ACL entries version = client.server_version() self.assertEqual(len(acls), 1 if version > (3, 4) else 2) def test_version_no_connection(self): @raises(ConnectionLoss) def testit(): self.client.server_version() self.client.stop() testit() def test_create_ephemeral(self): client = self.client client.create("/1", b"ephemeral", ephemeral=True) data, stat = client.get("/1") eq_(data, b"ephemeral") eq_(stat.ephemeralOwner, client.client_id[0]) def test_create_no_ephemeral(self): client = self.client client.create("/1", b"val1") data, stat = client.get("/1") self.assertFalse(stat.ephemeralOwner) def test_create_ephemeral_no_children(self): from kazoo.exceptions import NoChildrenForEphemeralsError client = self.client client.create("/1", b"ephemeral", ephemeral=True) self.assertRaises(NoChildrenForEphemeralsError, client.create, "/1/2", b"val1") self.assertRaises(NoChildrenForEphemeralsError, client.create, "/1/2", b"val1", ephemeral=True) def test_create_sequence(self): client = self.client client.create("/folder") path = client.create("/folder/a", b"sequence", sequence=True) eq_(path, "/folder/a0000000000") path2 = client.create("/folder/a", b"sequence", sequence=True) eq_(path2, "/folder/a0000000001") path3 = client.create("/folder/", b"sequence", sequence=True) eq_(path3, "/folder/0000000002") def test_create_ephemeral_sequence(self): basepath = "/" + uuid.uuid4().hex realpath = self.client.create(basepath, b"sandwich", sequence=True, ephemeral=True) self.assertTrue(basepath != realpath and realpath.startswith(basepath)) data, stat = self.client.get(realpath) eq_(data, b"sandwich") def test_create_makepath(self): self.client.create("/1/2", b"val1", makepath=True) data, stat = self.client.get("/1/2") eq_(data, b"val1") self.client.create("/1/2/3/4/5", b"val2", makepath=True) data, stat = self.client.get("/1/2/3/4/5") eq_(data, b"val2") self.assertRaises(NodeExistsError, self.client.create, "/1/2/3/4/5", b"val2", makepath=True) def test_create_makepath_incompatible_acls(self): from kazoo.client import KazooClient from kazoo.security import make_digest_acl_credential, CREATOR_ALL_ACL credential = make_digest_acl_credential("username", "password") alt_client = KazooClient(self.cluster[0].address + 
self.client.chroot, max_retries=5, auth_data=[("digest", credential)]) alt_client.start() alt_client.create("/1/2", b"val2", makepath=True, acl=CREATOR_ALL_ACL) try: self.assertRaises(NoAuthError, self.client.create, "/1/2/3/4/5", b"val2", makepath=True) finally: alt_client.delete('/', recursive=True) alt_client.stop() def test_create_no_makepath(self): self.assertRaises(NoNodeError, self.client.create, "/1/2", b"val1") self.assertRaises(NoNodeError, self.client.create, "/1/2", b"val1", makepath=False) self.client.create("/1/2", b"val1", makepath=True) self.assertRaises(NoNodeError, self.client.create, "/1/2/3/4", b"val1", makepath=False) def test_create_exists(self): from kazoo.exceptions import NodeExistsError client = self.client path = client.create("/1") self.assertRaises(NodeExistsError, client.create, path) def test_create_get_set(self): nodepath = "/" + uuid.uuid4().hex self.client.create(nodepath, b"sandwich", ephemeral=True) data, stat = self.client.get(nodepath) eq_(data, b"sandwich") newstat = self.client.set(nodepath, b"hats", stat.version) self.assertTrue(newstat) assert newstat.version > stat.version # Some other checks of the ZnodeStat object we got eq_(newstat.acl_version, stat.acl_version) eq_(newstat.created, stat.ctime / 1000.0) eq_(newstat.last_modified, newstat.mtime / 1000.0) eq_(newstat.owner_session_id, stat.ephemeralOwner) eq_(newstat.creation_transaction_id, stat.czxid) eq_(newstat.last_modified_transaction_id, newstat.mzxid) eq_(newstat.data_length, newstat.dataLength) eq_(newstat.children_count, stat.numChildren) eq_(newstat.children_version, stat.cversion) def test_get_invalid_arguments(self): client = self.client self.assertRaises(TypeError, client.get, ('a', 'b')) self.assertRaises(TypeError, client.get, 'a', watch=True) def test_bad_argument(self): client = self.client client.ensure_path("/1") self.assertRaises(TypeError, self.client.set, "/1", 1) def test_ensure_path(self): client = self.client client.ensure_path("/1/2") self.assertTrue(client.exists("/1/2")) client.ensure_path("/1/2/3/4") self.assertTrue(client.exists("/1/2/3/4")) def test_sync(self): client = self.client self.assertTrue(client.sync('/'), '/') def test_exists(self): nodepath = "/" + uuid.uuid4().hex exists = self.client.exists(nodepath) eq_(exists, None) self.client.create(nodepath, b"sandwich", ephemeral=True) exists = self.client.exists(nodepath) self.assertTrue(exists) assert isinstance(exists.version, int) multi_node_nonexistent = "/" + uuid.uuid4().hex + "/hats" exists = self.client.exists(multi_node_nonexistent) eq_(exists, None) def test_exists_invalid_arguments(self): client = self.client self.assertRaises(TypeError, client.exists, ('a', 'b')) self.assertRaises(TypeError, client.exists, 'a', watch=True) def test_exists_watch(self): nodepath = "/" + uuid.uuid4().hex event = self.client.handler.event_object() def w(watch_event): eq_(watch_event.path, nodepath) event.set() exists = self.client.exists(nodepath, watch=w) eq_(exists, None) self.client.create(nodepath, ephemeral=True) event.wait(1) self.assertTrue(event.is_set()) def test_exists_watcher_exception(self): nodepath = "/" + uuid.uuid4().hex event = self.client.handler.event_object() # if the watcher throws an exception, all we can really do is log it def w(watch_event): eq_(watch_event.path, nodepath) event.set() raise Exception("test exception in callback") exists = self.client.exists(nodepath, watch=w) eq_(exists, None) self.client.create(nodepath, ephemeral=True) event.wait(1) self.assertTrue(event.is_set()) def 
test_create_delete(self): nodepath = "/" + uuid.uuid4().hex self.client.create(nodepath, b"zzz") self.client.delete(nodepath) exists = self.client.exists(nodepath) eq_(exists, None) def test_get_acls(self): from kazoo.security import make_digest_acl acl = make_digest_acl('user', 'pass', all=True) client = self.client try: client.create('/a', acl=[acl]) self.assertTrue(acl in client.get_acls('/a')[0]) finally: client.delete('/a') def test_get_acls_invalid_arguments(self): client = self.client self.assertRaises(TypeError, client.get_acls, ('a', 'b')) def test_set_acls(self): from kazoo.security import make_digest_acl acl = make_digest_acl('user', 'pass', all=True) client = self.client client.create('/a') try: client.set_acls('/a', [acl]) self.assertTrue(acl in client.get_acls('/a')[0]) finally: client.delete('/a') def test_set_acls_empty(self): client = self.client client.create('/a') self.assertRaises(InvalidACLError, client.set_acls, '/a', []) def test_set_acls_no_node(self): from kazoo.security import OPEN_ACL_UNSAFE client = self.client self.assertRaises(NoNodeError, client.set_acls, '/a', OPEN_ACL_UNSAFE) def test_set_acls_invalid_arguments(self): from kazoo.security import OPEN_ACL_UNSAFE single_acl = OPEN_ACL_UNSAFE[0] client = self.client self.assertRaises(TypeError, client.set_acls, ('a', 'b'), ()) self.assertRaises(TypeError, client.set_acls, 'a', single_acl) self.assertRaises(TypeError, client.set_acls, 'a', 'all') self.assertRaises(TypeError, client.set_acls, 'a', [single_acl], 'V1') def test_set(self): client = self.client client.create('a', b'first') stat = client.set('a', b'second') data, stat2 = client.get('a') self.assertEqual(data, b'second') self.assertEqual(stat, stat2) def test_set_invalid_arguments(self): client = self.client client.create('a', b'first') self.assertRaises(TypeError, client.set, ('a', 'b'), b'value') self.assertRaises(TypeError, client.set, 'a', ['v', 'w']) self.assertRaises(TypeError, client.set, 'a', b'value', 'V1') def test_delete(self): client = self.client client.ensure_path('/a/b') self.assertTrue('b' in client.get_children('a')) client.delete('/a/b') self.assertFalse('b' in client.get_children('a')) def test_delete_recursive(self): client = self.client client.ensure_path('/a/b/c') client.ensure_path('/a/b/d') client.delete('/a/b', recursive=True) client.delete('/a/b/c', recursive=True) self.assertFalse('b' in client.get_children('a')) def test_delete_invalid_arguments(self): client = self.client client.ensure_path('/a/b') self.assertRaises(TypeError, client.delete, '/a/b', recursive='all') self.assertRaises(TypeError, client.delete, ('a', 'b')) self.assertRaises(TypeError, client.delete, '/a/b', version='V1') def test_get_children(self): client = self.client client.ensure_path('/a/b/c') client.ensure_path('/a/b/d') self.assertEqual(client.get_children('/a'), ['b']) self.assertEqual(set(client.get_children('/a/b')), set(['c', 'd'])) self.assertEqual(client.get_children('/a/b/c'), []) def test_get_children2(self): client = self.client client.ensure_path('/a/b') children, stat = client.get_children('/a', include_data=True) value, stat2 = client.get('/a') self.assertEqual(children, ['b']) self.assertEqual(stat2.version, stat.version) def test_get_children2_many_nodes(self): client = self.client client.ensure_path('/a/b') client.ensure_path('/a/c') client.ensure_path('/a/d') children, stat = client.get_children('/a', include_data=True) value, stat2 = client.get('/a') self.assertEqual(set(children), set(['b', 'c', 'd'])) self.assertEqual(stat2.version, 
stat.version) def test_get_children_no_node(self): client = self.client self.assertRaises(NoNodeError, client.get_children, '/none') self.assertRaises(NoNodeError, client.get_children, '/none', include_data=True) def test_get_children_invalid_path(self): client = self.client self.assertRaises(ValueError, client.get_children, '../a') def test_get_children_invalid_arguments(self): client = self.client self.assertRaises(TypeError, client.get_children, ('a', 'b')) self.assertRaises(TypeError, client.get_children, 'a', watch=True) self.assertRaises(TypeError, client.get_children, 'a', include_data='yes') def test_invalid_auth(self): from kazoo.exceptions import AuthFailedError from kazoo.protocol.states import KeeperState client = self.client client.stop() client._state = KeeperState.AUTH_FAILED @raises(AuthFailedError) def testit(): client.get('/') testit() def test_client_state(self): from kazoo.protocol.states import KeeperState eq_(self.client.client_state, KeeperState.CONNECTED) dummy_dict = { 'aversion': 1, 'ctime': 0, 'cversion': 1, 'czxid': 110, 'dataLength': 1, 'ephemeralOwner': 'ben', 'mtime': 1, 'mzxid': 1, 'numChildren': 0, 'pzxid': 1, 'version': 1 } class TestClientTransactions(KazooTestCase): def setUp(self): KazooTestCase.setUp(self) ver = self.client.server_version() if ver[1] < 4: raise SkipTest("Must use zookeeper 3.4 or above") def test_basic_create(self): t = self.client.transaction() t.create('/freddy') t.create('/fred', ephemeral=True) t.create('/smith', sequence=True) results = t.commit() eq_(results[0], '/freddy') eq_(len(results), 3) self.assertTrue(results[2].startswith('/smith0')) def test_bad_creates(self): args_list = [(True,), ('/smith', 0), ('/smith', b'', 'bleh'), ('/smith', b'', None, 'fred'), ('/smith', b'', None, True, 'fred')] @raises(TypeError) def testit(args): t = self.client.transaction() t.create(*args) for args in args_list: testit(args) def test_default_acl(self): from kazoo.security import make_digest_acl username = uuid.uuid4().hex password = uuid.uuid4().hex digest_auth = "%s:%s" % (username, password) acl = make_digest_acl(username, password, all=True) self.client.add_auth("digest", digest_auth) self.client.default_acl = (acl,) t = self.client.transaction() t.create('/freddy') results = t.commit() eq_(results[0], '/freddy') def test_basic_delete(self): self.client.create('/fred') t = self.client.transaction() t.delete('/fred') results = t.commit() eq_(results[0], True) def test_bad_deletes(self): args_list = [(True,), ('/smith', 'woops'), ] @raises(TypeError) def testit(args): t = self.client.transaction() t.delete(*args) for args in args_list: testit(args) def test_set(self): self.client.create('/fred', b'01') t = self.client.transaction() t.set_data('/fred', b'oops') t.commit() res = self.client.get('/fred') eq_(res[0], b'oops') def test_bad_sets(self): args_list = [(42, 52), ('/smith', False), ('/smith', b'', 'oops')] @raises(TypeError) def testit(args): t = self.client.transaction() t.set_data(*args) for args in args_list: testit(args) def test_check(self): self.client.create('/fred') version = self.client.get('/fred')[1].version t = self.client.transaction() t.check('/fred', version) t.create('/blah') results = t.commit() eq_(results[0], True) eq_(results[1], '/blah') def test_bad_checks(self): args_list = [(42, 52), ('/smith', 'oops')] @raises(TypeError) def testit(args): t = self.client.transaction() t.check(*args) for args in args_list: testit(args) def test_bad_transaction(self): from kazoo.exceptions import RolledBackError, NoNodeError t 
= self.client.transaction() t.create('/fred') t.delete('/smith') results = t.commit() eq_(results[0].__class__, RolledBackError) eq_(results[1].__class__, NoNodeError) def test_bad_commit(self): t = self.client.transaction() @raises(ValueError) def testit(): t.commit() t.committed = True testit() def test_bad_context(self): @raises(TypeError) def testit(): with self.client.transaction() as t: t.check(4232) testit() def test_context(self): with self.client.transaction() as t: t.create('/smith', b'32') eq_(self.client.get('/smith')[0], b'32') class TestCallbacks(unittest.TestCase): def test_session_callback_states(self): from kazoo.protocol.states import KazooState, KeeperState from kazoo.client import KazooClient client = KazooClient() client._handle = 1 client._live.set() result = client._session_callback(KeeperState.CONNECTED) eq_(result, None) # Now with stopped client._stopped.set() result = client._session_callback(KeeperState.CONNECTED) eq_(result, None) # Test several state transitions client._stopped.clear() client.start_async = lambda: True client._session_callback(KeeperState.CONNECTED) eq_(client.state, KazooState.CONNECTED) client._session_callback(KeeperState.AUTH_FAILED) eq_(client.state, KazooState.LOST) client._handle = 1 client._session_callback(-250) eq_(client.state, KazooState.SUSPENDED) class TestNonChrootClient(KazooTestCase): def test_create(self): client = self._get_nonchroot_client() self.assertEqual(client.chroot, '') client.start() node = uuid.uuid4().hex path = client.create(node, ephemeral=True) client.delete(path) client.stop() def test_unchroot(self): client = self._get_nonchroot_client() client.chroot = '/a' self.assertEquals(client.unchroot('/a/b'), '/b') self.assertEquals(client.unchroot('/b/c'), '/b/c') kazoo-1.2.1/kazoo/tests/test_connection.py000066400000000000000000000222041217652145400207020ustar00rootroot00000000000000from collections import namedtuple import os import errno import threading import time import uuid import struct from nose import SkipTest from nose.tools import eq_ from nose.tools import raises import mock from kazoo.exceptions import ConnectionLoss from kazoo.protocol.serialization import ( Connect, int_struct, write_string, ) from kazoo.protocol.states import KazooState from kazoo.protocol.connection import _CONNECTION_DROP from kazoo.testing import KazooTestCase from kazoo.tests.util import wait class Delete(namedtuple('Delete', 'path version')): type = 2 def serialize(self): b = bytearray() b.extend(write_string(self.path)) b.extend(int_struct.pack(self.version)) return b @classmethod def deserialize(self, bytes, offset): raise ValueError("oh my") class TestConnectionHandler(KazooTestCase): def test_bad_deserialization(self): async_object = self.client.handler.async_result() self.client._queue.append((Delete(self.client.chroot, -1), async_object)) os.write(self.client._connection._write_pipe, b'\0') @raises(ValueError) def testit(): async_object.get() testit() def test_with_bad_sessionid(self): ev = threading.Event() def expired(state): if state == KazooState.CONNECTED: ev.set() password = os.urandom(16) client = self._get_client(client_id=(82838284824, password)) client.add_listener(expired) client.start() try: ev.wait(15) eq_(ev.is_set(), True) finally: client.stop() def test_connection_read_timeout(self): client = self.client ev = threading.Event() path = "/" + uuid.uuid4().hex handler = client.handler _select = handler.select _socket = client._connection._socket def delayed_select(*args, **kwargs): result = _select(*args, 
**kwargs) if len(args[0]) == 1 and _socket in args[0]: # for any socket read, simulate a timeout return [], [], [] return result def back(state): if state == KazooState.CONNECTED: ev.set() client.add_listener(back) client.create(path, b"1") try: handler.select = delayed_select self.assertRaises(ConnectionLoss, client.get, path) finally: handler.select = _select # the client reconnects automatically ev.wait(5) eq_(ev.is_set(), True) eq_(client.get(path)[0], b"1") def test_connection_write_timeout(self): client = self.client ev = threading.Event() path = "/" + uuid.uuid4().hex handler = client.handler _select = handler.select _socket = client._connection._socket def delayed_select(*args, **kwargs): result = _select(*args, **kwargs) if _socket in args[1]: # for any socket write, simulate a timeout return [], [], [] return result def back(state): if state == KazooState.CONNECTED: ev.set() client.add_listener(back) try: handler.select = delayed_select self.assertRaises(ConnectionLoss, client.create, path) finally: handler.select = _select # the client reconnects automatically ev.wait(5) eq_(ev.is_set(), True) eq_(client.exists(path), None) def test_connection_deserialize_fail(self): client = self.client ev = threading.Event() path = "/" + uuid.uuid4().hex handler = client.handler _select = handler.select _socket = client._connection._socket def delayed_select(*args, **kwargs): result = _select(*args, **kwargs) if _socket in args[1]: # for any socket write, simulate a timeout return [], [], [] return result def back(state): if state == KazooState.CONNECTED: ev.set() client.add_listener(back) deserialize_ev = threading.Event() def bad_deserialize(bytes, offset): deserialize_ev.set() raise struct.error() # force the connection to die but, on reconnect, cause the # server response to be non-deserializable. ensure that the client # continues to retry. This partially reproduces a rare bug seen # in production. with mock.patch.object(Connect, 'deserialize') as mock_deserialize: mock_deserialize.side_effect = bad_deserialize try: handler.select = delayed_select self.assertRaises(ConnectionLoss, client.create, path) finally: handler.select = _select # the client reconnects automatically but the first attempt will # hit a deserialize failure. wait for that. deserialize_ev.wait(5) eq_(deserialize_ev.is_set(), True) # this time should succeed ev.wait(5) eq_(ev.is_set(), True) eq_(client.exists(path), None) def test_connection_close(self): self.assertRaises(Exception, self.client.close) self.client.stop() self.client.close() # should be able to restart self.client.start() def test_connection_pipe(self): client = self.client read_pipe = client._connection._read_pipe write_pipe = client._connection._write_pipe assert read_pipe is not None assert write_pipe is not None # stop client and pipe should not yet be closed client.stop() assert read_pipe is not None assert write_pipe is not None os.fstat(read_pipe) os.fstat(write_pipe) # close client, and pipes should be client.close() try: os.fstat(read_pipe) except OSError as e: if not e.errno == errno.EBADF: raise else: self.fail("Expected read_pipe to be closed") try: os.fstat(write_pipe) except OSError as e: if not e.errno == errno.EBADF: raise else: self.fail("Expected write_pipe to be closed") # start client back up. 
should get a new, valid pipe client.start() read_pipe = client._connection._read_pipe write_pipe = client._connection._write_pipe assert read_pipe is not None assert write_pipe is not None os.fstat(read_pipe) os.fstat(write_pipe) def test_dirty_pipe(self): client = self.client read_pipe = client._connection._read_pipe write_pipe = client._connection._write_pipe # add a stray byte to the pipe and ensure that doesn't # blow up client. simulates case where some error leaves # a byte in the pipe which doesn't correspond to the # request queue. os.write(write_pipe, b'\0') # eventually this byte should disappear from pipe wait(lambda: client.handler.select([read_pipe], [], [], 0)[0] == []) class TestConnectionDrop(KazooTestCase): def test_connection_dropped(self): ev = threading.Event() def back(state): if state == KazooState.CONNECTED: ev.set() # create a node with a large value and stop the ZK node path = "/" + uuid.uuid4().hex self.client.create(path) self.client.add_listener(back) result = self.client.set_async(path, b'a' * 1000 * 1024) self.client._call(_CONNECTION_DROP, None) self.assertRaises(ConnectionLoss, result.get) # we have a working connection to a new node ev.wait(30) eq_(ev.is_set(), True) class TestReadOnlyMode(KazooTestCase): def setUp(self): self.setup_zookeeper(read_only=True) ver = self.client.server_version() if ver[1] < 4: raise SkipTest("Must use zookeeper 3.4 or above") def tearDown(self): self.client.stop() def test_read_only(self): from kazoo.exceptions import NotReadOnlyCallError from kazoo.protocol.states import KeeperState client = self.client states = [] ev = threading.Event() @client.add_listener def listen(state): states.append(state) if client.client_state == KeeperState.CONNECTED_RO: ev.set() try: self.cluster[1].stop() self.cluster[2].stop() ev.wait(6) eq_(ev.is_set(), True) eq_(client.client_state, KeeperState.CONNECTED_RO) # Test read only command eq_(client.get_children('/'), []) # Test error with write command @raises(NotReadOnlyCallError) def testit(): client.create('/fred') testit() # Wait for a ping time.sleep(15) finally: client.remove_listener(listen) self.cluster[1].run() self.cluster[2].run() kazoo-1.2.1/kazoo/tests/test_counter.py000066400000000000000000000015631217652145400202270ustar00rootroot00000000000000import uuid from nose.tools import eq_ from kazoo.testing import KazooTestCase class KazooCounterTests(KazooTestCase): def _makeOne(self, **kw): path = "/" + uuid.uuid4().hex return self.client.Counter(path, **kw) def test_int_counter(self): counter = self._makeOne() eq_(counter.value, 0) counter += 2 counter + 1 eq_(counter.value, 3) counter -= 3 counter - 1 eq_(counter.value, -1) def test_float_counter(self): counter = self._makeOne(default=0.0) eq_(counter.value, 0.0) counter += 2.1 eq_(counter.value, 2.1) counter -= 3.1 eq_(counter.value, -1.0) def test_errors(self): counter = self._makeOne() self.assertRaises(TypeError, counter.__add__, 2.1) self.assertRaises(TypeError, counter.__add__, b"a") kazoo-1.2.1/kazoo/tests/test_election.py000066400000000000000000000110321217652145400203420ustar00rootroot00000000000000import uuid import sys import threading from nose.tools import eq_ from kazoo.testing import KazooTestCase from kazoo.tests.util import wait class UniqueError(Exception): """Error raised only by test leader function """ class KazooElectionTests(KazooTestCase): def setUp(self): super(KazooElectionTests, self).setUp() self.path = "/" + uuid.uuid4().hex self.condition = threading.Condition() # election contenders set these when elected. 
The exit event is set by # the test to make the leader exit. self.leader_id = None self.exit_event = None # tests set this before the event to make the leader raise an error self.raise_exception = False # set by a worker thread when an unexpected error is hit. # better way to do this? self.thread_exc_info = None def _spawn_contender(self, contender_id, election): thread = threading.Thread(target=self._election_thread, args=(contender_id, election)) thread.daemon = True thread.start() return thread def _election_thread(self, contender_id, election): try: election.run(self._leader_func, contender_id) except UniqueError: if not self.raise_exception: self.thread_exc_info = sys.exc_info() except Exception: self.thread_exc_info = sys.exc_info() else: if self.raise_exception: e = Exception("expected leader func to raise exception") self.thread_exc_info = (Exception, e, None) def _leader_func(self, name): exit_event = threading.Event() with self.condition: self.exit_event = exit_event self.leader_id = name self.condition.notify_all() exit_event.wait(45) if self.raise_exception: raise UniqueError("expected error in the leader function") def _check_thread_error(self): if self.thread_exc_info: t, o, tb = self.thread_exc_info raise t(o) def test_election(self): elections = {} threads = {} for _ in range(3): contender = "c" + uuid.uuid4().hex elections[contender] = self.client.Election(self.path, contender) threads[contender] = self._spawn_contender(contender, elections[contender]) # wait for a leader to be elected times = 0 with self.condition: while not self.leader_id: self.condition.wait(5) times += 1 if times > 5: raise Exception("Still not a leader: lid: %s", self.leader_id) election = self.client.Election(self.path) # make sure all contenders are in the pool wait(lambda: len(election.contenders()) == len(elections)) contenders = election.contenders() eq_(set(contenders), set(elections.keys())) # first one in list should be leader first_leader = contenders[0] eq_(first_leader, self.leader_id) # tell second one to cancel election. should never get elected. elections[contenders[1]].cancel() # make leader exit. third contender should be elected. self.exit_event.set() with self.condition: while self.leader_id == first_leader: self.condition.wait(45) eq_(self.leader_id, contenders[2]) self._check_thread_error() # make first contender re-enter the race threads[first_leader].join() threads[first_leader] = self._spawn_contender(first_leader, elections[first_leader]) # contender set should now be the current leader plus the first leader wait(lambda: len(election.contenders()) == 2) contenders = election.contenders() eq_(set(contenders), set([self.leader_id, first_leader])) # make current leader raise an exception. 
first should be reelected self.raise_exception = True self.exit_event.set() with self.condition: while self.leader_id != first_leader: self.condition.wait(45) eq_(self.leader_id, first_leader) self._check_thread_error() self.exit_event.set() for thread in threads.values(): thread.join() self._check_thread_error() def test_bad_func(self): election = self.client.Election(self.path) self.assertRaises(ValueError, election.run, "not a callable") kazoo-1.2.1/kazoo/tests/test_exceptions.py000066400000000000000000000012241217652145400207230ustar00rootroot00000000000000from unittest import TestCase class ExceptionsTestCase(TestCase): def _get(self): from kazoo import exceptions return exceptions def test_backwards_alias(self): module = self._get() self.assertTrue(getattr(module, 'NoNodeException')) self.assertTrue(module.NoNodeException, module.NoNodeError) def test_exceptions_code(self): module = self._get() exc_8 = module.EXCEPTIONS[-8] self.assertTrue(isinstance(exc_8(), module.BadArgumentsError)) def test_invalid_code(self): module = self._get() self.assertRaises(RuntimeError, module.EXCEPTIONS.__getitem__, 666) kazoo-1.2.1/kazoo/tests/test_gevent_handler.py000066400000000000000000000103461217652145400215340ustar00rootroot00000000000000import unittest from nose import SkipTest from nose.tools import eq_ from nose.tools import raises from kazoo.client import KazooClient from kazoo.exceptions import NoNodeError from kazoo.protocol.states import Callback from kazoo.testing import KazooTestCase from kazoo.tests import test_client class TestGeventHandler(unittest.TestCase): def setUp(self): try: import gevent except ImportError: raise SkipTest('gevent not available.') def _makeOne(self, *args): from kazoo.handlers.gevent import SequentialGeventHandler return SequentialGeventHandler(*args) def _getAsync(self, *args): from kazoo.handlers.gevent import AsyncResult return AsyncResult def _getEvent(self): from gevent.event import Event return Event def test_proper_threading(self): h = self._makeOne() h.start() assert isinstance(h.event_object(), self._getEvent()) def test_matching_async(self): h = self._makeOne() h.start() async = self._getAsync() assert isinstance(h.async_result(), async) def test_exception_raising(self): h = self._makeOne() @raises(h.timeout_exception) def testit(): raise h.timeout_exception("This is a timeout") testit() def test_exception_in_queue(self): h = self._makeOne() h.start() ev = self._getEvent()() def func(): ev.set() raise ValueError('bang') call1 = Callback('completion', func, ()) h.dispatch_callback(call1) ev.wait() def test_queue_empty_exception(self): from gevent.queue import Empty h = self._makeOne() h.start() ev = self._getEvent()() def func(): ev.set() raise Empty() call1 = Callback('completion', func, ()) h.dispatch_callback(call1) ev.wait() class TestBasicGeventClient(KazooTestCase): def setUp(self): try: import gevent except ImportError: raise SkipTest('gevent not available.') KazooTestCase.setUp(self) def _makeOne(self, *args): from kazoo.handlers.gevent import SequentialGeventHandler return SequentialGeventHandler(*args) def _getEvent(self): from gevent.event import Event return Event def test_start(self): client = self._get_client(handler=self._makeOne()) client.start() self.assertEqual(client.state, 'CONNECTED') client.stop() def test_start_stop_double(self): client = self._get_client(handler=self._makeOne()) client.start() self.assertEqual(client.state, 'CONNECTED') client.handler.start() client.handler.stop() client.stop() def test_basic_commands(self): 
client = self._get_client(handler=self._makeOne()) client.start() self.assertEqual(client.state, 'CONNECTED') client.create('/anode', 'fred') eq_(client.get('/anode')[0], 'fred') eq_(client.delete('/anode'), True) eq_(client.exists('/anode'), None) client.stop() def test_failures(self): client = self._get_client(handler=self._makeOne()) client.start() self.assertRaises(NoNodeError, client.get, '/none') client.stop() def test_data_watcher(self): client = self._get_client(handler=self._makeOne()) client.start() client.ensure_path('/some/node') ev = self._getEvent()() @client.DataWatch('/some/node') def changed(d, stat): ev.set() ev.wait() ev.clear() client.set('/some/node', 'newvalue') ev.wait() client.stop() class TestGeventClient(test_client.TestClient): def setUp(self): try: import gevent except ImportError: raise SkipTest('gevent not available.') KazooTestCase.setUp(self) def _makeOne(self, *args): from kazoo.handlers.gevent import SequentialGeventHandler return SequentialGeventHandler(*args) def _get_client(self, **kwargs): kwargs["handler"] = self._makeOne() return KazooClient(self.hosts, **kwargs) kazoo-1.2.1/kazoo/tests/test_lock.py000066400000000000000000000357731217652145400175120ustar00rootroot00000000000000import uuid import threading from nose.tools import eq_, ok_ from kazoo.exceptions import CancelledError from kazoo.exceptions import LockTimeout from kazoo.testing import KazooTestCase from kazoo.tests.util import wait class KazooLockTests(KazooTestCase): def setUp(self): super(KazooLockTests, self).setUp() self.lockpath = "/" + uuid.uuid4().hex self.condition = threading.Condition() self.released = threading.Event() self.active_thread = None self.cancelled_threads = [] def _thread_lock_acquire_til_event(self, name, lock, event): try: with lock: with self.condition: eq_(self.active_thread, None) self.active_thread = name self.condition.notify_all() event.wait() with self.condition: eq_(self.active_thread, name) self.active_thread = None self.condition.notify_all() self.released.set() except CancelledError: with self.condition: self.cancelled_threads.append(name) self.condition.notify_all() def test_lock_one(self): lock_name = uuid.uuid4().hex lock = self.client.Lock(self.lockpath, lock_name) event = threading.Event() thread = threading.Thread(target=self._thread_lock_acquire_til_event, args=(lock_name, lock, event)) thread.start() lock2_name = uuid.uuid4().hex anotherlock = self.client.Lock(self.lockpath, lock2_name) # wait for any contender to show up on the lock wait(anotherlock.contenders) eq_(anotherlock.contenders(), [lock_name]) with self.condition: while self.active_thread != lock_name: self.condition.wait() # release the lock event.set() with self.condition: while self.active_thread: self.condition.wait() self.released.wait() thread.join() def test_lock(self): threads = [] names = ["contender" + str(i) for i in range(5)] contender_bits = {} for name in names: e = threading.Event() l = self.client.Lock(self.lockpath, name) t = threading.Thread(target=self._thread_lock_acquire_til_event, args=(name, l, e)) contender_bits[name] = (t, e) threads.append(t) # acquire the lock ourselves first to make the others line up lock = self.client.Lock(self.lockpath, "test") lock.acquire() for t in threads: t.start() # wait for everyone to line up on the lock wait(lambda: len(lock.contenders()) == 6) contenders = lock.contenders() eq_(contenders[0], "test") contenders = contenders[1:] remaining = list(contenders) # release the lock and contenders should claim it in order lock.release() 
        for contender in contenders:
            thread, event = contender_bits[contender]

            with self.condition:
                while not self.active_thread:
                    self.condition.wait()
                eq_(self.active_thread, contender)
            eq_(lock.contenders(), remaining)
            remaining = remaining[1:]

            event.set()

            with self.condition:
                while self.active_thread:
                    self.condition.wait()
        for thread in threads:
            thread.join()

    def test_lock_non_blocking(self):
        lock_name = uuid.uuid4().hex
        lock = self.client.Lock(self.lockpath, lock_name)
        event = threading.Event()

        thread = threading.Thread(target=self._thread_lock_acquire_til_event,
                                  args=(lock_name, lock, event))
        thread.start()

        lock1 = self.client.Lock(self.lockpath, lock_name)

        # wait for the thread to acquire the lock
        with self.condition:
            if not self.active_thread:
                self.condition.wait(5)

        ok_(not lock1.acquire(blocking=False))
        eq_(lock.contenders(), [lock_name])  # just one - itself

        event.set()
        thread.join()

    def test_lock_fail_first_call(self):
        event1 = threading.Event()
        lock1 = self.client.Lock(self.lockpath, "one")
        thread1 = threading.Thread(target=self._thread_lock_acquire_til_event,
                                   args=("one", lock1, event1))
        thread1.start()

        # wait for this thread to acquire the lock
        with self.condition:
            if not self.active_thread:
                self.condition.wait(5)
            eq_(self.active_thread, "one")
        eq_(lock1.contenders(), ["one"])
        event1.set()
        thread1.join()

    def test_lock_cancel(self):
        event1 = threading.Event()
        lock1 = self.client.Lock(self.lockpath, "one")
        thread1 = threading.Thread(target=self._thread_lock_acquire_til_event,
                                   args=("one", lock1, event1))
        thread1.start()

        # wait for this thread to acquire the lock
        with self.condition:
            if not self.active_thread:
                self.condition.wait(5)
            eq_(self.active_thread, "one")

        client2 = self._get_client()
        client2.start()
        event2 = threading.Event()
        lock2 = client2.Lock(self.lockpath, "two")
        thread2 = threading.Thread(target=self._thread_lock_acquire_til_event,
                                   args=("two", lock2, event2))
        thread2.start()

        # this one should block in acquire. check that it is a contender
        wait(lambda: len(lock2.contenders()) > 1)
        eq_(lock2.contenders(), ["one", "two"])

        lock2.cancel()
        with self.condition:
            if not "two" in self.cancelled_threads:
                self.condition.wait()
                assert "two" in self.cancelled_threads

        eq_(lock2.contenders(), ["one"])

        thread2.join()
        event1.set()
        thread1.join()
        client2.stop()

    def test_lock_double_calls(self):
        lock1 = self.client.Lock(self.lockpath, "one")
        lock1.acquire()
        lock1.acquire()
        lock1.release()
        lock1.release()

    def test_lock_reacquire(self):
        lock = self.client.Lock(self.lockpath, "one")
        lock.acquire()
        lock.release()
        lock.acquire()
        lock.release()

    def test_lock_timeout(self):
        timeout = 3
        e = threading.Event()
        started = threading.Event()

        # In the background thread, acquire the lock and wait thrice the time
        # that the main thread is going to wait to acquire the lock.
lock1 = self.client.Lock(self.lockpath, "one") def _thread(lock, event, timeout): with lock: started.set() event.wait(timeout) if not event.isSet(): # Eventually fail to avoid hanging the tests self.fail("lock2 never timed out") t = threading.Thread(target=_thread, args=(lock1, e, timeout * 3)) t.start() # Start the main thread's kazoo client and try to acquire the lock # but give up after `timeout` seconds client2 = self._get_client() client2.start() started.wait(5) self.assertTrue(started.isSet()) lock2 = client2.Lock(self.lockpath, "two") try: lock2.acquire(timeout=timeout) except LockTimeout: # A timeout is the behavior we're expecting, since the background # thread should still be holding onto the lock pass else: self.fail("Main thread unexpectedly acquired the lock") finally: # Cleanup e.set() t.join() client2.stop() class TestSemaphore(KazooTestCase): def setUp(self): super(TestSemaphore, self).setUp() self.lockpath = "/" + uuid.uuid4().hex self.condition = threading.Condition() self.released = threading.Event() self.active_thread = None self.cancelled_threads = [] def test_basic(self): sem1 = self.client.Semaphore(self.lockpath) sem1.acquire() sem1.release() def test_lock_one(self): sem1 = self.client.Semaphore(self.lockpath, max_leases=1) sem2 = self.client.Semaphore(self.lockpath, max_leases=1) started = threading.Event() event = threading.Event() sem1.acquire() def sema_one(): started.set() with sem2: event.set() thread = threading.Thread(target=sema_one, args=()) thread.start() started.wait(10) self.assertFalse(event.is_set()) sem1.release() event.wait(10) self.assert_(event.is_set()) thread.join() def test_non_blocking(self): sem1 = self.client.Semaphore( self.lockpath, identifier='sem1', max_leases=2) sem2 = self.client.Semaphore( self.lockpath, identifier='sem2', max_leases=2) sem3 = self.client.Semaphore( self.lockpath, identifier='sem3', max_leases=2) sem1.acquire() sem2.acquire() ok_(not sem3.acquire(blocking=False)) eq_(set(sem1.lease_holders()), set(['sem1', 'sem2'])) sem2.release() # the next line isn't required, but avoids timing issues in tests sem3.acquire() eq_(set(sem1.lease_holders()), set(['sem1', 'sem3'])) sem1.release() sem3.release() def test_non_blocking_release(self): sem1 = self.client.Semaphore( self.lockpath, identifier='sem1', max_leases=1) sem2 = self.client.Semaphore( self.lockpath, identifier='sem2', max_leases=1) sem1.acquire() sem2.acquire(blocking=False) # make sure there's no shutdown / cleanup error sem1.release() sem2.release() def test_holders(self): started = threading.Event() event = threading.Event() def sema_one(): with self.client.Semaphore(self.lockpath, 'fred', max_leases=1): started.set() event.wait() thread = threading.Thread(target=sema_one, args=()) thread.start() started.wait() sem1 = self.client.Semaphore(self.lockpath) holders = sem1.lease_holders() eq_(holders, ['fred']) event.set() thread.join() def test_semaphore_cancel(self): sem1 = self.client.Semaphore(self.lockpath, 'fred', max_leases=1) sem2 = self.client.Semaphore(self.lockpath, 'george', max_leases=1) sem1.acquire() started = threading.Event() event = threading.Event() def sema_one(): started.set() try: with sem2: started.set() except CancelledError: event.set() thread = threading.Thread(target=sema_one, args=()) thread.start() started.wait() eq_(sem1.lease_holders(), ['fred']) eq_(event.is_set(), False) sem2.cancel() event.wait() eq_(event.is_set(), True) thread.join() def test_multiple_acquire_and_release(self): sem1 = self.client.Semaphore(self.lockpath, 'fred', 
max_leases=1) sem1.acquire() sem1.acquire() eq_(True, sem1.release()) eq_(False, sem1.release()) def test_handle_session_loss(self): expire_semaphore = self.client.Semaphore(self.lockpath, 'fred', max_leases=1) client = self._get_client() client.start() lh_semaphore = client.Semaphore(self.lockpath, 'george', max_leases=1) lh_semaphore.acquire() started = threading.Event() event = threading.Event() event2 = threading.Event() def sema_one(): started.set() with expire_semaphore: event.set() event2.wait() thread = threading.Thread(target=sema_one, args=()) thread.start() started.wait() eq_(lh_semaphore.lease_holders(), ['george']) # Fired in a separate thread to make sure we can see the effect expired = threading.Event() def expire(): self.expire_session() expired.set() thread = threading.Thread(target=expire, args=()) thread.start() expire_semaphore.wake_event.wait() expired.wait() lh_semaphore.release() client.stop() event.wait(5) eq_(expire_semaphore.lease_holders(), ['fred']) event2.set() thread.join() def test_inconsistent_max_leases(self): sem1 = self.client.Semaphore(self.lockpath, max_leases=1) sem2 = self.client.Semaphore(self.lockpath, max_leases=2) sem1.acquire() self.assertRaises(ValueError, sem2.acquire) def test_inconsistent_max_leases_other_data(self): sem1 = self.client.Semaphore(self.lockpath, max_leases=1) sem2 = self.client.Semaphore(self.lockpath, max_leases=2) self.client.ensure_path(self.lockpath) self.client.set(self.lockpath, b'a$') sem1.acquire() # sem2 thinks it's ok to have two lease holders ok_(sem2.acquire(blocking=False)) def test_reacquire(self): lock = self.client.Semaphore(self.lockpath) lock.acquire() lock.release() lock.acquire() lock.release() def test_acquire_after_cancelled(self): lock = self.client.Semaphore(self.lockpath) self.assertTrue(lock.acquire()) self.assertTrue(lock.release()) lock.cancel() self.assertTrue(lock.cancelled) self.assertTrue(lock.acquire()) def test_timeout(self): timeout = 3 e = threading.Event() started = threading.Event() # In the background thread, acquire the lock and wait thrice the time # that the main thread is going to wait to acquire the lock. 
sem1 = self.client.Semaphore(self.lockpath, "one") def _thread(sem, event, timeout): with sem: started.set() event.wait(timeout) if not event.isSet(): # Eventually fail to avoid hanging the tests self.fail("sem2 never timed out") t = threading.Thread(target=_thread, args=(sem1, e, timeout * 3)) t.start() # Start the main thread's kazoo client and try to acquire the lock # but give up after `timeout` seconds client2 = self._get_client() client2.start() started.wait(5) self.assertTrue(started.isSet()) sem2 = client2.Semaphore(self.lockpath, "two") try: sem2.acquire(timeout=timeout) except LockTimeout: # A timeout is the behavior we're expecting, since the background # thread will still be holding onto the lock e.set() finally: # Cleanup t.join() client2.stop() kazoo-1.2.1/kazoo/tests/test_partitioner.py000066400000000000000000000061531217652145400211100ustar00rootroot00000000000000import uuid import time from nose.tools import eq_ from kazoo.testing import KazooTestCase from kazoo.recipe.partitioner import PartitionState class KazooPartitionerTests(KazooTestCase): def setUp(self): super(KazooPartitionerTests, self).setUp() self.path = "/" + uuid.uuid4().hex def test_party_of_one(self): partitioner = self.client.SetPartitioner( self.path, set=(1, 2, 3), time_boundary=0.2) partitioner.wait_for_acquire(14) eq_(partitioner.state, PartitionState.ACQUIRED) eq_(list(partitioner), [1, 2, 3]) partitioner.finish() def test_party_of_two(self): partitioners = [self.client.SetPartitioner(self.path, (1, 2), identifier="p%s" % i, time_boundary=0.2) for i in range(2)] partitioners[0].wait_for_acquire(14) partitioners[1].wait_for_acquire(14) eq_(list(partitioners[0]), [1]) eq_(list(partitioners[1]), [2]) partitioners[0].finish() time.sleep(0.1) eq_(partitioners[1].release, True) partitioners[1].finish() def test_party_expansion(self): partitioners = [self.client.SetPartitioner(self.path, (1, 2, 3), identifier="p%s" % i, time_boundary=0.2) for i in range(2)] partitioners[0].wait_for_acquire(14) partitioners[1].wait_for_acquire(14) eq_(partitioners[0].state, PartitionState.ACQUIRED) eq_(partitioners[1].state, PartitionState.ACQUIRED) eq_(list(partitioners[0]), [1, 3]) eq_(list(partitioners[1]), [2]) # Add another partition, wait till they settle partitioners.append(self.client.SetPartitioner(self.path, (1, 2, 3), identifier="p2", time_boundary=0.2)) time.sleep(0.1) eq_(partitioners[0].release, True) for p in partitioners[:-1]: p.release_set() for p in partitioners: p.wait_for_acquire(14) eq_(list(partitioners[0]), [1]) eq_(list(partitioners[1]), [2]) eq_(list(partitioners[2]), [3]) for p in partitioners: p.finish() def test_more_members_than_set_items(self): partitioners = [self.client.SetPartitioner(self.path, (1,), identifier="p%s" % i, time_boundary=0.2) for i in range(2)] partitioners[0].wait_for_acquire(14) partitioners[1].wait_for_acquire(14) eq_(partitioners[0].state, PartitionState.ACQUIRED) eq_(partitioners[1].state, PartitionState.ACQUIRED) eq_(list(partitioners[0]), [1]) eq_(list(partitioners[1]), []) for p in partitioners: p.finish() def test_party_session_failure(self): partitioner = self.client.SetPartitioner( self.path, set=(1, 2, 3), time_boundary=0.2) partitioner.wait_for_acquire(14) eq_(partitioner.state, PartitionState.ACQUIRED) # simulate session failure partitioner._fail_out() partitioner.release_set() self.assertTrue(partitioner.failed) kazoo-1.2.1/kazoo/tests/test_party.py000066400000000000000000000045211217652145400177040ustar00rootroot00000000000000import uuid from nose.tools import 
eq_ from kazoo.testing import KazooTestCase class KazooPartyTests(KazooTestCase): def setUp(self): super(KazooPartyTests, self).setUp() self.path = "/" + uuid.uuid4().hex def test_party(self): parties = [self.client.Party(self.path, "p%s" % i) for i in range(5)] one_party = parties[0] eq_(list(one_party), []) eq_(len(one_party), 0) participants = set() for party in parties: party.join() participants.add(party.data.decode('utf-8')) eq_(set(party), participants) eq_(len(party), len(participants)) for party in parties: party.leave() participants.remove(party.data.decode('utf-8')) eq_(set(party), participants) eq_(len(party), len(participants)) def test_party_reuse_node(self): party = self.client.Party(self.path, "p1") self.client.ensure_path(self.path) self.client.create(party.create_path) party.join() self.assertTrue(party.participating) party.leave() self.assertFalse(party.participating) self.assertEqual(len(party), 0) def test_party_vanishing_node(self): party = self.client.Party(self.path, "p1") party.join() self.assertTrue(party.participating) self.client.delete(party.create_path) party.leave() self.assertFalse(party.participating) self.assertEqual(len(party), 0) class KazooShallowPartyTests(KazooTestCase): def setUp(self): super(KazooShallowPartyTests, self).setUp() self.path = "/" + uuid.uuid4().hex def test_party(self): parties = [self.client.ShallowParty(self.path, "p%s" % i) for i in range(5)] one_party = parties[0] eq_(list(one_party), []) eq_(len(one_party), 0) participants = set() for party in parties: party.join() participants.add(party.data.decode('utf-8')) eq_(set(party), participants) eq_(len(party), len(participants)) for party in parties: party.leave() participants.remove(party.data.decode('utf-8')) eq_(set(party), participants) eq_(len(party), len(participants)) kazoo-1.2.1/kazoo/tests/test_paths.py000066400000000000000000000060501217652145400176630ustar00rootroot00000000000000import sys from unittest import TestCase from kazoo.protocol import paths if sys.version_info > (3, ): # pragma: nocover def u(s): return s else: # pragma: nocover def u(s): return unicode(s, "unicode_escape") class NormPathTestCase(TestCase): def test_normpath(self): self.assertEqual(paths.normpath('/a/b'), '/a/b') def test_normpath_empty(self): self.assertEqual(paths.normpath(''), '') def test_normpath_unicode(self): self.assertEqual(paths.normpath(u('/\xe4/b')), u('/\xe4/b')) def test_normpath_dots(self): self.assertEqual(paths.normpath('/a./b../c'), '/a./b../c') def test_normpath_slash(self): self.assertEqual(paths.normpath('/'), '/') def test_normpath_multiple_slashes(self): self.assertEqual(paths.normpath('//'), '/') self.assertEqual(paths.normpath('//a/b'), '/a/b') self.assertEqual(paths.normpath('/a//b//'), '/a/b') self.assertEqual(paths.normpath('//a////b///c/'), '/a/b/c') def test_normpath_relative(self): self.assertRaises(ValueError, paths.normpath, './a/b') self.assertRaises(ValueError, paths.normpath, '/a/../b') class JoinTestCase(TestCase): def test_join(self): self.assertEqual(paths.join('/a'), '/a') self.assertEqual(paths.join('/a', 'b/'), '/a/b/') self.assertEqual(paths.join('/a', 'b', 'c'), '/a/b/c') def test_join_empty(self): self.assertEqual(paths.join(''), '') self.assertEqual(paths.join('', 'a', 'b'), 'a/b') self.assertEqual(paths.join('/a', '', 'b/', 'c'), '/a/b/c') def test_join_absolute(self): self.assertEqual(paths.join('/a/b', '/c'), '/c') class IsAbsTestCase(TestCase): def test_isabs(self): self.assertTrue(paths.isabs('/')) self.assertTrue(paths.isabs('/a')) 
self.assertTrue(paths.isabs('/a//b/c')) self.assertTrue(paths.isabs('//a/b')) def test_isabs_false(self): self.assertFalse(paths.isabs('')) self.assertFalse(paths.isabs('a/')) self.assertFalse(paths.isabs('a/../')) class BaseNameTestCase(TestCase): def test_basename(self): self.assertEquals(paths.basename(''), '') self.assertEquals(paths.basename('/'), '') self.assertEquals(paths.basename('//a'), 'a') self.assertEquals(paths.basename('//a/'), '') self.assertEquals(paths.basename('/a/b.//c..'), 'c..') class PrefixRootTestCase(TestCase): def test_prefix_root(self): self.assertEquals(paths._prefix_root('/a/', 'b/c'), '/a/b/c') self.assertEquals(paths._prefix_root('/a/b', 'c/d'), '/a/b/c/d') self.assertEquals(paths._prefix_root('/a', '/b/c'), '/a/b/c') self.assertEquals(paths._prefix_root('/a', '//b/c.'), '/a/b/c.') class NormRootTestCase(TestCase): def test_norm_root(self): self.assertEquals(paths._norm_root(''), '/') self.assertEquals(paths._norm_root('/'), '/') self.assertEquals(paths._norm_root('//a'), '/a') self.assertEquals(paths._norm_root('//a./b'), '/a./b') kazoo-1.2.1/kazoo/tests/test_queue.py000066400000000000000000000122371217652145400176740ustar00rootroot00000000000000import uuid from nose import SkipTest from nose.tools import eq_, ok_ from kazoo.testing import KazooTestCase class KazooQueueTests(KazooTestCase): def _makeOne(self): path = "/" + uuid.uuid4().hex return self.client.Queue(path) def test_queue_validation(self): queue = self._makeOne() self.assertRaises(TypeError, queue.put, {}) self.assertRaises(TypeError, queue.put, b"one", b"100") self.assertRaises(TypeError, queue.put, b"one", 10.0) self.assertRaises(ValueError, queue.put, b"one", -100) self.assertRaises(ValueError, queue.put, b"one", 100000) def test_empty_queue(self): queue = self._makeOne() eq_(len(queue), 0) self.assertTrue(queue.get() is None) eq_(len(queue), 0) def test_queue(self): queue = self._makeOne() queue.put(b"one") queue.put(b"two") queue.put(b"three") eq_(len(queue), 3) eq_(queue.get(), b"one") eq_(queue.get(), b"two") eq_(queue.get(), b"three") eq_(len(queue), 0) def test_priority(self): queue = self._makeOne() queue.put(b"four", priority=101) queue.put(b"one", priority=0) queue.put(b"two", priority=0) queue.put(b"three", priority=10) eq_(queue.get(), b"one") eq_(queue.get(), b"two") eq_(queue.get(), b"three") eq_(queue.get(), b"four") class KazooLockingQueueTests(KazooTestCase): def setUp(self): KazooTestCase.setUp(self) ver = self.client.server_version() if ver[1] < 4: raise SkipTest("Must use zookeeper 3.4 or above") def _makeOne(self): path = "/" + uuid.uuid4().hex return self.client.LockingQueue(path) def test_queue_validation(self): queue = self._makeOne() self.assertRaises(TypeError, queue.put, {}) self.assertRaises(TypeError, queue.put, b"one", b"100") self.assertRaises(TypeError, queue.put, b"one", 10.0) self.assertRaises(ValueError, queue.put, b"one", -100) self.assertRaises(ValueError, queue.put, b"one", 100000) self.assertRaises(TypeError, queue.put_all, {}) self.assertRaises(TypeError, queue.put_all, [{}]) self.assertRaises(TypeError, queue.put_all, [b"one"], b"100") self.assertRaises(TypeError, queue.put_all, [b"one"], 10.0) self.assertRaises(ValueError, queue.put_all, [b"one"], -100) self.assertRaises(ValueError, queue.put_all, [b"one"], 100000) def test_empty_queue(self): queue = self._makeOne() eq_(len(queue), 0) self.assertTrue(queue.get(0) is None) eq_(len(queue), 0) def test_queue(self): queue = self._makeOne() queue.put(b"one") queue.put_all([b"two", b"three"]) eq_(len(queue), 
3) ok_(not queue.consume()) ok_(not queue.holds_lock()) eq_(queue.get(1), b"one") ok_(queue.holds_lock()) # Without consuming, should return the same element eq_(queue.get(1), b"one") ok_(queue.consume()) ok_(not queue.holds_lock()) eq_(queue.get(1), b"two") ok_(queue.holds_lock()) ok_(queue.consume()) ok_(not queue.holds_lock()) eq_(queue.get(1), b"three") ok_(queue.holds_lock()) ok_(queue.consume()) ok_(not queue.holds_lock()) ok_(not queue.consume()) eq_(len(queue), 0) def test_consume(self): queue = self._makeOne() queue.put(b"one") ok_(not queue.consume()) queue.get(.1) ok_(queue.consume()) ok_(not queue.consume()) def test_holds_lock(self): queue = self._makeOne() ok_(not queue.holds_lock()) queue.put(b"one") queue.get(.1) ok_(queue.holds_lock()) queue.consume() ok_(not queue.holds_lock()) def test_priority(self): queue = self._makeOne() queue.put(b"four", priority=101) queue.put(b"one", priority=0) queue.put(b"two", priority=0) queue.put(b"three", priority=10) eq_(queue.get(1), b"one") ok_(queue.consume()) eq_(queue.get(1), b"two") ok_(queue.consume()) eq_(queue.get(1), b"three") ok_(queue.consume()) eq_(queue.get(1), b"four") ok_(queue.consume()) def test_concurrent_execution(self): queue = self._makeOne() value1 = [] value2 = [] value3 = [] event1 = self.client.handler.event_object() event2 = self.client.handler.event_object() event3 = self.client.handler.event_object() def get_concurrently(value, event): q = self.client.LockingQueue(queue.path) value.append(q.get(.1)) event.set() self.client.handler.spawn(get_concurrently, value1, event1) self.client.handler.spawn(get_concurrently, value2, event2) self.client.handler.spawn(get_concurrently, value3, event3) queue.put(b"one") event1.wait(.2) event2.wait(.2) event3.wait(.2) result = value1 + value2 + value3 eq_(result.count(b"one"), 1) eq_(result.count(None), 2) kazoo-1.2.1/kazoo/tests/test_retry.py000066400000000000000000000037411217652145400177150ustar00rootroot00000000000000import unittest from nose.tools import eq_ class TestRetrySleeper(unittest.TestCase): def _pass(self): pass def _fail(self, times=1): from kazoo.retry import ForceRetryError scope = dict(times=0) def inner(): if scope['times'] >= times: pass else: scope['times'] += 1 raise ForceRetryError('Failed!') return inner def _makeOne(self, *args, **kwargs): from kazoo.retry import KazooRetry return KazooRetry(*args, **kwargs) def test_reset(self): retry = self._makeOne(delay=0, max_tries=2) retry(self._fail()) eq_(retry._attempts, 1) retry.reset() eq_(retry._attempts, 0) def test_too_many_tries(self): from kazoo.retry import RetryFailedError retry = self._makeOne(delay=0) self.assertRaises(RetryFailedError, retry, self._fail(times=999)) eq_(retry._attempts, 1) def test_maximum_delay(self): def sleep_func(_time): pass retry = self._makeOne(delay=10, max_tries=100, sleep_func=sleep_func) retry(self._fail(times=10)) self.assertTrue(retry._cur_delay < 4000, retry._cur_delay) # gevent's sleep function is picky about the type eq_(type(retry._cur_delay), float) class TestKazooRetry(unittest.TestCase): def _makeOne(self, **kw): from kazoo.retry import KazooRetry return KazooRetry(**kw) def test_connection_closed(self): from kazoo.exceptions import ConnectionClosedError retry = self._makeOne() def testit(): raise ConnectionClosedError() self.assertRaises(ConnectionClosedError, retry, testit) def test_session_expired(self): from kazoo.exceptions import SessionExpiredError retry = self._makeOne(max_tries=1) def testit(): raise SessionExpiredError() self.assertRaises(Exception, 
retry, testit) kazoo-1.2.1/kazoo/tests/test_security.py000066400000000000000000000024511217652145400204140ustar00rootroot00000000000000import unittest from nose.tools import eq_ from kazoo.security import Permissions class TestACL(unittest.TestCase): def _makeOne(self, *args, **kwargs): from kazoo.security import make_acl return make_acl(*args, **kwargs) def test_read_acl(self): acl = self._makeOne("digest", ":", read=True) eq_(acl.perms & Permissions.READ, Permissions.READ) def test_all_perms(self): acl = self._makeOne("digest", ":", read=True, write=True, create=True, delete=True, admin=True) for perm in [Permissions.READ, Permissions.CREATE, Permissions.WRITE, Permissions.DELETE, Permissions.ADMIN]: eq_(acl.perms & perm, perm) def test_perm_listing(self): from kazoo.security import ACL f = ACL(15, 'fred') self.assert_('READ' in f.acl_list) self.assert_('WRITE' in f.acl_list) self.assert_('CREATE' in f.acl_list) self.assert_('DELETE' in f.acl_list) f = ACL(16, 'fred') self.assert_('ADMIN' in f.acl_list) f = ACL(31, 'george') self.assert_('ALL' in f.acl_list) def test_perm_repr(self): from kazoo.security import ACL f = ACL(16, 'fred') self.assert_("ACL(perms=16, acl_list=['ADMIN']" in repr(f)) kazoo-1.2.1/kazoo/tests/test_threading_handler.py000066400000000000000000000201621217652145400222060ustar00rootroot00000000000000import threading import unittest import mock from nose.tools import assert_raises from nose.tools import eq_ from nose.tools import raises class TestThreadingHandler(unittest.TestCase): def _makeOne(self, *args): from kazoo.handlers.threading import SequentialThreadingHandler return SequentialThreadingHandler(*args) def _getAsync(self, *args): from kazoo.handlers.threading import AsyncResult return AsyncResult def test_proper_threading(self): h = self._makeOne() h.start() # In Python 3.3 _Event is gone, before Event is function event_class = getattr(threading, '_Event', threading.Event) assert isinstance(h.event_object(), event_class) def test_matching_async(self): h = self._makeOne() h.start() async = self._getAsync() assert isinstance(h.async_result(), async) def test_exception_raising(self): h = self._makeOne() @raises(h.timeout_exception) def testit(): raise h.timeout_exception("This is a timeout") testit() def test_double_start_stop(self): h = self._makeOne() h.start() self.assertTrue(h._running) h.start() h.stop() h.stop() self.assertFalse(h._running) class TestThreadingAsync(unittest.TestCase): def _makeOne(self, *args): from kazoo.handlers.threading import AsyncResult return AsyncResult(*args) def _makeHandler(self): from kazoo.handlers.threading import SequentialThreadingHandler return SequentialThreadingHandler() def test_ready(self): mock_handler = mock.Mock() async = self._makeOne(mock_handler) eq_(async.ready(), False) async.set('val') eq_(async.ready(), True) eq_(async.successful(), True) eq_(async.exception, None) def test_callback_queued(self): mock_handler = mock.Mock() mock_handler.completion_queue = mock.Mock() async = self._makeOne(mock_handler) async.rawlink(lambda a: a) async.set('val') assert mock_handler.completion_queue.put.called def test_set_exception(self): mock_handler = mock.Mock() mock_handler.completion_queue = mock.Mock() async = self._makeOne(mock_handler) async.rawlink(lambda a: a) async.set_exception(ImportError('Error occured')) assert isinstance(async.exception, ImportError) assert mock_handler.completion_queue.put.called def test_get_wait_while_setting(self): mock_handler = mock.Mock() async = self._makeOne(mock_handler) lst = [] bv = 
threading.Event() cv = threading.Event() def wait_for_val(): bv.set() val = async.get() lst.append(val) cv.set() th = threading.Thread(target=wait_for_val) th.start() bv.wait() async.set('fred') cv.wait() eq_(lst, ['fred']) th.join() def test_get_with_nowait(self): mock_handler = mock.Mock() async = self._makeOne(mock_handler) timeout = self._makeHandler().timeout_exception @raises(timeout) def test_it(): async.get(block=False) test_it() @raises(timeout) def test_nowait(): async.get_nowait() test_nowait() def test_get_with_exception(self): mock_handler = mock.Mock() async = self._makeOne(mock_handler) lst = [] bv = threading.Event() cv = threading.Event() def wait_for_val(): bv.set() try: val = async.get() except ImportError: lst.append('oops') else: lst.append(val) cv.set() th = threading.Thread(target=wait_for_val) th.start() bv.wait() async.set_exception(ImportError) cv.wait() eq_(lst, ['oops']) th.join() def test_wait(self): mock_handler = mock.Mock() async = self._makeOne(mock_handler) lst = [] bv = threading.Event() cv = threading.Event() def wait_for_val(): bv.set() try: val = async.wait(10) except ImportError: lst.append('oops') else: lst.append(val) cv.set() th = threading.Thread(target=wait_for_val) th.start() bv.wait(10) async.set("fred") cv.wait(15) eq_(lst, [True]) th.join() def test_set_before_wait(self): mock_handler = mock.Mock() async = self._makeOne(mock_handler) lst = [] cv = threading.Event() async.set('fred') def wait_for_val(): val = async.get() lst.append(val) cv.set() th = threading.Thread(target=wait_for_val) th.start() cv.wait() eq_(lst, ['fred']) th.join() def test_set_exc_before_wait(self): mock_handler = mock.Mock() async = self._makeOne(mock_handler) lst = [] cv = threading.Event() async.set_exception(ImportError) def wait_for_val(): try: val = async.get() except ImportError: lst.append('ooops') else: lst.append(val) cv.set() th = threading.Thread(target=wait_for_val) th.start() cv.wait() eq_(lst, ['ooops']) th.join() def test_linkage(self): mock_handler = mock.Mock() async = self._makeOne(mock_handler) cv = threading.Event() lst = [] def add_on(): lst.append(True) def wait_for_val(): async.get() cv.set() th = threading.Thread(target=wait_for_val) th.start() async.rawlink(add_on) async.set('fred') assert mock_handler.completion_queue.put.called async.unlink(add_on) cv.wait() eq_(async.value, 'fred') th.join() def test_linkage_not_ready(self): mock_handler = mock.Mock() async = self._makeOne(mock_handler) lst = [] def add_on(): lst.append(True) async.set('fred') assert not mock_handler.completion_queue.called async.rawlink(add_on) assert mock_handler.completion_queue.put.called def test_link_and_unlink(self): mock_handler = mock.Mock() async = self._makeOne(mock_handler) lst = [] def add_on(): lst.append(True) async.rawlink(add_on) assert not mock_handler.completion_queue.put.called async.unlink(add_on) async.set('fred') assert not mock_handler.completion_queue.put.called def test_captured_exception(self): from kazoo.handlers.utils import capture_exceptions mock_handler = mock.Mock() async = self._makeOne(mock_handler) @capture_exceptions(async) def exceptional_function(): return 1/0 exceptional_function() assert_raises(ZeroDivisionError, async.get) def test_no_capture_exceptions(self): from kazoo.handlers.utils import capture_exceptions mock_handler = mock.Mock() async = self._makeOne(mock_handler) lst = [] def add_on(): lst.append(True) async.rawlink(add_on) @capture_exceptions(async) def regular_function(): return True regular_function() assert not 
mock_handler.completion_queue.put.called def test_wraps(self): from kazoo.handlers.utils import wrap mock_handler = mock.Mock() async = self._makeOne(mock_handler) lst = [] def add_on(result): lst.append(result.get()) async.rawlink(add_on) @wrap(async) def regular_function(): return 'hello' assert regular_function() == 'hello' assert mock_handler.completion_queue.put.called assert async.get() == 'hello' kazoo-1.2.1/kazoo/tests/test_watchers.py000066400000000000000000000266201217652145400203710ustar00rootroot00000000000000import time import threading import uuid from nose.tools import eq_ from nose.tools import raises from kazoo.protocol.states import EventType from kazoo.testing import KazooTestCase class KazooDataWatcherTests(KazooTestCase): def setUp(self): super(KazooDataWatcherTests, self).setUp() self.path = "/" + uuid.uuid4().hex self.client.ensure_path(self.path) def test_data_watcher(self): update = threading.Event() data = [True] # Make it a non-existent path self.path += 'f' @self.client.DataWatch(self.path) def changed(d, stat): data.pop() data.append(d) update.set() update.wait(10) eq_(data, [None]) update.clear() self.client.create(self.path, b'fred') update.wait(10) eq_(data[0], b'fred') update.clear() def test_data_watcher_with_event(self): # Test that the data watcher gets passed the event, if it # accepts three arguments update = threading.Event() data = [True] # Make it a non-existent path self.path += 'f' @self.client.DataWatch(self.path) def changed(d, stat, event): data.pop() data.append(event) update.set() update.wait(10) eq_(data, [None]) update.clear() self.client.create(self.path, b'fred') update.wait(10) eq_(data[0].type, EventType.CREATED) update.clear() def test_func_style_data_watch(self): update = threading.Event() data = [True] # Make it a non-existent path path = self.path + 'f' def changed(d, stat): data.pop() data.append(d) update.set() self.client.DataWatch(path, changed) update.wait(10) eq_(data, [None]) update.clear() self.client.create(path, b'fred') update.wait(10) eq_(data[0], b'fred') update.clear() def test_datawatch_across_session_expire(self): update = threading.Event() data = [True] @self.client.DataWatch(self.path) def changed(d, stat): data.pop() data.append(d) update.set() update.wait(10) eq_(data, [b""]) update.clear() self.expire_session() self.client.retry(self.client.set, self.path, b'fred') update.wait(25) eq_(data[0], b'fred') def test_func_stops(self): update = threading.Event() data = [True] self.path += "f" fail_through = [] @self.client.DataWatch(self.path) def changed(d, stat): data.pop() data.append(d) update.set() if fail_through: return False update.wait(10) eq_(data, [None]) update.clear() fail_through.append(True) self.client.create(self.path, b'fred') update.wait(10) eq_(data[0], b'fred') update.clear() self.client.set(self.path, b'asdfasdf') update.wait(0.2) eq_(data[0], b'fred') d, stat = self.client.get(self.path) eq_(d, b'asdfasdf') def test_no_such_node(self): args = [] @self.client.DataWatch("/some/path") def changed(d, stat): args.extend([d, stat]) eq_(args, [None, None]) def test_bad_watch_func2(self): counter = 0 @self.client.DataWatch(self.path) def changed(d, stat): if counter > 0: raise Exception("oops") raises(Exception)(changed) counter += 1 self.client.set(self.path, b'asdfasdf') def test_watcher_evaluating_to_false(self): class WeirdWatcher(list): def __call__(self, *args): self.called = True watcher = WeirdWatcher() self.client.DataWatch(self.path, watcher) self.client.set(self.path, b'mwahaha') 
self.assertTrue(watcher.called) def test_watcher_repeat_delete(self): a = [] ev = threading.Event() self.client.delete(self.path) @self.client.DataWatch(self.path) def changed(val, stat): a.append(val) ev.set() eq_(a, [None]) ev.wait(10) ev.clear() self.client.create(self.path, b'blah') ev.wait(10) eq_(ev.is_set(), True) ev.clear() eq_(a, [None, b'blah']) self.client.delete(self.path) ev.wait(10) eq_(ev.is_set(), True) ev.clear() eq_(a, [None, b'blah', None]) self.client.create(self.path, b'blah') ev.wait(10) eq_(ev.is_set(), True) ev.clear() eq_(a, [None, b'blah', None, b'blah']) def test_watcher_with_closing(self): a = [] ev = threading.Event() self.client.delete(self.path) @self.client.DataWatch(self.path) def changed(val, stat): a.append(val) ev.set() eq_(a, [None]) b = False try: self.client.stop() except: b = True eq_(b, False) class KazooChildrenWatcherTests(KazooTestCase): def setUp(self): super(KazooChildrenWatcherTests, self).setUp() self.path = "/" + uuid.uuid4().hex self.client.ensure_path(self.path) def test_child_watcher(self): update = threading.Event() all_children = ['fred'] @self.client.ChildrenWatch(self.path) def changed(children): while all_children: all_children.pop() all_children.extend(children) update.set() update.wait(10) eq_(all_children, []) update.clear() self.client.create(self.path + '/' + 'smith') update.wait(10) eq_(all_children, ['smith']) update.clear() self.client.create(self.path + '/' + 'george') update.wait(10) eq_(sorted(all_children), ['george', 'smith']) def test_child_watcher_with_event(self): update = threading.Event() events = [True] @self.client.ChildrenWatch(self.path, send_event=True) def changed(children, event): events.pop() events.append(event) update.set() update.wait(10) eq_(events, [None]) update.clear() self.client.create(self.path + '/' + 'smith') update.wait(10) eq_(events[0].type, EventType.CHILD) update.clear() def test_func_style_child_watcher(self): update = threading.Event() all_children = ['fred'] def changed(children): while all_children: all_children.pop() all_children.extend(children) update.set() self.client.ChildrenWatch(self.path, changed) update.wait(10) eq_(all_children, []) update.clear() self.client.create(self.path + '/' + 'smith') update.wait(10) eq_(all_children, ['smith']) update.clear() self.client.create(self.path + '/' + 'george') update.wait(10) eq_(sorted(all_children), ['george', 'smith']) def test_func_stops(self): update = threading.Event() all_children = ['fred'] fail_through = [] @self.client.ChildrenWatch(self.path) def changed(children): while all_children: all_children.pop() all_children.extend(children) update.set() if fail_through: return False update.wait(10) eq_(all_children, []) update.clear() fail_through.append(True) self.client.create(self.path + '/' + 'smith') update.wait(10) eq_(all_children, ['smith']) update.clear() self.client.create(self.path + '/' + 'george') update.wait(0.5) eq_(all_children, ['smith']) def test_child_watch_session_loss(self): update = threading.Event() all_children = ['fred'] @self.client.ChildrenWatch(self.path) def changed(children): while all_children: all_children.pop() all_children.extend(children) update.set() update.wait(10) eq_(all_children, []) update.clear() self.client.create(self.path + '/' + 'smith') update.wait(10) eq_(all_children, ['smith']) update.clear() self.expire_session() self.client.retry(self.client.create, self.path + '/' + 'george') update.wait(20) eq_(sorted(all_children), ['george', 'smith']) def test_child_stop_on_session_loss(self): 
update = threading.Event() all_children = ['fred'] @self.client.ChildrenWatch(self.path, allow_session_lost=False) def changed(children): while all_children: all_children.pop() all_children.extend(children) update.set() update.wait(10) eq_(all_children, []) update.clear() self.client.create(self.path + '/' + 'smith') update.wait(10) eq_(all_children, ['smith']) update.clear() self.expire_session() self.client.retry(self.client.create, self.path + '/' + 'george') update.wait(4) eq_(update.is_set(), False) eq_(all_children, ['smith']) children = self.client.get_children(self.path) eq_(sorted(children), ['george', 'smith']) def test_bad_children_watch_func(self): counter = 0 @self.client.ChildrenWatch(self.path) def changed(children): if counter > 0: raise Exception("oops") raises(Exception)(changed) counter += 1 self.client.create(self.path + '/' + 'smith') class KazooPatientChildrenWatcherTests(KazooTestCase): def setUp(self): super(KazooPatientChildrenWatcherTests, self).setUp() self.path = "/" + uuid.uuid4().hex def _makeOne(self, *args, **kwargs): from kazoo.recipe.watchers import PatientChildrenWatch return PatientChildrenWatch(*args, **kwargs) def test_watch(self): self.client.ensure_path(self.path) watcher = self._makeOne(self.client, self.path, 0.1) result = watcher.start() children, asy = result.get() eq_(len(children), 0) eq_(asy.ready(), False) self.client.create(self.path + '/' + 'fred') asy.get(timeout=1) eq_(asy.ready(), True) def test_exception(self): from kazoo.exceptions import NoNodeError watcher = self._makeOne(self.client, self.path, 0.1) result = watcher.start() @raises(NoNodeError) def testit(): result.get() testit() def test_watch_iterations(self): self.client.ensure_path(self.path) watcher = self._makeOne(self.client, self.path, 0.5) result = watcher.start() eq_(result.ready(), False) time.sleep(0.08) self.client.create(self.path + '/' + uuid.uuid4().hex) eq_(result.ready(), False) time.sleep(0.08) eq_(result.ready(), False) self.client.create(self.path + '/' + uuid.uuid4().hex) time.sleep(0.08) eq_(result.ready(), False) children, asy = result.get() eq_(len(children), 2) kazoo-1.2.1/kazoo/tests/util.py000066400000000000000000000063521217652145400164670ustar00rootroot00000000000000############################################################################## # # Copyright Zope Foundation and Contributors. # All Rights Reserved. # # This software is subject to the provisions of the Zope Public License, # Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution. # THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED # WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED # WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS # FOR A PARTICULAR PURPOSE. 
# ############################################################################## import logging import time class Handler(logging.Handler): def __init__(self, *names, **kw): logging.Handler.__init__(self) self.names = names self.records = [] self.setLoggerLevel(**kw) def setLoggerLevel(self, level=1): self.level = level self.oldlevels = {} def emit(self, record): self.records.append(record) def clear(self): del self.records[:] def install(self): for name in self.names: logger = logging.getLogger(name) self.oldlevels[name] = logger.level logger.setLevel(self.level) logger.addHandler(self) def uninstall(self): for name in self.names: logger = logging.getLogger(name) logger.setLevel(self.oldlevels[name]) logger.removeHandler(self) def __str__(self): return '\n'.join( [("%s %s\n %s" % (record.name, record.levelname, '\n'.join([line for line in record.getMessage().split('\n') if line.strip()]) ) ) for record in self.records] ) class InstalledHandler(Handler): def __init__(self, *names, **kw): Handler.__init__(self, *names, **kw) self.install() class Wait(object): class TimeOutWaitingFor(Exception): "A test condition timed out" timeout = 9 wait = .01 def __init__(self, timeout=None, wait=None, exception=None, getnow=(lambda: time.time), getsleep=(lambda: time.sleep)): if timeout is not None: self.timeout = timeout if wait is not None: self.wait = wait if exception is not None: self.TimeOutWaitingFor = exception self.getnow = getnow self.getsleep = getsleep def __call__(self, func=None, timeout=None, wait=None, message=None): if func is None: return lambda func: self(func, timeout, wait, message) if func(): return now = self.getnow() sleep = self.getsleep() if timeout is None: timeout = self.timeout if wait is None: wait = self.wait wait = float(wait) deadline = now() + timeout while 1: sleep(wait) if func(): return if now() > deadline: raise self.TimeOutWaitingFor( message or getattr(func, '__doc__') or getattr(func, '__name__') ) wait = Wait() kazoo-1.2.1/requirements.txt000066400000000000000000000001171217652145400161300ustar00rootroot00000000000000coverage==3.6 distribute==0.6.31 mock==1.0.1 nose==1.2.1 zope.interface==4.0.3 kazoo-1.2.1/requirements_gevent.txt000066400000000000000000000000201217652145400174710ustar00rootroot00000000000000greenlet==0.4.0 kazoo-1.2.1/requirements_sphinx.txt000066400000000000000000000001331217652145400175170ustar00rootroot00000000000000Jinja2==2.6 Pygments==1.5 Sphinx==1.1.3 docutils==0.9.1 repoze.sphinx.autointerface==0.7.1 kazoo-1.2.1/run_failure.py000066400000000000000000000010241217652145400155270ustar00rootroot00000000000000import os import sys def test(arg): return os.system('bin/nosetests -s -d -v %s' % arg) def main(args): if not args: print("Run as bin/python run_failure.py , for example: \n" "bin/python run_failure.py " "kazoo.tests.test_watchers:KazooChildrenWatcherTests") return arg = args[0] i = 0 while 1: i += 1 print('Run number: %s' % i) ret = test(arg) if ret != 0: break if __name__ == '__main__': main(sys.argv[1:]) kazoo-1.2.1/setup.cfg000066400000000000000000000001641217652145400144670ustar00rootroot00000000000000[egg_info] tag_build = dev [nosetests] where=kazoo nocapture=1 cover-package=kazoo cover-erase=1 cover-inclusive=1 kazoo-1.2.1/setup.py000066400000000000000000000042321217652145400143600ustar00rootroot00000000000000__version__ = '1.2.1' import os import sys from setuptools import setup, find_packages here = os.path.abspath(os.path.dirname(__file__)) with open(os.path.join(here, 'README.rst')) as f: README = f.read() with 
with open(os.path.join(here, 'CHANGES.rst')) as f:
    CHANGES = f.read()

PYTHON3 = sys.version_info > (3, )
PYPY = getattr(sys, 'pypy_version_info', False) and True or False

install_requires = [
    'zope.interface >= 3.8.0',  # has zope.interface.registry
]

tests_require = install_requires + [
    'coverage',
    'mock',
    'nose',
]

if not (PYTHON3 or PYPY):
    tests_require += [
        'gevent',
    ]

on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
if on_rtd:
    install_requires.extend([
        'gevent',
        'repoze.sphinx.autointerface',
    ])

setup(
    name='kazoo',
    version=__version__,
    description='Higher Level Zookeeper Client',
    long_description=README + '\n\n' + CHANGES,
    classifiers=[
        "Development Status :: 5 - Production/Stable",
        "License :: OSI Approved :: Apache Software License",
        "Intended Audience :: Developers",
        "Operating System :: OS Independent",
        "Programming Language :: Python",
        "Programming Language :: Python :: 2",
        "Programming Language :: Python :: 2.6",
        "Programming Language :: Python :: 2.7",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.2",
        "Programming Language :: Python :: 3.3",
        "Programming Language :: Python :: Implementation :: CPython",
        "Programming Language :: Python :: Implementation :: PyPy",
        "Topic :: Communications",
        "Topic :: System :: Distributed Computing",
        "Topic :: System :: Networking",
    ],
    keywords='zookeeper lock leader configuration',
    author="Kazoo team",
    author_email="python-zk@googlegroups.com",
    url="https://kazoo.readthedocs.org",
    license="Apache 2.0",
    packages=find_packages(),
    test_suite="kazoo.tests",
    include_package_data=True,
    zip_safe=False,
    install_requires=install_requires,
    tests_require=tests_require,
    extras_require={
        'test': tests_require,
    },
)
kazoo-1.2.1/sw/000077500000000000000000000000001217652145400132765ustar00rootroot00000000000000kazoo-1.2.1/sw/virtualenv.py000077500000000000000000003372321217652145400160630ustar00rootroot00000000000000#!/usr/bin/env python
"""Create a "virtual" Python installation
"""

# If you change the version here, change it in setup.py
# and docs/conf.py as well.
__version__ = "1.8.4" # following best practices virtualenv_version = __version__ # legacy, again import base64 import sys import os import codecs import optparse import re import shutil import logging import tempfile import zlib import errno import glob import distutils.sysconfig from distutils.util import strtobool import struct import subprocess if sys.version_info < (2, 5): print('ERROR: %s' % sys.exc_info()[1]) print('ERROR: this script requires Python 2.5 or greater.') sys.exit(101) try: set except NameError: from sets import Set as set try: basestring except NameError: basestring = str try: import ConfigParser except ImportError: import configparser as ConfigParser join = os.path.join py_version = 'python%s.%s' % (sys.version_info[0], sys.version_info[1]) is_jython = sys.platform.startswith('java') is_pypy = hasattr(sys, 'pypy_version_info') is_win = (sys.platform == 'win32') is_cygwin = (sys.platform == 'cygwin') is_darwin = (sys.platform == 'darwin') abiflags = getattr(sys, 'abiflags', '') user_dir = os.path.expanduser('~') if is_win: default_storage_dir = os.path.join(user_dir, 'virtualenv') else: default_storage_dir = os.path.join(user_dir, '.virtualenv') default_config_file = os.path.join(default_storage_dir, 'virtualenv.ini') if is_pypy: expected_exe = 'pypy' elif is_jython: expected_exe = 'jython' else: expected_exe = 'python' REQUIRED_MODULES = ['os', 'posix', 'posixpath', 'nt', 'ntpath', 'genericpath', 'fnmatch', 'locale', 'encodings', 'codecs', 'stat', 'UserDict', 'readline', 'copy_reg', 'types', 're', 'sre', 'sre_parse', 'sre_constants', 'sre_compile', 'zlib'] REQUIRED_FILES = ['lib-dynload', 'config'] majver, minver = sys.version_info[:2] if majver == 2: if minver >= 6: REQUIRED_MODULES.extend(['warnings', 'linecache', '_abcoll', 'abc']) if minver >= 7: REQUIRED_MODULES.extend(['_weakrefset']) if minver <= 3: REQUIRED_MODULES.extend(['sets', '__future__']) elif majver == 3: # Some extra modules are needed for Python 3, but different ones # for different versions. REQUIRED_MODULES.extend(['_abcoll', 'warnings', 'linecache', 'abc', 'io', '_weakrefset', 'copyreg', 'tempfile', 'random', '__future__', 'collections', 'keyword', 'tarfile', 'shutil', 'struct', 'copy', 'tokenize', 'token', 'functools', 'heapq', 'bisect', 'weakref', 'reprlib']) if minver >= 2: REQUIRED_FILES[-1] = 'config-%s' % majver if minver == 3: import sysconfig platdir = sysconfig.get_config_var('PLATDIR') REQUIRED_FILES.append(platdir) # The whole list of 3.3 modules is reproduced below - the current # uncommented ones are required for 3.3 as of now, but more may be # added as 3.3 development continues. 
REQUIRED_MODULES.extend([ #"aifc", #"antigravity", #"argparse", #"ast", #"asynchat", #"asyncore", "base64", #"bdb", #"binhex", #"bisect", #"calendar", #"cgi", #"cgitb", #"chunk", #"cmd", #"codeop", #"code", #"colorsys", #"_compat_pickle", #"compileall", #"concurrent", #"configparser", #"contextlib", #"cProfile", #"crypt", #"csv", #"ctypes", #"curses", #"datetime", #"dbm", #"decimal", #"difflib", #"dis", #"doctest", #"dummy_threading", "_dummy_thread", #"email", #"filecmp", #"fileinput", #"formatter", #"fractions", #"ftplib", #"functools", #"getopt", #"getpass", #"gettext", #"glob", #"gzip", "hashlib", #"heapq", "hmac", #"html", #"http", #"idlelib", #"imaplib", #"imghdr", "imp", "importlib", #"inspect", #"json", #"lib2to3", #"logging", #"macpath", #"macurl2path", #"mailbox", #"mailcap", #"_markupbase", #"mimetypes", #"modulefinder", #"multiprocessing", #"netrc", #"nntplib", #"nturl2path", #"numbers", #"opcode", #"optparse", #"os2emxpath", #"pdb", #"pickle", #"pickletools", #"pipes", #"pkgutil", #"platform", #"plat-linux2", #"plistlib", #"poplib", #"pprint", #"profile", #"pstats", #"pty", #"pyclbr", #"py_compile", #"pydoc_data", #"pydoc", #"_pyio", #"queue", #"quopri", #"reprlib", "rlcompleter", #"runpy", #"sched", #"shelve", #"shlex", #"smtpd", #"smtplib", #"sndhdr", #"socket", #"socketserver", #"sqlite3", #"ssl", #"stringprep", #"string", #"_strptime", #"subprocess", #"sunau", #"symbol", #"symtable", #"sysconfig", #"tabnanny", #"telnetlib", #"test", #"textwrap", #"this", #"_threading_local", #"threading", #"timeit", #"tkinter", #"tokenize", #"token", #"traceback", #"trace", #"tty", #"turtledemo", #"turtle", #"unittest", #"urllib", #"uuid", #"uu", #"wave", #"weakref", #"webbrowser", #"wsgiref", #"xdrlib", #"xml", #"xmlrpc", #"zipfile", ]) if is_pypy: # these are needed to correctly display the exceptions that may happen # during the bootstrap REQUIRED_MODULES.extend(['traceback', 'linecache']) class Logger(object): """ Logging object for use in command-line script. Allows ranges of levels, to avoid some redundancy of displayed information. 
""" DEBUG = logging.DEBUG INFO = logging.INFO NOTIFY = (logging.INFO+logging.WARN)/2 WARN = WARNING = logging.WARN ERROR = logging.ERROR FATAL = logging.FATAL LEVELS = [DEBUG, INFO, NOTIFY, WARN, ERROR, FATAL] def __init__(self, consumers): self.consumers = consumers self.indent = 0 self.in_progress = None self.in_progress_hanging = False def debug(self, msg, *args, **kw): self.log(self.DEBUG, msg, *args, **kw) def info(self, msg, *args, **kw): self.log(self.INFO, msg, *args, **kw) def notify(self, msg, *args, **kw): self.log(self.NOTIFY, msg, *args, **kw) def warn(self, msg, *args, **kw): self.log(self.WARN, msg, *args, **kw) def error(self, msg, *args, **kw): self.log(self.ERROR, msg, *args, **kw) def fatal(self, msg, *args, **kw): self.log(self.FATAL, msg, *args, **kw) def log(self, level, msg, *args, **kw): if args: if kw: raise TypeError( "You may give positional or keyword arguments, not both") args = args or kw rendered = None for consumer_level, consumer in self.consumers: if self.level_matches(level, consumer_level): if (self.in_progress_hanging and consumer in (sys.stdout, sys.stderr)): self.in_progress_hanging = False sys.stdout.write('\n') sys.stdout.flush() if rendered is None: if args: rendered = msg % args else: rendered = msg rendered = ' '*self.indent + rendered if hasattr(consumer, 'write'): consumer.write(rendered+'\n') else: consumer(rendered) def start_progress(self, msg): assert not self.in_progress, ( "Tried to start_progress(%r) while in_progress %r" % (msg, self.in_progress)) if self.level_matches(self.NOTIFY, self._stdout_level()): sys.stdout.write(msg) sys.stdout.flush() self.in_progress_hanging = True else: self.in_progress_hanging = False self.in_progress = msg def end_progress(self, msg='done.'): assert self.in_progress, ( "Tried to end_progress without start_progress") if self.stdout_level_matches(self.NOTIFY): if not self.in_progress_hanging: # Some message has been printed out since start_progress sys.stdout.write('...' 
+ self.in_progress + msg + '\n') sys.stdout.flush() else: sys.stdout.write(msg + '\n') sys.stdout.flush() self.in_progress = None self.in_progress_hanging = False def show_progress(self): """If we are in a progress scope, and no log messages have been shown, write out another '.'""" if self.in_progress_hanging: sys.stdout.write('.') sys.stdout.flush() def stdout_level_matches(self, level): """Returns true if a message at this level will go to stdout""" return self.level_matches(level, self._stdout_level()) def _stdout_level(self): """Returns the level that stdout runs at""" for level, consumer in self.consumers: if consumer is sys.stdout: return level return self.FATAL def level_matches(self, level, consumer_level): """ >>> l = Logger([]) >>> l.level_matches(3, 4) False >>> l.level_matches(3, 2) True >>> l.level_matches(slice(None, 3), 3) False >>> l.level_matches(slice(None, 3), 2) True >>> l.level_matches(slice(1, 3), 1) True >>> l.level_matches(slice(2, 3), 1) False """ if isinstance(level, slice): start, stop = level.start, level.stop if start is not None and start > consumer_level: return False if stop is not None and stop <= consumer_level: return False return True else: return level >= consumer_level #@classmethod def level_for_integer(cls, level): levels = cls.LEVELS if level < 0: return levels[0] if level >= len(levels): return levels[-1] return levels[level] level_for_integer = classmethod(level_for_integer) # create a silent logger just to prevent this from being undefined # will be overridden with requested verbosity main() is called. logger = Logger([(Logger.LEVELS[-1], sys.stdout)]) def mkdir(path): if not os.path.exists(path): logger.info('Creating %s', path) os.makedirs(path) else: logger.info('Directory %s already exists', path) def copyfileordir(src, dest): if os.path.isdir(src): shutil.copytree(src, dest, True) else: shutil.copy2(src, dest) def copyfile(src, dest, symlink=True): if not os.path.exists(src): # Some bad symlink in the src logger.warn('Cannot find file %s (bad symlink)', src) return if os.path.exists(dest): logger.debug('File %s already exists', dest) return if not os.path.exists(os.path.dirname(dest)): logger.info('Creating parent directories for %s' % os.path.dirname(dest)) os.makedirs(os.path.dirname(dest)) if not os.path.islink(src): srcpath = os.path.abspath(src) else: srcpath = os.readlink(src) if symlink and hasattr(os, 'symlink') and not is_win: logger.info('Symlinking %s', dest) try: os.symlink(srcpath, dest) except (OSError, NotImplementedError): logger.info('Symlinking failed, copying to %s', dest) copyfileordir(src, dest) else: logger.info('Copying to %s', dest) copyfileordir(src, dest) def writefile(dest, content, overwrite=True): if not os.path.exists(dest): logger.info('Writing %s', dest) f = open(dest, 'wb') f.write(content.encode('utf-8')) f.close() return else: f = open(dest, 'rb') c = f.read() f.close() if c != content.encode("utf-8"): if not overwrite: logger.notify('File %s exists with different content; not overwriting', dest) return logger.notify('Overwriting %s with new content', dest) f = open(dest, 'wb') f.write(content.encode('utf-8')) f.close() else: logger.info('Content %s already in place', dest) def rmtree(dir): if os.path.exists(dir): logger.notify('Deleting tree %s', dir) shutil.rmtree(dir) else: logger.info('Do not need to delete %s; already gone', dir) def make_exe(fn): if hasattr(os, 'chmod'): oldmode = os.stat(fn).st_mode & 0xFFF # 0o7777 newmode = (oldmode | 0x16D) & 0xFFF # 0o555, 0o7777 os.chmod(fn, newmode) 
logger.info('Changed mode of %s to %s', fn, oct(newmode)) def _find_file(filename, dirs): for dir in reversed(dirs): files = glob.glob(os.path.join(dir, filename)) if files and os.path.isfile(files[0]): return True, files[0] return False, filename def _install_req(py_executable, unzip=False, distribute=False, search_dirs=None, never_download=False): if search_dirs is None: search_dirs = file_search_dirs() if not distribute: egg_path = 'setuptools-*-py%s.egg' % sys.version[:3] found, egg_path = _find_file(egg_path, search_dirs) project_name = 'setuptools' bootstrap_script = EZ_SETUP_PY tgz_path = None else: # Look for a distribute egg (these are not distributed by default, # but can be made available by the user) egg_path = 'distribute-*-py%s.egg' % sys.version[:3] found, egg_path = _find_file(egg_path, search_dirs) project_name = 'distribute' if found: tgz_path = None bootstrap_script = DISTRIBUTE_FROM_EGG_PY else: # Fall back to sdist # NB: egg_path is not None iff tgz_path is None # iff bootstrap_script is a generic setup script accepting # the standard arguments. egg_path = None tgz_path = 'distribute-*.tar.gz' found, tgz_path = _find_file(tgz_path, search_dirs) bootstrap_script = DISTRIBUTE_SETUP_PY if is_jython and os._name == 'nt': # Jython's .bat sys.executable can't handle a command line # argument with newlines fd, ez_setup = tempfile.mkstemp('.py') os.write(fd, bootstrap_script) os.close(fd) cmd = [py_executable, ez_setup] else: cmd = [py_executable, '-c', bootstrap_script] if unzip and egg_path: cmd.append('--always-unzip') env = {} remove_from_env = ['__PYVENV_LAUNCHER__'] if logger.stdout_level_matches(logger.DEBUG) and egg_path: cmd.append('-v') old_chdir = os.getcwd() if egg_path is not None and os.path.exists(egg_path): logger.info('Using existing %s egg: %s' % (project_name, egg_path)) cmd.append(egg_path) if os.environ.get('PYTHONPATH'): env['PYTHONPATH'] = egg_path + os.path.pathsep + os.environ['PYTHONPATH'] else: env['PYTHONPATH'] = egg_path elif tgz_path is not None and os.path.exists(tgz_path): # Found a tgz source dist, let's chdir logger.info('Using existing %s egg: %s' % (project_name, tgz_path)) os.chdir(os.path.dirname(tgz_path)) # in this case, we want to be sure that PYTHONPATH is unset (not # just empty, really unset), else CPython tries to import the # site.py that it's in virtualenv_support remove_from_env.append('PYTHONPATH') elif never_download: logger.fatal("Can't find any local distributions of %s to install " "and --never-download is set. Either re-run virtualenv " "without the --never-download option, or place a %s " "distribution (%s) in one of these " "locations: %r" % (project_name, project_name, egg_path or tgz_path, search_dirs)) sys.exit(1) elif egg_path: logger.info('No %s egg found; downloading' % project_name) cmd.extend(['--always-copy', '-U', project_name]) else: logger.info('No %s tgz found; downloading' % project_name) logger.start_progress('Installing %s...' 
% project_name) logger.indent += 2 cwd = None if project_name == 'distribute': env['DONT_PATCH_SETUPTOOLS'] = 'true' def _filter_ez_setup(line): return filter_ez_setup(line, project_name) if not os.access(os.getcwd(), os.W_OK): cwd = tempfile.mkdtemp() if tgz_path is not None and os.path.exists(tgz_path): # the current working dir is hostile, let's copy the # tarball to a temp dir target = os.path.join(cwd, os.path.split(tgz_path)[-1]) shutil.copy(tgz_path, target) try: call_subprocess(cmd, show_stdout=False, filter_stdout=_filter_ez_setup, extra_env=env, remove_from_env=remove_from_env, cwd=cwd) finally: logger.indent -= 2 logger.end_progress() if cwd is not None: shutil.rmtree(cwd) if os.getcwd() != old_chdir: os.chdir(old_chdir) if is_jython and os._name == 'nt': os.remove(ez_setup) def file_search_dirs(): here = os.path.dirname(os.path.abspath(__file__)) dirs = ['.', here, join(here, 'virtualenv_support')] if os.path.splitext(os.path.dirname(__file__))[0] != 'virtualenv': # Probably some boot script; just in case virtualenv is installed... try: import virtualenv except ImportError: pass else: dirs.append(os.path.join(os.path.dirname(virtualenv.__file__), 'virtualenv_support')) return [d for d in dirs if os.path.isdir(d)] def install_setuptools(py_executable, unzip=False, search_dirs=None, never_download=False): _install_req(py_executable, unzip, search_dirs=search_dirs, never_download=never_download) def install_distribute(py_executable, unzip=False, search_dirs=None, never_download=False): _install_req(py_executable, unzip, distribute=True, search_dirs=search_dirs, never_download=never_download) _pip_re = re.compile(r'^pip-.*(zip|tar.gz|tar.bz2|tgz|tbz)$', re.I) def install_pip(py_executable, search_dirs=None, never_download=False): if search_dirs is None: search_dirs = file_search_dirs() filenames = [] for dir in search_dirs: filenames.extend([join(dir, fn) for fn in os.listdir(dir) if _pip_re.search(fn)]) filenames = [(os.path.basename(filename).lower(), i, filename) for i, filename in enumerate(filenames)] filenames.sort() filenames = [filename for basename, i, filename in filenames] if not filenames: filename = 'pip' else: filename = filenames[-1] easy_install_script = 'easy_install' if is_win: easy_install_script = 'easy_install-script.py' # There's two subtle issues here when invoking easy_install. # 1. On unix-like systems the easy_install script can *only* be executed # directly if its full filesystem path is no longer than 78 characters. # 2. A work around to [1] is to use the `python path/to/easy_install foo` # pattern, but that breaks if the path contains non-ASCII characters, as # you can't put the file encoding declaration before the shebang line. # The solution is to use Python's -x flag to skip the first line of the # script (and any ASCII decoding errors that may have occurred in that line) cmd = [py_executable, '-x', join(os.path.dirname(py_executable), easy_install_script), filename] # jython and pypy don't yet support -x if is_jython or is_pypy: cmd.remove('-x') if filename == 'pip': if never_download: logger.fatal("Can't find any local distributions of pip to install " "and --never-download is set. 
Either re-run virtualenv " "without the --never-download option, or place a pip " "source distribution (zip/tar.gz/tar.bz2) in one of these " "locations: %r" % search_dirs) sys.exit(1) logger.info('Installing pip from network...') else: logger.info('Installing existing %s distribution: %s' % ( os.path.basename(filename), filename)) logger.start_progress('Installing pip...') logger.indent += 2 def _filter_setup(line): return filter_ez_setup(line, 'pip') try: call_subprocess(cmd, show_stdout=False, filter_stdout=_filter_setup) finally: logger.indent -= 2 logger.end_progress() def filter_ez_setup(line, project_name='setuptools'): if not line.strip(): return Logger.DEBUG if project_name == 'distribute': for prefix in ('Extracting', 'Now working', 'Installing', 'Before', 'Scanning', 'Setuptools', 'Egg', 'Already', 'running', 'writing', 'reading', 'installing', 'creating', 'copying', 'byte-compiling', 'removing', 'Processing'): if line.startswith(prefix): return Logger.DEBUG return Logger.DEBUG for prefix in ['Reading ', 'Best match', 'Processing setuptools', 'Copying setuptools', 'Adding setuptools', 'Installing ', 'Installed ']: if line.startswith(prefix): return Logger.DEBUG return Logger.INFO class UpdatingDefaultsHelpFormatter(optparse.IndentedHelpFormatter): """ Custom help formatter for use in ConfigOptionParser that updates the defaults before expanding them, allowing them to show up correctly in the help listing """ def expand_default(self, option): if self.parser is not None: self.parser.update_defaults(self.parser.defaults) return optparse.IndentedHelpFormatter.expand_default(self, option) class ConfigOptionParser(optparse.OptionParser): """ Custom option parser which updates its defaults by by checking the configuration files and environmental variables """ def __init__(self, *args, **kwargs): self.config = ConfigParser.RawConfigParser() self.files = self.get_config_files() self.config.read(self.files) optparse.OptionParser.__init__(self, *args, **kwargs) def get_config_files(self): config_file = os.environ.get('VIRTUALENV_CONFIG_FILE', False) if config_file and os.path.exists(config_file): return [config_file] return [default_config_file] def update_defaults(self, defaults): """ Updates the given defaults with values from the config files and the environ. Does a little special handling for certain types of options (lists). """ # Then go and look for the other sources of configuration: config = {} # 1. config files config.update(dict(self.get_config_section('virtualenv'))) # 2. 
environmental variables config.update(dict(self.get_environ_vars())) # Then set the options with those values for key, val in config.items(): key = key.replace('_', '-') if not key.startswith('--'): key = '--%s' % key # only prefer long opts option = self.get_option(key) if option is not None: # ignore empty values if not val: continue # handle multiline configs if option.action == 'append': val = val.split() else: option.nargs = 1 if option.action == 'store_false': val = not strtobool(val) elif option.action in ('store_true', 'count'): val = strtobool(val) try: val = option.convert_value(key, val) except optparse.OptionValueError: e = sys.exc_info()[1] print("An error occured during configuration: %s" % e) sys.exit(3) defaults[option.dest] = val return defaults def get_config_section(self, name): """ Get a section of a configuration """ if self.config.has_section(name): return self.config.items(name) return [] def get_environ_vars(self, prefix='VIRTUALENV_'): """ Returns a generator with all environmental vars with prefix VIRTUALENV """ for key, val in os.environ.items(): if key.startswith(prefix): yield (key.replace(prefix, '').lower(), val) def get_default_values(self): """ Overridding to make updating the defaults after instantiation of the option parser possible, update_defaults() does the dirty work. """ if not self.process_default_values: # Old, pre-Optik 1.5 behaviour. return optparse.Values(self.defaults) defaults = self.update_defaults(self.defaults.copy()) # ours for option in self._get_all_options(): default = defaults.get(option.dest) if isinstance(default, basestring): opt_str = option.get_opt_string() defaults[option.dest] = option.check_value(opt_str, default) return optparse.Values(defaults) def main(): parser = ConfigOptionParser( version=virtualenv_version, usage="%prog [OPTIONS] DEST_DIR", formatter=UpdatingDefaultsHelpFormatter()) parser.add_option( '-v', '--verbose', action='count', dest='verbose', default=0, help="Increase verbosity") parser.add_option( '-q', '--quiet', action='count', dest='quiet', default=0, help='Decrease verbosity') parser.add_option( '-p', '--python', dest='python', metavar='PYTHON_EXE', help='The Python interpreter to use, e.g., --python=python2.5 will use the python2.5 ' 'interpreter to create the new environment. The default is the interpreter that ' 'virtualenv was installed with (%s)' % sys.executable) parser.add_option( '--clear', dest='clear', action='store_true', help="Clear out the non-root install and start from scratch") parser.set_defaults(system_site_packages=False) parser.add_option( '--no-site-packages', dest='system_site_packages', action='store_false', help="Don't give access to the global site-packages dir to the " "virtual environment (default)") parser.add_option( '--system-site-packages', dest='system_site_packages', action='store_true', help="Give access to the global site-packages dir to the " "virtual environment") parser.add_option( '--unzip-setuptools', dest='unzip_setuptools', action='store_true', help="Unzip Setuptools or Distribute when installing it") parser.add_option( '--relocatable', dest='relocatable', action='store_true', help='Make an EXISTING virtualenv environment relocatable. ' 'This fixes up scripts and makes all .pth files relative') parser.add_option( '--distribute', '--use-distribute', # the second option is for legacy reasons here. Hi Kenneth! dest='use_distribute', action='store_true', help='Use Distribute instead of Setuptools. 
Set environ variable ' 'VIRTUALENV_DISTRIBUTE to make it the default ') parser.add_option( '--setuptools', dest='use_distribute', action='store_false', help='Use Setuptools instead of Distribute. Set environ variable ' 'VIRTUALENV_SETUPTOOLS to make it the default ') # Set this to True to use distribute by default, even in Python 2. parser.set_defaults(use_distribute=False) default_search_dirs = file_search_dirs() parser.add_option( '--extra-search-dir', dest="search_dirs", action="append", default=default_search_dirs, help="Directory to look for setuptools/distribute/pip distributions in. " "You can add any number of additional --extra-search-dir paths.") parser.add_option( '--never-download', dest="never_download", action="store_true", help="Never download anything from the network. Instead, virtualenv will fail " "if local distributions of setuptools/distribute/pip are not present.") parser.add_option( '--prompt', dest='prompt', help='Provides an alternative prompt prefix for this environment') if 'extend_parser' in globals(): extend_parser(parser) options, args = parser.parse_args() global logger if 'adjust_options' in globals(): adjust_options(options, args) verbosity = options.verbose - options.quiet logger = Logger([(Logger.level_for_integer(2 - verbosity), sys.stdout)]) if options.python and not os.environ.get('VIRTUALENV_INTERPRETER_RUNNING'): env = os.environ.copy() interpreter = resolve_interpreter(options.python) if interpreter == sys.executable: logger.warn('Already using interpreter %s' % interpreter) else: logger.notify('Running virtualenv with interpreter %s' % interpreter) env['VIRTUALENV_INTERPRETER_RUNNING'] = 'true' file = __file__ if file.endswith('.pyc'): file = file[:-1] popen = subprocess.Popen([interpreter, file] + sys.argv[1:], env=env) raise SystemExit(popen.wait()) # Force --distribute on Python 3, since setuptools is not available. if majver > 2: options.use_distribute = True if os.environ.get('PYTHONDONTWRITEBYTECODE') and not options.use_distribute: print( "The PYTHONDONTWRITEBYTECODE environment variable is " "not compatible with setuptools. Either use --distribute " "or unset PYTHONDONTWRITEBYTECODE.") sys.exit(2) if not args: print('You must provide a DEST_DIR') parser.print_help() sys.exit(2) if len(args) > 1: print('There must be only one argument: DEST_DIR (you gave %s)' % ( ' '.join(args))) parser.print_help() sys.exit(2) home_dir = args[0] if os.environ.get('WORKING_ENV'): logger.fatal('ERROR: you cannot run virtualenv while in a workingenv') logger.fatal('Please deactivate your workingenv, then re-run this script') sys.exit(3) if 'PYTHONHOME' in os.environ: logger.warn('PYTHONHOME is set. 
You *must* activate the virtualenv before using it') del os.environ['PYTHONHOME'] if options.relocatable: make_environment_relocatable(home_dir) return create_environment(home_dir, site_packages=options.system_site_packages, clear=options.clear, unzip_setuptools=options.unzip_setuptools, use_distribute=options.use_distribute, prompt=options.prompt, search_dirs=options.search_dirs, never_download=options.never_download) if 'after_install' in globals(): after_install(options, home_dir) def call_subprocess(cmd, show_stdout=True, filter_stdout=None, cwd=None, raise_on_returncode=True, extra_env=None, remove_from_env=None): cmd_parts = [] for part in cmd: if len(part) > 45: part = part[:20]+"..."+part[-20:] if ' ' in part or '\n' in part or '"' in part or "'" in part: part = '"%s"' % part.replace('"', '\\"') if hasattr(part, 'decode'): try: part = part.decode(sys.getdefaultencoding()) except UnicodeDecodeError: part = part.decode(sys.getfilesystemencoding()) cmd_parts.append(part) cmd_desc = ' '.join(cmd_parts) if show_stdout: stdout = None else: stdout = subprocess.PIPE logger.debug("Running command %s" % cmd_desc) if extra_env or remove_from_env: env = os.environ.copy() if extra_env: env.update(extra_env) if remove_from_env: for varname in remove_from_env: env.pop(varname, None) else: env = None try: proc = subprocess.Popen( cmd, stderr=subprocess.STDOUT, stdin=None, stdout=stdout, cwd=cwd, env=env) except Exception: e = sys.exc_info()[1] logger.fatal( "Error %s while executing command %s" % (e, cmd_desc)) raise all_output = [] if stdout is not None: stdout = proc.stdout encoding = sys.getdefaultencoding() fs_encoding = sys.getfilesystemencoding() while 1: line = stdout.readline() try: line = line.decode(encoding) except UnicodeDecodeError: line = line.decode(fs_encoding) if not line: break line = line.rstrip() all_output.append(line) if filter_stdout: level = filter_stdout(line) if isinstance(level, tuple): level, line = level logger.log(level, line) if not logger.stdout_level_matches(level): logger.show_progress() else: logger.info(line) else: proc.communicate() proc.wait() if proc.returncode: if raise_on_returncode: if all_output: logger.notify('Complete output from command %s:' % cmd_desc) logger.notify('\n'.join(all_output) + '\n----------------------------------------') raise OSError( "Command %s failed with error code %s" % (cmd_desc, proc.returncode)) else: logger.warn( "Command %s had error code %s" % (cmd_desc, proc.returncode)) def create_environment(home_dir, site_packages=False, clear=False, unzip_setuptools=False, use_distribute=False, prompt=None, search_dirs=None, never_download=False): """ Creates a new environment in ``home_dir``. If ``site_packages`` is true, then the global ``site-packages/`` directory will be on the path. If ``clear`` is true (default False) then the environment will first be cleared. 
""" home_dir, lib_dir, inc_dir, bin_dir = path_locations(home_dir) py_executable = os.path.abspath(install_python( home_dir, lib_dir, inc_dir, bin_dir, site_packages=site_packages, clear=clear)) install_distutils(home_dir) if use_distribute: install_distribute(py_executable, unzip=unzip_setuptools, search_dirs=search_dirs, never_download=never_download) else: install_setuptools(py_executable, unzip=unzip_setuptools, search_dirs=search_dirs, never_download=never_download) install_pip(py_executable, search_dirs=search_dirs, never_download=never_download) install_activate(home_dir, bin_dir, prompt) def is_executable_file(fpath): return os.path.isfile(fpath) and os.access(fpath, os.X_OK) def path_locations(home_dir): """Return the path locations for the environment (where libraries are, where scripts go, etc)""" # XXX: We'd use distutils.sysconfig.get_python_inc/lib but its # prefix arg is broken: http://bugs.python.org/issue3386 if is_win: # Windows has lots of problems with executables with spaces in # the name; this function will remove them (using the ~1 # format): mkdir(home_dir) if ' ' in home_dir: import ctypes GetShortPathName = ctypes.windll.kernel32.GetShortPathNameW size = max(len(home_dir)+1, 256) buf = ctypes.create_unicode_buffer(size) try: u = unicode except NameError: u = str ret = GetShortPathName(u(home_dir), buf, size) if not ret: print('Error: the path "%s" has a space in it' % home_dir) print('We could not determine the short pathname for it.') print('Exiting.') sys.exit(3) home_dir = str(buf.value) lib_dir = join(home_dir, 'Lib') inc_dir = join(home_dir, 'Include') bin_dir = join(home_dir, 'Scripts') if is_jython: lib_dir = join(home_dir, 'Lib') inc_dir = join(home_dir, 'Include') bin_dir = join(home_dir, 'bin') elif is_pypy: lib_dir = home_dir inc_dir = join(home_dir, 'include') bin_dir = join(home_dir, 'bin') elif not is_win: lib_dir = join(home_dir, 'lib', py_version) multiarch_exec = '/usr/bin/multiarch-platform' if is_executable_file(multiarch_exec): # In Mageia (2) and Mandriva distros the include dir must be like: # virtualenv/include/multiarch-x86_64-linux/python2.7 # instead of being virtualenv/include/python2.7 p = subprocess.Popen(multiarch_exec, stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() # stdout.strip is needed to remove newline character inc_dir = join(home_dir, 'include', stdout.strip(), py_version + abiflags) else: inc_dir = join(home_dir, 'include', py_version + abiflags) bin_dir = join(home_dir, 'bin') return home_dir, lib_dir, inc_dir, bin_dir def change_prefix(filename, dst_prefix): prefixes = [sys.prefix] if is_darwin: prefixes.extend(( os.path.join("/Library/Python", sys.version[:3], "site-packages"), os.path.join(sys.prefix, "Extras", "lib", "python"), os.path.join("~", "Library", "Python", sys.version[:3], "site-packages"), # Python 2.6 no-frameworks os.path.join("~", ".local", "lib","python", sys.version[:3], "site-packages"), # System Python 2.7 on OSX Mountain Lion os.path.join("~", "Library", "Python", sys.version[:3], "lib", "python", "site-packages"))) if hasattr(sys, 'real_prefix'): prefixes.append(sys.real_prefix) if hasattr(sys, 'base_prefix'): prefixes.append(sys.base_prefix) prefixes = list(map(os.path.expanduser, prefixes)) prefixes = list(map(os.path.abspath, prefixes)) # Check longer prefixes first so we don't split in the middle of a filename prefixes = sorted(prefixes, key=len, reverse=True) filename = os.path.abspath(filename) for src_prefix in prefixes: if filename.startswith(src_prefix): _, 
relpath = filename.split(src_prefix, 1) if src_prefix != os.sep: # sys.prefix == "/" assert relpath[0] == os.sep relpath = relpath[1:] return join(dst_prefix, relpath) assert False, "Filename %s does not start with any of these prefixes: %s" % \ (filename, prefixes) def copy_required_modules(dst_prefix): import imp # If we are running under -p, we need to remove the current # directory from sys.path temporarily here, so that we # definitely get the modules from the site directory of # the interpreter we are running under, not the one # virtualenv.py is installed under (which might lead to py2/py3 # incompatibility issues) _prev_sys_path = sys.path if os.environ.get('VIRTUALENV_INTERPRETER_RUNNING'): sys.path = sys.path[1:] try: for modname in REQUIRED_MODULES: if modname in sys.builtin_module_names: logger.info("Ignoring built-in bootstrap module: %s" % modname) continue try: f, filename, _ = imp.find_module(modname) except ImportError: logger.info("Cannot import bootstrap module: %s" % modname) else: if f is not None: f.close() # special-case custom readline.so on OS X: if modname == 'readline' and sys.platform == 'darwin' and not filename.endswith(join('lib-dynload', 'readline.so')): dst_filename = join(dst_prefix, 'lib', 'python%s' % sys.version[:3], 'readline.so') else: dst_filename = change_prefix(filename, dst_prefix) copyfile(filename, dst_filename) if filename.endswith('.pyc'): pyfile = filename[:-1] if os.path.exists(pyfile): copyfile(pyfile, dst_filename[:-1]) finally: sys.path = _prev_sys_path def subst_path(prefix_path, prefix, home_dir): prefix_path = os.path.normpath(prefix_path) prefix = os.path.normpath(prefix) home_dir = os.path.normpath(home_dir) if not prefix_path.startswith(prefix): logger.warn('Path not in prefix %r %r', prefix_path, prefix) return return prefix_path.replace(prefix, home_dir, 1) def install_python(home_dir, lib_dir, inc_dir, bin_dir, site_packages, clear): """Install just the base environment, no distutils patches etc""" if sys.executable.startswith(bin_dir): print('Please use the *system* python to run this script') return if clear: rmtree(lib_dir) ## FIXME: why not delete it? ## Maybe it should delete everything with #!/path/to/venv/python in it logger.notify('Not deleting %s', bin_dir) if hasattr(sys, 'real_prefix'): logger.notify('Using real prefix %r' % sys.real_prefix) prefix = sys.real_prefix elif hasattr(sys, 'base_prefix'): logger.notify('Using base prefix %r' % sys.base_prefix) prefix = sys.base_prefix else: prefix = sys.prefix mkdir(lib_dir) fix_lib64(lib_dir) stdlib_dirs = [os.path.dirname(os.__file__)] if is_win: stdlib_dirs.append(join(os.path.dirname(stdlib_dirs[0]), 'DLLs')) elif is_darwin: stdlib_dirs.append(join(stdlib_dirs[0], 'site-packages')) if hasattr(os, 'symlink'): logger.info('Symlinking Python bootstrap modules') else: logger.info('Copying Python bootstrap modules') logger.indent += 2 try: # copy required files... 
for stdlib_dir in stdlib_dirs: if not os.path.isdir(stdlib_dir): continue for fn in os.listdir(stdlib_dir): bn = os.path.splitext(fn)[0] if fn != 'site-packages' and bn in REQUIRED_FILES: copyfile(join(stdlib_dir, fn), join(lib_dir, fn)) # ...and modules copy_required_modules(home_dir) finally: logger.indent -= 2 mkdir(join(lib_dir, 'site-packages')) import site site_filename = site.__file__ if site_filename.endswith('.pyc'): site_filename = site_filename[:-1] elif site_filename.endswith('$py.class'): site_filename = site_filename.replace('$py.class', '.py') site_filename_dst = change_prefix(site_filename, home_dir) site_dir = os.path.dirname(site_filename_dst) writefile(site_filename_dst, SITE_PY) writefile(join(site_dir, 'orig-prefix.txt'), prefix) site_packages_filename = join(site_dir, 'no-global-site-packages.txt') if not site_packages: writefile(site_packages_filename, '') if is_pypy or is_win: stdinc_dir = join(prefix, 'include') else: stdinc_dir = join(prefix, 'include', py_version + abiflags) if os.path.exists(stdinc_dir): copyfile(stdinc_dir, inc_dir) else: logger.debug('No include dir %s' % stdinc_dir) platinc_dir = distutils.sysconfig.get_python_inc(plat_specific=1) if platinc_dir != stdinc_dir: platinc_dest = distutils.sysconfig.get_python_inc( plat_specific=1, prefix=home_dir) if platinc_dir == platinc_dest: # Do platinc_dest manually due to a CPython bug; # not http://bugs.python.org/issue3386 but a close cousin platinc_dest = subst_path(platinc_dir, prefix, home_dir) if platinc_dest: # PyPy's stdinc_dir and prefix are relative to the original binary # (traversing virtualenvs), whereas the platinc_dir is relative to # the inner virtualenv and ignores the prefix argument. # This seems more evolved than designed. copyfile(platinc_dir, platinc_dest) # pypy never uses exec_prefix, just ignore it if sys.exec_prefix != prefix and not is_pypy: if is_win: exec_dir = join(sys.exec_prefix, 'lib') elif is_jython: exec_dir = join(sys.exec_prefix, 'Lib') else: exec_dir = join(sys.exec_prefix, 'lib', py_version) for fn in os.listdir(exec_dir): copyfile(join(exec_dir, fn), join(lib_dir, fn)) if is_jython: # Jython has either jython-dev.jar and javalib/ dir, or just # jython.jar for name in 'jython-dev.jar', 'javalib', 'jython.jar': src = join(prefix, name) if os.path.exists(src): copyfile(src, join(home_dir, name)) # XXX: registry should always exist after Jython 2.5rc1 src = join(prefix, 'registry') if os.path.exists(src): copyfile(src, join(home_dir, 'registry'), symlink=False) copyfile(join(prefix, 'cachedir'), join(home_dir, 'cachedir'), symlink=False) mkdir(bin_dir) py_executable = join(bin_dir, os.path.basename(sys.executable)) if 'Python.framework' in prefix: # OS X framework builds cause validation to break # https://github.com/pypa/virtualenv/issues/322 if os.environ.get('__PYVENV_LAUNCHER__'): os.unsetenv('__PYVENV_LAUNCHER__') if re.search(r'/Python(?:-32|-64)*$', py_executable): # The name of the python executable is not quite what # we want, rename it. 
py_executable = os.path.join( os.path.dirname(py_executable), 'python') logger.notify('New %s executable in %s', expected_exe, py_executable) pcbuild_dir = os.path.dirname(sys.executable) pyd_pth = os.path.join(lib_dir, 'site-packages', 'virtualenv_builddir_pyd.pth') if is_win and os.path.exists(os.path.join(pcbuild_dir, 'build.bat')): logger.notify('Detected python running from build directory %s', pcbuild_dir) logger.notify('Writing .pth file linking to build directory for *.pyd files') writefile(pyd_pth, pcbuild_dir) else: pcbuild_dir = None if os.path.exists(pyd_pth): logger.info('Deleting %s (not Windows env or not build directory python)' % pyd_pth) os.unlink(pyd_pth) if sys.executable != py_executable: ## FIXME: could I just hard link? executable = sys.executable if is_cygwin and os.path.exists(executable + '.exe'): # Cygwin misreports sys.executable sometimes executable += '.exe' py_executable += '.exe' logger.info('Executable actually exists in %s' % executable) shutil.copyfile(executable, py_executable) make_exe(py_executable) if is_win or is_cygwin: pythonw = os.path.join(os.path.dirname(sys.executable), 'pythonw.exe') if os.path.exists(pythonw): logger.info('Also created pythonw.exe') shutil.copyfile(pythonw, os.path.join(os.path.dirname(py_executable), 'pythonw.exe')) python_d = os.path.join(os.path.dirname(sys.executable), 'python_d.exe') python_d_dest = os.path.join(os.path.dirname(py_executable), 'python_d.exe') if os.path.exists(python_d): logger.info('Also created python_d.exe') shutil.copyfile(python_d, python_d_dest) elif os.path.exists(python_d_dest): logger.info('Removed python_d.exe as it is no longer at the source') os.unlink(python_d_dest) # we need to copy the DLL to enforce that windows will load the correct one. # may not exist if we are cygwin. 
py_executable_dll = 'python%s%s.dll' % ( sys.version_info[0], sys.version_info[1]) py_executable_dll_d = 'python%s%s_d.dll' % ( sys.version_info[0], sys.version_info[1]) pythondll = os.path.join(os.path.dirname(sys.executable), py_executable_dll) pythondll_d = os.path.join(os.path.dirname(sys.executable), py_executable_dll_d) pythondll_d_dest = os.path.join(os.path.dirname(py_executable), py_executable_dll_d) if os.path.exists(pythondll): logger.info('Also created %s' % py_executable_dll) shutil.copyfile(pythondll, os.path.join(os.path.dirname(py_executable), py_executable_dll)) if os.path.exists(pythondll_d): logger.info('Also created %s' % py_executable_dll_d) shutil.copyfile(pythondll_d, pythondll_d_dest) elif os.path.exists(pythondll_d_dest): logger.info('Removed %s as the source does not exist' % pythondll_d_dest) os.unlink(pythondll_d_dest) if is_pypy: # make a symlink python --> pypy-c python_executable = os.path.join(os.path.dirname(py_executable), 'python') if sys.platform in ('win32', 'cygwin'): python_executable += '.exe' logger.info('Also created executable %s' % python_executable) copyfile(py_executable, python_executable) if is_win: for name in 'libexpat.dll', 'libpypy.dll', 'libpypy-c.dll', 'libeay32.dll', 'ssleay32.dll', 'sqlite.dll': src = join(prefix, name) if os.path.exists(src): copyfile(src, join(bin_dir, name)) if os.path.splitext(os.path.basename(py_executable))[0] != expected_exe: secondary_exe = os.path.join(os.path.dirname(py_executable), expected_exe) py_executable_ext = os.path.splitext(py_executable)[1] if py_executable_ext == '.exe': # python2.4 gives an extension of '.4' :P secondary_exe += py_executable_ext if os.path.exists(secondary_exe): logger.warn('Not overwriting existing %s script %s (you must use %s)' % (expected_exe, secondary_exe, py_executable)) else: logger.notify('Also creating executable in %s' % secondary_exe) shutil.copyfile(sys.executable, secondary_exe) make_exe(secondary_exe) if '.framework' in prefix: if 'Python.framework' in prefix: logger.debug('MacOSX Python framework detected') # Make sure we use the the embedded interpreter inside # the framework, even if sys.executable points to # the stub executable in ${sys.prefix}/bin # See http://groups.google.com/group/python-virtualenv/ # browse_thread/thread/17cab2f85da75951 original_python = os.path.join( prefix, 'Resources/Python.app/Contents/MacOS/Python') if 'EPD' in prefix: logger.debug('EPD framework detected') original_python = os.path.join(prefix, 'bin/python') shutil.copy(original_python, py_executable) # Copy the framework's dylib into the virtual # environment virtual_lib = os.path.join(home_dir, '.Python') if os.path.exists(virtual_lib): os.unlink(virtual_lib) copyfile( os.path.join(prefix, 'Python'), virtual_lib) # And then change the install_name of the copied python executable try: mach_o_change(py_executable, os.path.join(prefix, 'Python'), '@executable_path/../.Python') except: e = sys.exc_info()[1] logger.warn("Could not call mach_o_change: %s. " "Trying to call install_name_tool instead." 
% e) try: call_subprocess( ["install_name_tool", "-change", os.path.join(prefix, 'Python'), '@executable_path/../.Python', py_executable]) except: logger.fatal("Could not call install_name_tool -- you must " "have Apple's development tools installed") raise if not is_win: # Ensure that 'python', 'pythonX' and 'pythonX.Y' all exist py_exe_version_major = 'python%s' % sys.version_info[0] py_exe_version_major_minor = 'python%s.%s' % ( sys.version_info[0], sys.version_info[1]) py_exe_no_version = 'python' required_symlinks = [ py_exe_no_version, py_exe_version_major, py_exe_version_major_minor ] py_executable_base = os.path.basename(py_executable) if py_executable_base in required_symlinks: # Don't try to symlink to yourself. required_symlinks.remove(py_executable_base) for pth in required_symlinks: full_pth = join(bin_dir, pth) if os.path.exists(full_pth): os.unlink(full_pth) os.symlink(py_executable_base, full_pth) if is_win and ' ' in py_executable: # There's a bug with subprocess on Windows when using a first # argument that has a space in it. Instead we have to quote # the value: py_executable = '"%s"' % py_executable # NOTE: keep this check as one line, cmd.exe doesn't cope with line breaks cmd = [py_executable, '-c', 'import sys;out=sys.stdout;' 'getattr(out, "buffer", out).write(sys.prefix.encode("utf-8"))'] logger.info('Testing executable with %s %s "%s"' % tuple(cmd)) try: proc = subprocess.Popen(cmd, stdout=subprocess.PIPE) proc_stdout, proc_stderr = proc.communicate() except OSError: e = sys.exc_info()[1] if e.errno == errno.EACCES: logger.fatal('ERROR: The executable %s could not be run: %s' % (py_executable, e)) sys.exit(100) else: raise e proc_stdout = proc_stdout.strip().decode("utf-8") proc_stdout = os.path.normcase(os.path.abspath(proc_stdout)) norm_home_dir = os.path.normcase(os.path.abspath(home_dir)) if hasattr(norm_home_dir, 'decode'): norm_home_dir = norm_home_dir.decode(sys.getfilesystemencoding()) if proc_stdout != norm_home_dir: logger.fatal( 'ERROR: The executable %s is not functioning' % py_executable) logger.fatal( 'ERROR: It thinks sys.prefix is %r (should be %r)' % (proc_stdout, norm_home_dir)) logger.fatal( 'ERROR: virtualenv is not compatible with this system or executable') if is_win: logger.fatal( 'Note: some Windows users have reported this error when they ' 'installed Python for "Only this user" or have multiple ' 'versions of Python installed. Copying the appropriate ' 'PythonXX.dll to the virtualenv Scripts/ directory may fix ' 'this problem.') sys.exit(100) else: logger.info('Got sys.prefix result: %r' % proc_stdout) pydistutils = os.path.expanduser('~/.pydistutils.cfg') if os.path.exists(pydistutils): logger.notify('Please make sure you remove any previous custom paths from ' 'your %s file.' 
% pydistutils) ## FIXME: really this should be calculated earlier fix_local_scheme(home_dir) if site_packages: if os.path.exists(site_packages_filename): logger.info('Deleting %s' % site_packages_filename) os.unlink(site_packages_filename) return py_executable def install_activate(home_dir, bin_dir, prompt=None): home_dir = os.path.abspath(home_dir) if is_win or is_jython and os._name == 'nt': files = { 'activate.bat': ACTIVATE_BAT, 'deactivate.bat': DEACTIVATE_BAT, 'activate.ps1': ACTIVATE_PS, } # MSYS needs paths of the form /c/path/to/file drive, tail = os.path.splitdrive(home_dir.replace(os.sep, '/')) home_dir_msys = (drive and "/%s%s" or "%s%s") % (drive[:1], tail) # Run-time conditional enables (basic) Cygwin compatibility home_dir_sh = ("""$(if [ "$OSTYPE" "==" "cygwin" ]; then cygpath -u '%s'; else echo '%s'; fi;)""" % (home_dir, home_dir_msys)) files['activate'] = ACTIVATE_SH.replace('__VIRTUAL_ENV__', home_dir_sh) else: files = {'activate': ACTIVATE_SH} # suppling activate.fish in addition to, not instead of, the # bash script support. files['activate.fish'] = ACTIVATE_FISH # same for csh/tcsh support... files['activate.csh'] = ACTIVATE_CSH files['activate_this.py'] = ACTIVATE_THIS if hasattr(home_dir, 'decode'): home_dir = home_dir.decode(sys.getfilesystemencoding()) vname = os.path.basename(home_dir) for name, content in files.items(): content = content.replace('__VIRTUAL_PROMPT__', prompt or '') content = content.replace('__VIRTUAL_WINPROMPT__', prompt or '(%s)' % vname) content = content.replace('__VIRTUAL_ENV__', home_dir) content = content.replace('__VIRTUAL_NAME__', vname) content = content.replace('__BIN_NAME__', os.path.basename(bin_dir)) writefile(os.path.join(bin_dir, name), content) def install_distutils(home_dir): distutils_path = change_prefix(distutils.__path__[0], home_dir) mkdir(distutils_path) ## FIXME: maybe this prefix setting should only be put in place if ## there's a local distutils.cfg with a prefix setting? home_dir = os.path.abspath(home_dir) ## FIXME: this is breaking things, removing for now: #distutils_cfg = DISTUTILS_CFG + "\n[install]\nprefix=%s\n" % home_dir writefile(os.path.join(distutils_path, '__init__.py'), DISTUTILS_INIT) writefile(os.path.join(distutils_path, 'distutils.cfg'), DISTUTILS_CFG, overwrite=False) def fix_local_scheme(home_dir): """ Platforms that use the "posix_local" install scheme (like Ubuntu with Python 2.7) need to be given an additional "local" location, sigh. """ try: import sysconfig except ImportError: pass else: if sysconfig._get_default_scheme() == 'posix_local': local_path = os.path.join(home_dir, 'local') if not os.path.exists(local_path): os.mkdir(local_path) for subdir_name in os.listdir(home_dir): if subdir_name == 'local': continue os.symlink(os.path.abspath(os.path.join(home_dir, subdir_name)), \ os.path.join(local_path, subdir_name)) def fix_lib64(lib_dir): """ Some platforms (particularly Gentoo on x64) put things in lib64/pythonX.Y instead of lib/pythonX.Y. 
If this is such a platform we'll just create a symlink so lib64 points to lib """ if [p for p in distutils.sysconfig.get_config_vars().values() if isinstance(p, basestring) and 'lib64' in p]: logger.debug('This system uses lib64; symlinking lib64 to lib') assert os.path.basename(lib_dir) == 'python%s' % sys.version[:3], ( "Unexpected python lib dir: %r" % lib_dir) lib_parent = os.path.dirname(lib_dir) top_level = os.path.dirname(lib_parent) lib_dir = os.path.join(top_level, 'lib') lib64_link = os.path.join(top_level, 'lib64') assert os.path.basename(lib_parent) == 'lib', ( "Unexpected parent dir: %r" % lib_parent) if os.path.lexists(lib64_link): return os.symlink('lib', lib64_link) def resolve_interpreter(exe): """ If the executable given isn't an absolute path, search $PATH for the interpreter """ if os.path.abspath(exe) != exe: paths = os.environ.get('PATH', '').split(os.pathsep) for path in paths: if os.path.exists(os.path.join(path, exe)): exe = os.path.join(path, exe) break if not os.path.exists(exe): logger.fatal('The executable %s (from --python=%s) does not exist' % (exe, exe)) raise SystemExit(3) if not is_executable(exe): logger.fatal('The executable %s (from --python=%s) is not executable' % (exe, exe)) raise SystemExit(3) return exe def is_executable(exe): """Checks a file is executable""" return os.access(exe, os.X_OK) ############################################################ ## Relocating the environment: def make_environment_relocatable(home_dir): """ Makes the already-existing environment use relative paths, and takes out the #!-based environment selection in scripts. """ home_dir, lib_dir, inc_dir, bin_dir = path_locations(home_dir) activate_this = os.path.join(bin_dir, 'activate_this.py') if not os.path.exists(activate_this): logger.fatal( 'The environment doesn\'t have a file %s -- please re-run virtualenv ' 'on this environment to update it' % activate_this) fixup_scripts(home_dir) fixup_pth_and_egg_link(home_dir) ## FIXME: need to fix up distutils.cfg OK_ABS_SCRIPTS = ['python', 'python%s' % sys.version[:3], 'activate', 'activate.bat', 'activate_this.py'] def fixup_scripts(home_dir): # This is what we expect at the top of scripts: shebang = '#!%s/bin/python' % os.path.normcase(os.path.abspath(home_dir)) # This is what we'll put: new_shebang = '#!/usr/bin/env python%s' % sys.version[:3] if is_win: bin_suffix = 'Scripts' else: bin_suffix = 'bin' bin_dir = os.path.join(home_dir, bin_suffix) home_dir, lib_dir, inc_dir, bin_dir = path_locations(home_dir) for filename in os.listdir(bin_dir): filename = os.path.join(bin_dir, filename) if not os.path.isfile(filename): # ignore subdirs, e.g. .svn ones. continue f = open(filename, 'rb') try: try: lines = f.read().decode('utf-8').splitlines() except UnicodeDecodeError: # This is probably a binary program instead # of a script, so just ignore it. 
continue finally: f.close() if not lines: logger.warn('Script %s is an empty file' % filename) continue if not lines[0].strip().startswith(shebang): if os.path.basename(filename) in OK_ABS_SCRIPTS: logger.debug('Cannot make script %s relative' % filename) elif lines[0].strip() == new_shebang: logger.info('Script %s has already been made relative' % filename) else: logger.warn('Script %s cannot be made relative (it\'s not a normal script that starts with %s)' % (filename, shebang)) continue logger.notify('Making script %s relative' % filename) script = relative_script([new_shebang] + lines[1:]) f = open(filename, 'wb') f.write('\n'.join(script).encode('utf-8')) f.close() def relative_script(lines): "Return a script that'll work in a relocatable environment." activate = "import os; activate_this=os.path.join(os.path.dirname(os.path.realpath(__file__)), 'activate_this.py'); execfile(activate_this, dict(__file__=activate_this)); del os, activate_this" # Find the last future statement in the script. If we insert the activation # line before a future statement, Python will raise a SyntaxError. activate_at = None for idx, line in reversed(list(enumerate(lines))): if line.split()[:3] == ['from', '__future__', 'import']: activate_at = idx + 1 break if activate_at is None: # Activate after the shebang. activate_at = 1 return lines[:activate_at] + ['', activate, ''] + lines[activate_at:] def fixup_pth_and_egg_link(home_dir, sys_path=None): """Makes .pth and .egg-link files use relative paths""" home_dir = os.path.normcase(os.path.abspath(home_dir)) if sys_path is None: sys_path = sys.path for path in sys_path: if not path: path = '.' if not os.path.isdir(path): continue path = os.path.normcase(os.path.abspath(path)) if not path.startswith(home_dir): logger.debug('Skipping system (non-environment) directory %s' % path) continue for filename in os.listdir(path): filename = os.path.join(path, filename) if filename.endswith('.pth'): if not os.access(filename, os.W_OK): logger.warn('Cannot write .pth file %s, skipping' % filename) else: fixup_pth_file(filename) if filename.endswith('.egg-link'): if not os.access(filename, os.W_OK): logger.warn('Cannot write .egg-link file %s, skipping' % filename) else: fixup_egg_link(filename) def fixup_pth_file(filename): lines = [] prev_lines = [] f = open(filename) prev_lines = f.readlines() f.close() for line in prev_lines: line = line.strip() if (not line or line.startswith('#') or line.startswith('import ') or os.path.abspath(line) != line): lines.append(line) else: new_value = make_relative_path(filename, line) if line != new_value: logger.debug('Rewriting path %s as %s (in %s)' % (line, new_value, filename)) lines.append(new_value) if lines == prev_lines: logger.info('No changes to .pth file %s' % filename) return logger.notify('Making paths in .pth file %s relative' % filename) f = open(filename, 'w') f.write('\n'.join(lines) + '\n') f.close() def fixup_egg_link(filename): f = open(filename) link = f.readline().strip() f.close() if os.path.abspath(link) != link: logger.debug('Link in %s already relative' % filename) return new_link = make_relative_path(filename, link) logger.notify('Rewriting link %s in %s as %s' % (link, filename, new_link)) f = open(filename, 'w') f.write(new_link) f.close() def make_relative_path(source, dest, dest_is_directory=True): """ Make a filename relative, where the filename is dest, and it is being referred to from the filename source. >>> make_relative_path('/usr/share/something/a-file.pth', ... 
'/usr/share/another-place/src/Directory') '../another-place/src/Directory' >>> make_relative_path('/usr/share/something/a-file.pth', ... '/home/user/src/Directory') '../../../home/user/src/Directory' >>> make_relative_path('/usr/share/a-file.pth', '/usr/share/') './' """ source = os.path.dirname(source) if not dest_is_directory: dest_filename = os.path.basename(dest) dest = os.path.dirname(dest) dest = os.path.normpath(os.path.abspath(dest)) source = os.path.normpath(os.path.abspath(source)) dest_parts = dest.strip(os.path.sep).split(os.path.sep) source_parts = source.strip(os.path.sep).split(os.path.sep) while dest_parts and source_parts and dest_parts[0] == source_parts[0]: dest_parts.pop(0) source_parts.pop(0) full_parts = ['..']*len(source_parts) + dest_parts if not dest_is_directory: full_parts.append(dest_filename) if not full_parts: # Special case for the current directory (otherwise it'd be '') return './' return os.path.sep.join(full_parts) ############################################################ ## Bootstrap script creation: def create_bootstrap_script(extra_text, python_version=''): """ Creates a bootstrap script, which is like this script but with extend_parser, adjust_options, and after_install hooks. This returns a string that (written to disk of course) can be used as a bootstrap script with your own customizations. The script will be the standard virtualenv.py script, with your extra text added (your extra text should be Python code). If you include these functions, they will be called: ``extend_parser(optparse_parser)``: You can add or remove options from the parser here. ``adjust_options(options, args)``: You can change options here, or change the args (if you accept different kinds of arguments, be sure you modify ``args`` so it is only ``[DEST_DIR]``). ``after_install(options, home_dir)``: After everything is installed, this function is called. This is probably the function you are most likely to use. An example would be:: def after_install(options, home_dir): subprocess.call([join(home_dir, 'bin', 'easy_install'), 'MyPackage']) subprocess.call([join(home_dir, 'bin', 'my-package-script'), 'setup', home_dir]) This example immediately installs a package, and runs a setup script from that package. If you provide something like ``python_version='2.5'`` then the script will start with ``#!/usr/bin/env python2.5`` instead of ``#!/usr/bin/env python``. You can use this when the script must be run with a particular Python version. 
""" filename = __file__ if filename.endswith('.pyc'): filename = filename[:-1] f = codecs.open(filename, 'r', encoding='utf-8') content = f.read() f.close() py_exe = 'python%s' % python_version content = (('#!/usr/bin/env %s\n' % py_exe) + '## WARNING: This file is generated\n' + content) return content.replace('##EXT' 'END##', extra_text) ##EXTEND## def convert(s): b = base64.b64decode(s.encode('ascii')) return zlib.decompress(b).decode('utf-8') ##file site.py SITE_PY = convert(""" eJzFPf1z2zaWv/OvwMqToeTKdOJ0OztO3RsncVrvuYm3SWdz63q0lARZrCmSJUjL6s3d337vAwAB kpLtTXdO04klEnh4eHhfeHgPHQwGp0Uhs7lY5fM6lULJuJwtRRFXSyUWeSmqZVLOD4q4rDbwdHYb 30glqlyojYqwVRQE+1/4CfbFp2WiDArwLa6rfBVXySxO041IVkVeVnIu5nWZZDciyZIqidPkd2iR Z5HY/3IMgvNMwMzTRJbiTpYK4CqRL8TlplrmmRjWBc75RfTn+OVoLNSsTIoKGpQaZ6DIMq6CTMo5 oAktawWkTCp5oAo5SxbJzDZc53U6F0Uaz6T45z95atQ0DAOVr+R6KUspMkAGYEqAVSAe8DUpxSyf y0iI13IW4wD8vCFWwNDGuGYKyZjlIs2zG5hTJmdSqbjciOG0rggQoSzmOeCUAAZVkqbBOi9v1QiW lNZjDY9EzOzhT4bZA+aJ43c5B3D8kAU/Z8n9mGED9yC4aslsU8pFci9iBAs/5b2cTfSzYbIQ82Sx ABpk1QibBIyAEmkyPSxoOb7VK/TdIWFluTKGMSSizI35JfWIgvNKxKkCtq0LpJEizN/KaRJnQI3s DoYDiEDSoG+ceaIqOw7NTuQAoMR1rEBKVkoMV3GSAbP+GM8I7b8n2TxfqxFRAFZLiV9rVbnzH/YQ AFo7BBgHuFhmNessTW5luhkBAp8A+1KqOq1QIOZJKWdVXiZSEQBAbSPkPSA9FnEpNQmZM43cjon+ RJMkw4VFAUOBx5dIkkVyU5ckYWKRAOcCV7z78JN4e/b6/PS95jEDjGX2ZgU4AxRaaAcnGEAc1qo8 THMQ6Ci4wD8ins9RyG5wfMCraXD44EoHQ5h7EbX7OAsOZNeLq4eBOVagTGisgPr9N3QZqyXQ538e WO8gON1GFZo4f1svc5DJLF5JsYyZv5Azgm81nO+iolq+Am5QCKcCUilcHEQwQXhAEpdmwzyTogAW S5NMjgKg0JTa+qsIrPA+zw5orVucABDKIIOXzrMRjZhJmGgX1ivUF6bxhmammwR2nVd5SYoD+D+b kS5K4+yWcFTEUPxtKm+SLEOEkBeCcC+kgdVtApw4j8QFtSK9YBqJkLUXt0SRqIGXkOmAJ+V9vCpS OWbxRd26W43QYLISZq1T5jhoWZF6pVVrptrLe0fR5xbXEZrVspQAvJ56QrfI87GYgs4mbIp4xeJV rXPinKBHnqgT8gS1hL74HSh6qlS9kvYl8gpoFmKoYJGnab4Gkh0HgRB72MgYZZ854S28g38BLv6b ymq2DAJnJAtYg0Lkt4FCIGASZKa5WiPhcZtm5baSSTLWFHk5lyUN9ThiHzLij2yMcw3e55U2ajxd XOV8lVSokqbaZCZs8bKwYv34iucN0wDLrYhmpmlDpxVOLy2W8VQal2QqFygJepFe2WWHMYOeMckW V2LFVgbeAVlkwhakX7Gg0llUkpwAgMHCF2dJUafUSCGDiRgGWhUEfxWjSc+1swTszWY5QIXE5nsG 9gdw+x3EaL1MgD4zgAAaBrUULN80qUp0EBp9FPhG3/Tn8YFTzxfaNvGQizhJtZWPs+CcHp6VJYnv TBbYa6yJoWCGWYWu3U0GdEQxHwwGQWDcoY0yX3MVVOXmGFhBmHEmk2mdoOGbTNDU6x8q4FGEM7DX zbaz8EBDmE7vgUpOl0WZr/C1ndtHUCYwFvYI9sQlaRnJDrLHia+QfK5KL0xTtN0OOwvUQ8HlT2fv zj+ffRQn4qpRaeO2PruGMc+yGNiaLAIwVWvYRpdBS1R8Ceo+8Q7MOzEF2DPqTeIr46oG3gXUP5U1 vYZpzLyXwdn709cXZ5OfP579NPl4/ukMEAQ7I4M9mjKaxxocRhWBcABXzlWk7WvQ6UEPXp9+tA+C SaImxabYwAMwlMDC5RDmOxYhPpxoGzxJskUejqjxr+yEn7Ba0R7X1fHX1+LkRIS/xndxGIDX0zTl RfyRBODTppDQtYI/w1yNgmAuFyAstxJFarhPnuyIOwARoWWuLeuveZKZ98xH7hAk8UPqAThMJrM0 VgobTyYhkJY69HygQ8TuMMrJEDoWG7frSKOCn1LCUmTYZYz/9KAYT6kfosEoul1MIxCw1SxWklvR 9KHfZIJaZjIZ6gFB/IjHwUVixREK0wS1TJmAJ0q8glpnqvIUfyJ8lFsSGdwMoV7DRdKbneguTmup hs6kgIjDYYuMqBoTRRwETsUQbGezdKNRm5qGZ6AZkC/NQe+VLcrhZw88FFAwZtuFWzPeLTHNENO/ 8t6AcAAnMUQFrVQLCuszcXl2KV4+PzpABwR2iXNLHa852tQkq6V9uIDVupGVgzD3CsckDCOXLgvU jPj0eDfMVWRXpssKC73EpVzld3IO2CIDO6ssfqI3sJeGecxiWEXQxGTBWekZTy/GnSPPHqQFrT1Q b0VQzPqbpd/j7bvMFKgO3goTqfU+nY1XUeZ3CboH041+CdYN1BvaOOOKBM7CeUyGRgw0BPitGVJq LUNQYGXNLibhjSBRw88bVRgRuAvUrdf09TbL19mE964nqCaHI8u6KFiaebFBswR74h3YDUAyh61Y QzSGAk66QNk6AORh+jBdoCztBgAQmGZFGywHltmc0RR5n4fDIozRK0HCW0q08HdmCNocGWI4kOht ZB8YLYGQYHJWwVnVoJkMZc00g4EdkvhcdxHxptEH0KJiBIZuqKFxI0O/q2NQzuLCVUpOP7Shnz9/ ZrZRS4qIIGJTnDQa/QWZt6jYgClMQCcYH4rjK8QGa3BHAUytNGuKg48iL9h/gvW81LINlhv2Y1VV HB8ertfrSMcD8vLmUC0O//yXb775y3PWifM58Q9Mx5EWHRyLDukd+qDRt8YCfWdWrsWPSeZzI8Ea SvKjyHlE/L6vk3kujg9GVn8iFzeGFf81zgcokIkZlKkMtB00GD1TB8+il2ognomh23Y4Yk9Cm1Rr 
xXyrCz2qHGw3eBqzvM6q0FGkSnwF1g321HM5rW9CO7hnI80PmCrK6dDywMGLa8TA5wzDV8YUT1BL EFugxXdI/xOzTUz+jNYQSF40UZ397qZfixnizh8v79Y7dITGzDBRyB0oEX6TRwugbdyVHPxoZxTt nuOMmo9nCIylDwzzaldwiIJDuOBajF2pc7gafVSQpjWrZlAwrmoEBQ1u3ZSprcGRjQwRJHo3ZnvO C6tbAJ1asT6zozerAC3ccTrWrs0KjieEPHAiXtATCU7tcefdc17aOk0pBNPiUY8qDNhbaLTTOfDl 0AAYi0H584Bbmo3Fh9ai8Br0AMs5aoMMtugwE75xfcDB3qCHnTpWf1tvpnEfCFykIUePHgWdUD7h EUoF0lQM/Z7bWNwStzvYTotDTGWWiURabRGutvLoFaqdhmmRZKh7nUWKZmkOXrHVisRIzXvfWaCd Cz7uM2ZaAjUZGnI4jU7I2/MEMNTtMOB1U2NowI2cIEarRJF1QzIt4R9wKygiQeEjoCVBs2AeK2X+ xP4AmbPz1V+2sIclNDKE23SbG9KxGBqOeb8nkIw6fgJSkAEJu8JIriOrgxQ4zFkgT7jhtdwq3QQj UiBnjgUhNQO400tvg4NPIjyzIAlFyPeVkoX4Sgxg+dqi+jjd/YdyqQkbDJ0G5CroeMOJG4tw4hAn rbiEz9B+RIJON4ocOHgKLo8bmnfZ3DCtDZOAs+4rbosUaGSKnAxGLqrXhjBu+PdPJ06LhlhmEMNQ 3kDeIYwZaRTY5dagYcENGG/N22Ppx27EAvsOw1wdydU97P/CMlGzXIW4we3ELtyP5ooubSy2F8l0 AH+8BRiMrj1IMtXxC4yy/AuDhB70sA+6N1kMi8zjcp1kISkwTb8Tf2k6eFhSekbu8CNtpw5hohij PHxXgoDQYeUhiBNqAtiVy1Bpt78LducUBxYudx94bvPV8cvrLnHH2yI89tO/VGf3VRkrXK2UF42F Alera8BR6cLk4myjjxv1cTRuE8pcwS5SfPj4WSAhOBK7jjdPm3rD8IjNg3PyPgZ10GsPkqs1O2IX QAS1IjLKYfh0jnw8sk+d3I6JPQHIkxhmx6IYSJpP/hU4uxYKxjiYbzKMo7VVBn7g9TdfT3oioy6S 33w9eGCUFjH6xH7Y8gTtyJQGIHqnbbqUMk7J13A6UVQxa3jHtilGrNBp/6eZ7LrH6dR4UTwzvlfJ 71J8J472918e9bfFj4GH8XAJ7sLzcUPB7qzx43tWW+Fpk7UDWGfjaj57NAXY5ufTX2GzrHR87S5O UjoUADIcHKCeNft8Dl30KxIP0k5d45Cgbyumrp4DY4QcWBh1p6P9slMTe+7ZEJtPEasuKns6AaA5 v/IO9d2zyy5UveyGh5/zScNRj5byZtznV3yJhsXPH6KMLDCPBoM+sm9lx/+PWT7/90zykVMxx85/ oGF8IqA/aiZsRxiatiM+rP5ld02wAfYIS7XFA93hIXaH5oPGhfHzWCUpsY+6a1+sKdeAwqx4aARQ 5uwC9sDBZdQn1m/qsuRzZ1KBhSwP8Cx1LDDNyjiBlL3VBXP4XlaIiW02o7C1k5ST96mRUAei7UzC ZgvRL2fL3ISvZHaXlNAXFO4w/OHDj2dhvwnBkC50erwVebwLgXCfwLShJk74lD5Moad0+delqr2L 8QlqjvNNcFiTrdc++DFhE1LoX4MHgkPe2S2fkeNmfbaUs9uJpHN/ZFPs6sTH3+BrxMSmA/jJWype UAYazGSW1kgr9sExdXBRZzM6KqkkuFo6zxfzfug0nyOBizS+EUPqPMcolOZGClTdxaV2RIsyx8xS USfzw5tkLuRvdZziDl8uFoALnmPpVxEPT8Eo8ZYTEjjjUMlZXSbVBkgQq1wfA1LugtNwuuGJDj0k +cSHCYjZDMfiI04b3zPh5oZcJk7gH37gJHELjh3MOS1yFz2H91k+wVEnlKA7ZqS6R/T0OGiPkAOA AQCF+Q9GOojnv5H0yj1rpDV3iYpa0iOlG3TIyRlDKMMRBj34N/30GdHlrS1Y3mzH8mY3ljdtLG96 sbzxsbzZjaUrEriwNn5lJKEvhtU+4ehNlnHDTzzMWTxbcjtM3MQETYAoCrPXNjLF+ctekIuP+ggI qW3n7JkeNskvCWeEljlHwzVI5H48z9L7epN57nSmVBrdmadi3NltCUB+38MoojyvKXVneZvHVRx5 cnGT5lMQW4vuuAEwFu1cIA6bZneTKQd6W5ZqcPlfn3748B6bI6iByXSgbriIaFhwKsP9uLxRXWlq 9oEFsCO19HNyqJsGuPfIIBuPssf/vKVkD2QcsaZkhVwU4AFQSpZt5iYuhWHruc5w0s+Zyfnc6UQM smrQTGoLkU4vL9+efjodUPRv8L8DV2AMbX3pcPExLWyDrv/mNrcUxz4g1DrM1Rg/d04erRuOeNjG GrAdH7714OgxBrs3YuDP8t9KKVgSIFSk48BPIdSj90BftE3o0McwYidzzz1kY2fFvnNkz3FRHNHv O4FoD+Cfe+IeYwIE0C7U0OwMms1US+lb87qDog7QR/p6X7wFa2+92jsZn6J2Ej0OoENZ22y7++cd 2bDRU7J6ffb9+fuL89eXp59+cFxAdOU+fDw8Emc/fhaUKoIGjH2iGLMkKkxKAsPiVimJeQ7/1Rj5 mdcVx4uh19uLC31os8I6FUxcRpsTwXPOaLLQOHzGAWn7UKciIUap3iA5BUGUuUMFQ7hfWnExisp1 cjPVGU3RWa311ksXepmCMDrijkD6oLFLCgbB2WbwilLQK7MrLPkwUBdJ9SClbbTNEUkpPNjJHHCO wsxBixczpc7wpOmsFf1V6OIaXkeqSBPYyb0KrSzpbpgp0zCOfmjPuhmvPg3odIeRdUOe9VYs0Gq9 Cnluuv+oYbTfasCwYbC3MO9MUqYIpU9jnpsIsREf6oTyHr7apddroGDB8MyvwkU0TJfA7GPYXItl AhsI4MklWF/cJwCE1kr4ZwPHTnRA5pioEb5ZzQ/+FmqC+K1/+aWneVWmB/8QBeyCBGcVhT3EdBu/ hY1PJCNx9uHdKGTkKEtX/K3G3H5wSCgA6kg7pTLxYfpkqGS60Kkmvj7AF9pPoNet7qUsSt293zUO UQKeqSF5Dc+UoV+ImV8W9hinMmqBxrIFixmW/7kZCeazJz4uZZrqZPXztxdn4DtiJQVKEB/BncFw HC/B03Sdh8fliS1QeNYOr0tk4xJdWMq3mEdes96gNYoc9fZSNOw6UWC426sTBS7jRLloD3HaDMvU AkTIyrAWZlmZtVttkMJuG6I4ygyzxOSypFxWnyeAl+lpzFsi2CthnYaJwPOBcpJVJnkxTWagR0Hl gkIdg5AgcbEYkTgvzzgGnpfK1DDBw2JTJjfLCs85oHNE9RPY/MfTzxfn76mm4Ohl43X3MOeYdgJj zic5wWxBjHbAFzcDELlqMunjWf0KYaD2gT/tV5yocsIDdPpxYBH/tF9xEdmJsxPkGYCCqou2eOAG 
wOnWJzeNLDCudh+MHzcbsMHMB0OxSKxZ0Tkf7vy6nGhbtkwJxX3Myycc4CwKm52mO7vZae2PnuOi wBOv+bC/Ebztky3zmULX286bbXlw7qcjhVjPChh1W/tjmESxTlM9HYfZtnELbWu1jf0lc2KlTrtZ hqIMRBy6nUcuk/UrYd2cOdDLqO4AE99qdI0k9qrywS/ZQHsYHiaW2J19iulIFS1kBDCSIXXhTg0+ FFoEUCCUCDx0JHc82j/y5uhYg4fnqHUX2MYfQBHqtFwq98hL4ET48hs7jvyK0EI9eixCx1PJZJbb lDH8rJfoVb7w59grAxTERLEr4+xGDhnW2MD8yif2lhAsaVuP1FfJdZ9hEefgnN5v4fCuXPQfnBjU WozQaXcrN2115JMHG/RWhewkmA++jNeg+4u6GvJKbjmH7i2E2w71YYiYiAhN9Tn8MMRwzG/hlvVp APdSQ8NCD++3LaewvDbGkbX2sVXgFNoX2oOdlbA1qxQdyziVhcYXtV5AY3BPGpM/sE91zpD93VMy 5sSELFAe3AXpzW2gG7TCCQOuXOKyz4Qy45vCGv1uLu9kCkYDjOwQCx9+tYUPo8iGU3pTwr4Yu8vN 5aYfN3rTYHZsKjPQM1MFrF+UyeoQ0emN+OzCrEEGl/oXvSWJs1vykt/8/Xws3rz/Cf59LT+AKcXK xbH4B6Ah3uQl7C+59JbuRMCijoo3jnmtsLyRoNFRBV8fgW7bpUdnPBbR1SZ+mYnVlAITbMsV31kC KPIEqRy98RNMDQX8NkVeLW/UeIp9izLQL5EG2+tesFbkULeMltUqRXvhREma1bwaXJy/OXv/8Syq 7pHDzc+BE0Xxc7NwOvqMuMTzsLGwT2Y1Prl2HOcfZFr0+M1602lqaHDTKULYlxR2o8n3YcR2cxGX GDkQxWaezyJsCSzPZXvVGhzpkbO/fNDQe1YWYQ1H+hSt8ebxMVBD/NJWRANoSH30nKgnIRRPsX6M H0eDflM8FhTahj/7t+u5GxnXhUA0wTamzayHfnerC5dMZw3PchLhdWKXwdSGpkmsVtOZWzP4IRP6 OhPQcnTOIRdxnVZCZiC5tMmneyVA07tlfiwhzCpszqj2jcI06TreKCcJKVZigKMOqDQeD2QoYgh7 8B/jW7YHWH8oai5kBuiEKO2fcqerqmdLlmDeEhH1ehIP1kn20s3n0RTmQXmHPGscWZgnuo2M0Y2s 9Pz5wXB09aLJdKCo9Mwr8p0VYPVcNtkD1Vns7+8PxH887P0wKlGa57fglgHsXq/lgl5vsdx6cna1 up69eRMBP86W8goeXFP03D6vMwpN7uhKCyLtXwMjxLUJLTOa9i27zEG7kg+auQUfWGnL8XOW0KVF GFqSqGz13U8YdjLSRCwJiiGM1SxJQg5TwHps8hrr8zDMqPlF3gPHJwhmjG/xhIy32kv0MCmX1nKP RedEDAjwgHLLeDQqcKYKNcBzcrnRaE7Os6RqSkueu4enupC/sncRab4S8Rolw8yjRQyn1NNj1cbD zneyqLdjyWdXbsCxNUt+/RDuwNogafliYTCFh2aRZrksZ8ac4ools6RywJh2CIc70xVMZH2ioAel Aah3sgpzK9H27Z/suriYfqBz5AMzkk4fquy1VhwcirNWgmEUNeNTGMoS0vKt+TKCUd5TWFt7At5Y 4k86qIp1Bd7tG26JY53pWzU4f6O5agPg0E1OVkFadvR0hHN9mIXPTLvlLgz80BadcLtLyqqO04m+ vGGCDtvEHqxrPG1p3M6iT+utgJOfgwd8oLP4wXEwWTZIT0zCNVUaJ2KhQxSRW23mF2YVOXp5R+wr gU+BlJlPTI20CSJdWXa1xac6Z9NR8QjqK1PQtMUzN5U0nSIUF/Mx5TmZEogtXrTBpX2nhfjuRAxf jMWfWxuhWbHBW5kA5Wfz6Nk89H0y6np1fNTYme7GswVhK5CX10+ebppMaXphX/r5w3110iFuAFcg O4tEzg+eKcSOcf5SqBpKM6/tnEIzxur0PZv1pAuzm3IVqkqbgle/bhSKo1qM/2kHMRXfWg9wcSwK LVsgW9BvEk9ayX/20jVMDNTo+SuLnsuk73AKv+HFKfBeE9R1dLYeWuoMewu2Z0+uyyj5CKpp2HD8 gx7Vk0SpnSPeaYXHk43Euaz/BB4O6ZIZYpqvWsfC/07m4aT9bYeLHSy/+XoXnq6C6a2Y6FnQx1Yx 8KK3SxeahTef/qCXxzJ9Xf940dkqGE9d/kdkBTwsZY+XsF3S9WQq6V79tMING6ZLL2N+g4a3Lo5t QMMoHjxwGrpJdPipbnsrf1jpoAaubsNd0+f+u+auWwR25uYMuTN3v8LPpYHuu51f+mjAm0lNiEdl pjdqoV/juMpirFMX+gOj+oPkdzvhTLfonofAmEQJDLMSm2rsjW1YxTP3O+bhHPAltm5BZ69Fak27 o1jaHP8Yc8I5B/jc1nhTIslccyB7p3Qr2YRTEyfy5kZNYrwRb0JbGkqj6fiqxkl+RxeayVhtjG+L 18YACMNNOuHRzWkGxoBtE9/My1kozv0ggoamXE0n+VMlc45TaUcawEUcn6L+Jv7J2ZuDVGJYUdVl UcLeY6Dvb+X0iL6M0gaoCZesYnVrUDc9xvo6TxyCc3JMEShHxXg/41EHCME63rmcitOhJxP7Dvjl eVPsnowtQ8isXskyrpqLXvzD2ATsSzMClf7iAjsBkUYyW5ziIpZY/nCQwpCE/f6VduW9rcyOCveR 1XqPZyvqoQNtTymed2yP4ebk3l705l4wNKdrgV1XwjZruM9ebgNLYW4tI12pIxT8Vt+kxPdzcvwU nRGHj0Du3cI3PwnYqjV2hSwazjNXMXSvzsHabbLFfTfidbige/ddaztjx/f1hmWWjhOypbGlonbg ehVPM9qo2bdjvt4D+3Y/J/uJ+3YP/iP37fr+QjA4Gh+tD3qztB/Y4LOacC8DbBgB+kyASHh+2LpK zpjMoZvzDJvr5H5gL+NlnekUUhkzgRzZvSWKQPClf8pNEPUu5dq1b/elix5/f/Hh9ekF0WJyefrm P0+/p5wYDFK3bNajAxtZfsDUPvCyb90gh85j6Bu8wbbndk0uIdEQOu87R8A9EPrLhfoWtK3I3Nfb OnTKLrqdAPHd025B3aayeyF3/DOd4u9mL7TSZAP9lHMazS/nYNg8MucjLA7N+Yd534SstYx2Inrb Fs7JLuyqE+236vsYt0QbRzbHlVYAI9XIXzZQbQoWbDiUHZX2/yKBEnOx2MvcZQJSOJPOnXp0nR6D qvz/F0MJyi7G0zZ2GMf2XmNqx0F5ZS/sxhO3mYwMQbxqq0F3fq6wz2W6hQpBwApP3xjHiBj9p4+x 7KHvMyWuDqiu8wCVzbX9hWumndy/J3i0W9mblxTnh/DhFjRe1Kl7XGv7dDqQ80dnAPnCKSQAzXcI dG7EUwF7o8/ECnG6ESFsJPWxJOYmEh31tWkO8mg3HewNrZ6Lg21Vf27VmxAvtjectwrrdI8j7qEe 
6KFqU1vlWGBMkttWzie+I8h8iCToqiXP+cCTS33DL3y9u3pxbEO6yO/42lEklMwzcAz7lZMMt/N6 P6c7MUs5pmwp3LM5xaC6xbUDlX2CbXucTkXAln2QOV1mSAPvfX9UxfTwri0ftDG1rHcMUxLDZ2pE 03JqKDTu9smoO91GbXWBcD3II4B0VCDAQjAd3ejk5204yXb4XO8KpzVdjOrG9UNHKihXx+cI7mF8 vwa/dneq43xUd0bR9OcGbQ7USw7Czb4Dtxp5IZHtJqE99YYPtrgAXBLb3//FI/p3s8hs96NdfrVt 9bK3DIt9WUw8xHyMFonM4wiMDOjNIWlrzFY3go63gDR0dBmqmRvyBTp+lMyI1x7TBoOc2Yn2AKxR CP4PNIke9w== """) ##file ez_setup.py EZ_SETUP_PY = convert(""" eJzNWmmP20YS/a5fwSgYSIJlDu9DhrzIJg5gIMgGuYCFPavpc8SYIhWS8li7yH/f181DJDWcJIt8 WAbOzJDN6qpXVa+qWvr8s+O52ufZbD6f/z3Pq7IqyNEoRXU6VnmelkaSlRVJU1IlWDR7K41zfjIe SVYZVW6cSjFcq54WxpGwD+RBLMr6oXk8r41fTmWFBSw9cWFU+6ScySQV6pVqDyHkIAyeFIJVeXE2 HpNqbyTV2iAZNwjn+gW1oVpb5Ucjl/VOrfzNZjYzcMkiPxji3zt930gOx7yolJa7i5Z63fDWcnVl WSF+PUEdgxjlUbBEJsz4KIoSIKi9L6+u1e9YxfPHLM0Jnx2SosiLtZEXGh2SGSStRJGRSnSLLpau 9aYMq3hulLlBz0Z5Oh7Tc5I9zJSx5Hgs8mORqNfzo3KCxuH+fmzB/b05m/2oYNK4Mr2xkiiM4oTf S2UKK5KjNq/xqtby+FAQ3vejqYJh1oBXnsvZV2++/uKnb37c/fzm+x/e/uNbY2vMLTNgtj3vHv30 /TcKV/VoX1XHze3t8XxMzDq4zLx4uG2Cory9KW/xX7fb7dy4UbuYDb7vNu7dbHbg/o6TikDgf7TH Fpc3XmJzar88nh3TNcXDw2JjLKLIcRiRsWU7vsUjL6JxHNBQOj4LRMDIYv2MFK+VQsOYRMSzXOH5 liMpjXwhXGnHnh26PqMTUpyhLn7gh6Ef84gEPJLM86zQIjG3Qid0eBw/L6XTxYMBJOJ2EHOHiiCw JXEdEgjfEZ6MnCmL3KEulLo2syQL3TgmgeuHcRz6jPBY+sQK7OhZKZ0ubkQihrs8EIw7juOF0g5j GXISBLEkbEKKN9QlcCzPJ44nuCdsQVkYSmG5MSGeCGQo/GelXHBh1CF25EOPiBMmJXW4DX0sl7rU Zt7TUtgoXqgrHer7bswD+DWUoUd4GNsOBJHYiiYsYuN4gT1ccCAZhNzhjpTC9iwrdgNPOsSb8DSz raEyDHA4hPrcJZbjB54fwD/MdiPLIqEVW8+L6bTxQ44X4aOYRlYYOsyPie+SyHNd4nM+iUwtxm/F cOEFhEXAMg5ZFPt+6AhfRD7CUdCIhc+LCTptIoFMIkJaAQBymAg824M0B0YC8Alvg1SG2DiUCIIc tl2O95FGTiRCSnzqE2jExfNiLp7igRvLmFoQ5jHP8eLQcj0umCOYxZxJT9lDbAKPxZ50qQxJiCh0 BYtcYVEH7g69mDrPi+mwoZLEjm1ZlMNNHDkBSYJzF44PPCsKJsSMeEZaVuBRGRDi0JBbUAvIeghs K7JD5kw5asQzgR3YsSMEc33phQJeswPGA2I7kOqEU1JGPCPtCAQF8uUSoUIcP2YxpEibhzSM5ARb sRHPCEvw0Asih8VxRCUNgXRkIXot+Dy0p5ztDp1EqJB2IDmHYb7v217k2SwEf/E4igN/SsqIrahF Y9u1CSPUdSyAAZ4LpecxH0QR2vJZKZ1FCBKJPQPuSSpdZBSVsRcwC1CB9cRUwHhDiyLF1iB+12Gc xix0KJMe6MsJpBMROcVW/tAiIWLJIwvqICERsdIV4HQ/BGHwyA6mPO0PLSISXMUlqoodWrYQADdE cfIpQ8EjwRTL+CMfRdyVAQjBY4yQKLQ9BA53Q8oYd7nPJ6QEQ4uQMBGqfGTbASpRFHmhAxGomL4X I7WniDMYVTfmB0T6IQW+6B6QDYEFQzzPRYL5ZIobgqFF1JERCX0HxR60S10UaQuu5sKXaCV8d0JK OKI7Cz6SMeHMJYHtC9+2faQhWooIFDgZL+GoEpBIxr6HKsDB5ZakQcikLR24AY+cqQwIhxZ5qLEE fCvRMiABPdezbVtyEbk2/oVTukSjbshSvZATA5GYo36oEASBR66lGivreSmdRYwSNwI3oOfwIpdZ KmYRbQCbobJMloFoaJEdOnYIkoOjY85s3/Jji/gRdQXyPPanPB0PLYLuzLPQzNgKYerFgfCYpMKK YCuzpjwdj5gBQYbGDrXVjSIegJ2IEFYA8mKB6031d42UziIp4FpX+MQOqe0wuIn5nk1D1F5UfjFV SeJhPWIEaWNLxZrEERzEZMcuKltI/dhBjwMpv816EwHGm3JWFedNPXDtSblPE9rOW+jdZ+ITExg1 3uo7b9RI1KzFw/66GRfS2H0kaYJuX+xwawmddhnmwbWhBoDVRhuQSKO9r2bGdjyoH6qLJ5gtKowL SoR+0dyLT/VdzHftMshpVn627aS8a0XfXeSpC3MXpsHXr9V0UlZcFJjrloMV6porkxoLmvnwBlMY wRjGPzOM5Xd5WSY07Y1/GOnw9+Fvq/mVsJvOzMGj1eAvpY/4lFRLp75fwLlFpuGqAR0Nh3pRM15t R8PculNrR0kptr2Bbo1JcYdRdZuXJjsV+K0Opu4FLlJy3tr+rHESxsYvTlV+AA4M0+UZo2jGbzuz eycFaq4/kA/wJYbnj4CKKIAAnjLtSKp9Pc7fN0rfG+U+P6VcTbOkxrovrZ3Ms9OBisKo9qQyMAh3 grUsNQFnCl1DYurtlDplXL8ijPsBEPeGGmmXj/uE7dvdBbRWRxO1PGNxu1iZULJG6V5tqeT0jjH2 ohgckDwmmLnpJRIEXyMi6wDXKmc58EgLQfj5oj72eCt76mnY9XbN2YQWUzVaamlUaFUaQPSJBcsz XtbYtGocCQJFgQpEVFolVQLXZQ+984za4439eSb0eUJ9NsJrvQBqnioMnzwfUVo2hw2iEabPcor8 hJ1ErUqdZ8Q4iLIkD6I+4Lgk3f29jpeCJKUwfjiXlTi8+aTwympHZAapcK8+2SBUUYsyXoWgMqY+ 9TDbCNU/H0m5q1kI9m+NxfHDw64QZX4qmCgXimHU9oecn1JRqlOSHoGOH9c5gazjiIMGtuXqwiQq 5LaXpOnlZYPYKAXbtFuPEu3CAW2SmEBWFNXSWqtNeiTXEHW306v+6Q5tj/l2jWN2mpi3SkbtIBD7 WNYAIP3wCYbvXmoJqQ9I8+h6h4Foswmu5fyi8evt/EUD1epVI7uvwlDAz/XKL/NMpgmrAM2mz/59 
z/9Ztp//uL9E/0S8L19vb8pVl8ttDuujzPfZkPDnjGSLSqVUlyLgDHV8p3OkOa5T2XLKMoSyaXyX CkRIu/xKnsohlcogIAFbWg1lUpQA4lSqdFhAwrl1vfHyp57yC3Mk7332Plt+eSoKSAOd1wJuilHd WqFqXWJZmKR4KN9Zd8/XrCd991WCwEzoSdXRb/Pq6xzs3AsUUpazJtvS4ZvrfkK+G6XznXrlc4Ci CT//MKiZ/RCti+dTmfpXV1CVz8i4Qen86ok6qTOTXHjeSHNWdxmaEWsbkqo+9NVdw/9p3axZVx3r t3Xz98qmuqd2va6ZNZXfX8rgRKnL6wLX1jdVJ1h1IunFiKZuDGtD+6lBgfJBHUTWHvGY1kHbtqBb o8dPL29KtNM3peqm5/1cGJ1q14EPuf1yoDAzXgy7vpJ8FNB+iy675vlf8iRbtlWhXVqLKwumxOnW 91sU6LZbVuzTvo68K6tyWYtdbVQyfPExT1QAHQVRJbBVp+ySbUDR6tKhyCFIoVG2KKX5w2CV6q+V X4bvqgsrzUdSZEuF88u/7qo/9Gi4siHn8qkov9EhoT4MWYqPIlN/wJwjlJ3tRXpUrdzbOtp67UQX Kug3VPyrj2uWCooZWH5tgKpm6tYB6ZwJAIlXkIeqmQXpikdFsQQTalnqt/u0rknZnDVbgo2btuWy I1TmbTSbs9kSjCg2CmEt5kDYXnVQPBd1rdnDvVCiesyLD82ma+NYF4ycVqT5qE0xhWaJG5CpYhEg wHQjrhdA8iUTm8wpRFOA+gaYq7/SiwiK9VXI9Ej3qkfSUbZW2XT1GpoEHaxVoobFphdKhTi+qn8s R+3UMDpbGtalrpzrLUalTKdcww8mfuZHkS2vln1ufI8+/vaxSCqQD3wMfHUHDQ7/sFaf9j0q76kO gBUqDUGNLC+Kkw6OVIyEab/3w0M11pXQ61tObK/mk7OpuRoGmGrGWK6GGtcsoq2puWI9f6RzwIkH prajnqy7lzDfqTlvM6YAbLDRu7A0L8VydUURZbXRQvvPm2rWkhYUTNUvLW3N/sil6vcBkb5ED/Jx PVWxLzX37XOfg+oa+wbdUrOqLRBP9cejz5efa47reaDj6iuJlzXPzwx6+Lauu6zhZDAYDLTPVGr0 xgGWHw4w1By0he0JDWlmrPZqfKQhTlELNM6rF+oA5W6lw/RRLAod1sJQZfx3Q0VZqnAe1Sql9nUN waJThqHuw7IzS6TlsMHvmbbbNWjtdsYWU55lWqa9+NNd/z9B8Jpc1ahLyzwVyNWJabft41FM6l79 qkcvxCH/qPlWe6L+GoMealE5KlBv+ju8O2q+J7vsJql+HTYrvWGq3+1cz3d/YEbDz2ea+dEgtpmO 9v85JJ9Ls07w70q5iuan8q5Nt7vhGK7BtlYIfFilqj8cx3SkqCdPR6ja5S8CoFNfa37BZbCldqAO 8/kPV23RfN0yyhwk+KALUaFOdBGEaJIuAT1/Qt5i+T3aqXn7hRvzeB4OlPP6qzTX3zYxV4vmpPLY 1ad2hCkv9PyTfmqoFKGnJK1e1ke/EPmgJsWzYuR+FBfN/KN6rfaouBN7AUT33JfuWv2pViwvXbUW 0tZCXTQXBV1cnnUnx+rdu+bUWbZF9cmTZ9kVu3oErEv0u7n646bY4N8aXIHxoek064as3chE8T2U y9Vd97JZwuKudB7VUDGf15NCXaT7wMADGCGrdmLQXxHatnfNB1HVSavuL/uT9E53DLtdE/UdJI2M taFhedW0RC0Ar8bGHkiFaXALPc1SkILtl/P3Wf8rPu+z5bt//Xb3YvXbXLcnq/4Yo9/ucdETjI1C rr9klRpCscBn8+skbRmxVhX/f7fRgk3dei/t1R3GMA3kC/20fojRFY82d0+bv3hsYkI27VGneg+A GcxocdxuF7udStjdbtF9sJEqiVBT5/BrR5fD9u939h3eefkSYNWp0itfvdzpljubu6fqouaIi0y1 qL7+C1AkCcw= """) ##file distribute_from_egg.py DISTRIBUTE_FROM_EGG_PY = convert(""" eJw9j8tqAzEMRfcG/4MgmxQyptkGusonZBmGoGTUGYFfWPKE6dfXTkM3gqt7rh47OKP3NMF3SQFW LlrRU1zhybpAxoKBlIqcrNnBdRjQP3GTocYfzmNrrCPQPN9iwzpxSQfQhWBi0cL3qtRtYIG/4Mv0 KApY5hooqrOGQ05FQTaxptF9Fnx16Rq0XofjaE1XGXVxHIWK7j8P8EY/rHndLqQ1a0pe3COFgHFy hLLdWkDbi/DeEpCjNb3u/zccT2Ob8gtnwVyI """) ##file distribute_setup.py DISTRIBUTE_SETUP_PY = convert(""" eJztPF1z2ziS7/oVOLlcpHISE2fm5q5cp6nKTDyzrs0mqTjZfUhcMkRCEsf8GpC0ov31190ACICk ZOdm9uGqzrtjS0Sj0ejvboA5+7fq0OzKYjKdTn8qy6ZuJK9YksLfdN02gqVF3fAs400KQJPrDTuU LdvzomFNydpasFo0bdWUZVYDLI5KVvH4nm9FUKvBqDrM2W9t3QBAnLWJYM0urSebNEP08AWQ8FzA qlLETSkPbJ82O5Y2c8aLhPEkoQm4IMI2ZcXKjVrJ4L+8nEwY/GxkmTvUr2icpXlVygapXVlqCd5/ FM4GO5Ti9xbIYpzVlYjTTRqzByFrYAbSYKfO8TNAJeW+yEqeTPJUylLOWSmJS7xgPGuELDjw1ADZ Hc9p0RigkpLVJVsfWN1WVXZIi+0EN82rSpaVTHF6WaEwiB93d/0d3N1Fk8lHZBfxN6aFEaNgsoXP NW4llmlF29PSJSqrreSJK88IlWKimVfW5lO9a5s0674duoEmzYX5vCly3sS7bkjkFdLTfefS/Qo7 qrisxWTSCRDXqI3ksnI7mTTycGmFXKeonGr4083Vh9XN9cerifgaC9jZNT2/QgmoKR0EW7K3ZSEc bGYf7Ro4HIu6VpqUiA1bKdtYxXkSPuNyW8/UFPzBr4AshP1H4quI24avMzGfsX+noQ5OAjtl4aCP YmB4SNjYcsleTI4SfQZ2ALIByYGQE7YBISmC2Mvouz+VyDP2e1s2oGv4uM1F0QDrN7B8AapqweAR YqrAGwAxOZIfAMx3LwO7pCELEQrc5swf03gC+B/YPowPhx22BdPzehqwcwQcwGmY/pDe9GdLAbEO PugV69u+dMo6qisORhnCp/erf7y6/jhnPaaxZ67MXl/98urTm4+rv199uLl+9xbWm76Ifoi+u5h2 Q58+vMHHu6apLp8/rw5VGilRRaXcPtc+sn5egx+LxfPkuXVbz6eTm6uPn95/fPfuzc3ql1d/vXrd Wyi+gIVcoPd//XV1/faXdzg+nX6Z/E00POENX/xdeatLdhG9mLwFN3vpWPikGz2vJzdtnnOwCvYV 
fiZ/KXOxqIBC+j551QLl0v28EDlPM/XkTRqLotagr4XyL4QXHwBBIMFjO5pMJqTG2hWF4BrW8Hdu fNMK2b4MZzNjFOIrxKiYtJXCgYKnwSavwKUCD4y/ifL7BD+DZ8dx8CPRnssiDK4sElCK8zqY68kK sMyS1T4BRKAPW9HE+0Rj6NwGQYEx72BO6E4lKE5EKCcXlZUozLYszErvQ+/ZmxzFWVkLDEfWQrel JhY33QWODgAcjNo6EFXxZhf9BvCasDk+zEC9HFo/v7idDTeisNgBy7C35Z7tS3nvcsxAO1RqoWHY GuK47gbZ607Zg5nrX4qy8TxaYCI8LBdo5PDxmascPQ9j17sBHYbMAZbbg0tje1nCx6SVRnXc3CZy 6OhhEYKgBXpmloMLB6tgfF0+iP4kVM60iUsIo8Z1v/QAtL9RDzdpAauP6ZNSP4tbhdxI5o0UotM2 bTjrNgVwsd2G8N+cdfbTlCsE+3+z+T9gNiRDir8FAymOIPqpg3BsB2GtIJS8LaeOmdHid/y9xniD akOPFvgNfkkH0Z+ipGp/Su+N7klRt1njqxYQooC1EzDyAIOqm5qGLQ2Sp5BTX7+jZCkMfi7bLKFZ xEdlrdstWqe2kQS2pJPuUOfv8y4NX615Lcy2nceJyPhBr4qM7iuJhg9s4F6c14vqcJ5E8H/k7Ghq Az/nzFKBaYb+AjFwU4KGjTy8uJ09nT3aaIDgbi9OiXBk/8do7f0c4ZLVukfcEQFSFonkgwcWsglf zJmVv87H/ULNqUrWpkw1KcOKCoIlGY6Sd68o0jte9pK2HgeWTuI2yg21gyUaQCtHmLC8+I85CGe1 4fdi+VG2ovO9OScHULdQSe4pnScd5eu6zNCMkRcTu4SjaQCCf0OXe3terxSXBPraoLrfrsCkKI+s Ka1G/uZl0maixtLuS7ebwHKlDzj0094XRzTeej6AUs4dr3nTyNADBENZJU7UHy0LcLbm4HhdQEN+ yd4H0c7BVlMdxLFCq5upovMf8RbHmecxI9J9hXBqWfLjcgp1mV5vNkJYfx8+Rp3K/1wWmyyNG39x AXqi6pmY/Ek4A4/SF52rV0Pu43QIhZAFRXsJxXc4gJh+JN9OG0vcNonTTgp/XJ5DEZXWJGr+ACUE VVdfiukQH3Z/Yl4EDSZS2tgB836HnQ1qCelOBnySbYHxJWLvMwECGsVnuh2c5aVEUmNMCw2hm1TW zRyME9CMTg8A8cE4Hbb45OwriEbgvxRfivDnVkpYJTsoxOxczgC5FwFEhFksZhZDZVZCS5vwpT8m snrEQkAHWc/oHAv/3PMUtzgFYzP1osr7YwX2t9jDk6LIMZsZ1esu24FV35bNL2VbJH/YbB8lc4zE QSp0ymGtYil4I/r+aoWbIwvssiyKWCcC9R8NW/QzErt0yNKOGIr017Yt2dkrhdau+QnGl5Ux1UvU mtWcTxvVbSx4LlTWeKdpv4OskJKzNbZQH3iWetiN6RVtvhYSTJqTLXdugXBhy5KyYmrjdL1TUAOa Itidx487ho2XEJxEvDOriyJRkRP7ypwFz4NZxO4UT+5wRa84AAcjpDBZZFfJmVVEEqk9Ege76XoP 1BWOyyKh/mzFMdavxQb9DbZi46blme0S0/4aLLWayIjhX5IzeOGIhNpKqMTXFIgEtuZ1j1xmWHdN HHMcDZcOipdjc5vtP1eoDtiP8vLjCOu07T/RA2rpq0a89NJVFCQEQ4NFpYD8QQBLj2ThBlQnmDJG dLAv3e91zLWXOiu0s0vk+auHMkWtrtB0k44cm+QMonpXv3TWQ06+ns5xS77PVkRpLoWD4TP2QfDk OQVXhhEG8jMgna3B5O7neCqwRyXEcKh8C2hyXEoJ7oKsr4cMdktabewlxfOZRhC8UWHzg51CzBBk DPrAk15SpdhIRCtmzdl0v54OgHRegMjs2MBpaknAWiM5BhBgavgePOAfiXewqAtv27kkYdhLRpag ZWyqQXDYNbivdfk13LRFjO5Me0Eadsep6Ttnz57d72cnMmN1JGFrFD3dWMZr41pu1PNTSXMfFvNm KLXHEmak9iEtVQNr0Px3fype14OB/koRrgOSHj7vFnkCjg4WMB2fV+HpEJUvWCg9IbWxE37hAPDk nL4/77gMtfIYjfBE/6g662WGdJ9m0KgIRtO6cUhX6129NZpOZK3QO4RoCHNwGOADisYG/X9QdOPx fVuRv9io3FoUaksQ201IIn8J3m2lcRifgIhnrt8Adgxhl2Zpy6Iz8HI47WC4N9L2euVDuA1XvW2r DnbWe4TGaiAyEyChxOiwIndAFKuUzt0EWNo+GAuX2rEZ3o0ng5sxT0TKPXHEAOu57sUZ6bwTnoUb vo1KzXi5PvMdJhtcg10rDIXYm+iMTyHSBtG7N6+j8xrP2vAcN8Jfg/bvB0SnAhxmN9R2VBQajLoP jAUufg3HRjX95qGlNS8fIGEG41i5nfmwyngsdqDuwnSze5E8rbEfOQTzif9U3EMs9Jr+kHvpTThz jyvYBmsPzwNhRmruMTjN4nFSgGp9LB7pvyHOnbtdmWfYN1xggdB3+Gbxgb9cg/TvXbZs/BLJcsD2 SSmLd8/63XV7DJj0lOBv5QOqgMiEOigu2wazXnQee36wJmcqnX7G5jBnzpTma+J78tTzHT5YZ64N B4heebDKU3kRZDBJuUM9Y85GTlF171vzc+DbLS/ADnjfQ82ZT82oKp0B5j3LRBPUDNW+8719fnZq pvmNmha6bbx5rwGom/x4PwI/OtwzGE7JQ8N4Z3L9XrMG6dW7rqsZYBnG9DGtBJ+qmvfAVkOs5sSR VnpwY28fJU6jIOjtxHfHxzxN3zkfg+tcNd9AQt2dXCMBmitOAEOQ7p5N17vujMQyHwsWwIAHZ+D+ 8xyoWJXr38Lu2HMWmYZ3BUUhVF4qsj3WaPB8myb8W+Z4LtelF5RypJ56zA2PiNtwx/QWhi6IWHV4 ICaB0elAFT757EQVhXajOhQ7dqSPbmrrB2GBL57WhceuMMwVbd/g9nqkDDyg4eXQBY76HgV+wvP0 ffjPKH8VyAez/NynS5A6f9klSTr1vioeUlkWaGy9/NstjrFs3UEZxioh87SuzQ02Ve6eY6fyPq0q oGl6YhtD+nRuNurECeB4nqbE1XSJ2XFxOXoSwYSgnxf12NnsHKlaDurHj6WZHhlOw66vM4/v7zEz 7/m7J7mTycyvLboIbLPLMx3XIBzG96jVKX4by/WP2orKxq9+/XWBksR4BlJVn7/BVtJBNn0y6B8L UE8N8lZPnUB/pPAA4vP7jm/+o5OsmD3iZR7l3CmL/tNMy2GFVwJpbRmvgvSgvdhCbdMuvA5C60+q rXo0to6cFWrM1DteVVJs0q+hiTo20HURl8KUPiblcvtw2fNHNhnXlw4N4GfzAUJ2Ir46MRxqrYvL 2y6ro+G5uZwoijYXkqtri24vB0HVtV+V/y0WEnarbm6obfTLBdgG4IhgVdnU2PdGPV5iUFN4RhpF 
TVlp4dDMKkubMMB1lsHs86J3XugwwTDQXUzj6h9aKaqwUFVUjB4CZ6Cc6q7lj4o/4z0tj9z6M0Ei d4d0fiutlkpgb1sLGdBph71ErI8vsbM82kMaW6WbPWIdSisH6tpX+JuY0yGncxZqrpGOGfDR4/pT PbMzthcBWFUMJIwkHU6+DSrp3ERKSqGYUguRY2B3j2yHbRv6ukeT8YsXfVcK2TDckBOOMFOGyfs6 wizSP4v2MX5QB9KYnkR0ybxXPUlBoR7Hl+S2fZ31Up2Ph0oM+IVNU+dM69X7638lwZY6W6T2lwH1 9FXTvY/mvrDhlkyqbTAuqDOWiEboe38Yz/GuQBcUUW+TfobdnRMu++RFZqiv3e6LJE5RppYGXTfN mpFVNC/o1EP5RlRP8o3pVyK2kuVDmohEvVOSbjS8+/ZK7bRGEn1lMJ/bUxfTEHXrIT+UjFE2LgWN DRg67xMMiNRhzdhl2aFvU/fogZYdVEfHKygvMwMbVXKs3QuHeksjm4hEkeggQvfajmyqWKj7iFZ4 Hh1o7ce7fKNSNZM1aYBjzN+ONH2cK6vHSTqWRI2Qcjqn0iSGx1JS1Dm/W/INaenRvPREb7zHG3/e sDvu6kZ3tohmTQfgykPSYbTj/QvRF61fEPxReQ7phZiUV0CkcJr6GW+LeGczO/ukHzw/6BFv4xjt VFlK73opCOpJmJeBFFSVVizn8h5vHJSM0zExtxPW7VYXT3lyge+eBIvYv7AOiQRe/8nEQrcmFuIr vQ4GCfQi5wXE8CS47ZC8PIZEiriUBlK/j0MJ5+V3t5iwKArAlYwNvHRCqRl+cdv1QbBd6Cazn/03 YG4huTLTJgYH3U0afbmpE4lzYbsW2UadGCynEdT5ucA7E/USo5U9ktKXzOkMXEOoA1a6/yBBhEpe +DVW16vMHWuzP3uXA709vppX7gus5PMywZf4VGTBMw4CcHsS9rDSIElBvanTB4qU1BG7ww0E3Z0Y fKMOkG4EETK4Yg6Eag7AR5isdxSgj1dJMM+IiBzfkKR7MsBPIplanwYPni1o+4DotD6wrWg0rnDm Xx7RiV9cVgf3O1R9UFvo+5CKoeqqvQHQjLeXJl0OgD7cdhmHEcsg0zADGPWzzaSrc2Al8rQQqzSI V6brYd3573m8M0OYR4++y1PzjUCpit6NBgsZ8QrK3STUa/hO0tC1JG5F+OskIN6lw17R99//l0qL 4jQH+VF9BgS++M8XL5zsL9tEWvYGqdL+Ll35INAdCFYj+12aXft2m5nsv1n4cs6+d1iERobzhQwB w8Uc8bycjdYlcV4RTIQtCQUY2XO5Pt8QaagwjwNIRX04duoyQHQvDkujgRHedAD9RZoDJCCYYSJO 2NTNacMgSArpkgvg6ky4M1vUXZIHZol95vW0zhn3iKTzz9EmipG4z6DBtQGScrwD4qyMNd7ZELCl c9UnAMY72NkJQNN8dUz2f3HlV6koTs6A+xkU3BfDYpsuVPcK+bErGoRslay3ISjhVPsWfLUQL3uJ 3vtK7gtcoX6j2YYA+vtT9zKHfSsVvGmgX4I1MYt13ZrSvOXTFWO6PPa9o7Oy8mqaGZqKCCt+Q5/n pY4Y4w/HMrSp6h6YO9E1e29e3/0BQzTko0L2rlGpy+s3h7oR+RXG1gsnaXIIN07NNCi8poIL2DVr wbQUs3tcfo8jKpaqQyeINIVwOk61B06I6Lahfmc7ekdQhEZqV6CAIp4kK4XD1ruGYLyAWjfLwGU2 POR092YZ1A22/hpwBQS54W2my3N7x3Unsmpp0iO0cWI2vRiu5c7CU6yfBU+h1lygW+CdxI5s76Zi gJlMwx+4XE4/fXgztSQaykfv6Cr6zT8LgEkN3lylwKxvoJb2+t64YusdaEHNTeamd+QK3SSyJfBH 5xydUXHsom4L4HjiqpERP2lQzsExHrmRbDXq+tS/J0A++4rXBw1lVMr8ewZLX01V/+fkq0z+RWhj v95TzzCGLxmf8kbgsVK6Doi12oragasV8mG10i+8dxkwcQcm/A9nRa43 """) ##file activate.sh ACTIVATE_SH = convert(""" eJytVVFvokAQfudXTLEPtTlLeo9tvMSmJpq02hSvl7u2wRUG2QR2DSxSe7n/frOACEVNLlceRHa+ nfl25pvZDswCnoDPQ4QoTRQsENIEPci4CsBMZBq7CAsuLOYqvmYKTTj3YxnBgiXBudGBjUzBZUJI BXEqgCvweIyuCjeG4eF2F5x14bcB9KQiQQWrjSddI1/oQIx6SYYeoFjzWIoIhYI1izlbhJjkKO7D M/QEmKfO9O7WeRo/zr4P7pyHwWxkwitcgwpQ5Ej96OX+PmiFwLeVjFUOrNYKaq1Nud3nR2n8nI2m k9H0friPTGVsUdptaxGrTEfpNVFEskxpXtUkkCkl1UNF9cgLBkx48J4EXyALuBtAwNYIjF5kcmUU abMKmMq1ULoiRbgsDEkTSsKSGFCJ6Z8vY/2xYiSacmtyAfCDdCNTVZoVF8vSTQOoEwSnOrngBkws MYGMBMg8/bMBLSYKS7pYEXP0PqT+ZmBT0Xuy+Pplj5yn4aM9nk72JD8/Wi+Gr98sD9eWSMOwkapD BbUv91XSvmyVkICt2tmXR4tWmrcUCsjWOpw87YidEC8i0gdTSOFhouJUNxR+4NYBG0MftoCTD9F7 2rTtxG3oPwY1b2HncYwhrlmj6Wq924xtGDWqfdNxap+OYxplEurnMVo9RWks+rH8qKEtx7kZT5zJ 4H7oOFclrN6uFe+d+nW2aIUsSgs/42EIPuOhXq+jEo3S6tX6w2ilNkDnIpHCWdEQhFgwj9pkk7FN l/y5eQvRSIQ5+TrL05lewxWpt/Lbhes5cJF3mLET1MGhcKCF+40tNWnUulxrpojwDo2sObdje3Bz N3QeHqf3D7OjEXMVV8LN3ZlvuzoWHqiUcNKHtwNd0IbvPGKYYM31nPKCgkUILw3KL+Y8l7aO1ArS Ad37nIU0fCj5NE5gQCuC5sOSu+UdI2NeXg/lFkQIlFpdWVaWZRfvqGiirC9o6liJ9FXGYrSY9mI1 D/Ncozgn13vJvsznr7DnkJWXsyMH7e42ljdJ+aqNDF1bFnKWFLdj31xtaJYK6EXFgqmV/ymD/ROG +n8O9H8f5vsGOWXsL1+1k3g= """) ##file activate.fish ACTIVATE_FISH = convert(""" eJyVVWFv2jAQ/c6vuBoqQVWC9nVSNVGVCaS2VC2rNLWVZZILWAs2s52wVvvxsyEJDrjbmgpK7PP5 3bt3d22YLbmGlGcIq1wbmCPkGhPYcLMEEsGciwGLDS+YwSjlekngLFVyBe73GXSXxqw/DwbuTS8x yyKpFr1WG15lDjETQhpQuQBuIOEKY5O9tlppLqxHKSDByjVAPwEy+mXtCq5MzjIUBTCRgEKTKwFG 
gpBqxTLYXgN2myspVigMaYF92tZSowGZJf4mFExxNs9Qb614CgZtmH0BpEOn11f0cXI/+za8pnfD 2ZjA1sg9zlV/8QvcMhxbNu0QwgYokn/d+n02nt6Opzcjcnx1vXcIoN74O4ymWQXmHURfJw9jenc/ vbmb0enj6P5+cuVhqlKm3S0u2XRtRbA2QQAhV7VhBF0rsgUX9Ur1rBUXJgVSy8O751k8mzY5OrKH RW3eaQhYGTr8hrXO59ALhxQ83mCsDLAid3T72CCSdJhaFE+fXgicXAARUiR2WeVO37gH3oYHzFKo 9k7CaPZ1UeNwH1tWuXA4uFKYYcEa8vaKqXl7q1UpygMPhFLvlVKyNzsSM3S2km7UBOl4xweUXk5u 6e3wZmQ9leY1XE/Ili670tr9g/5POBBpGIJXCCF79L1siarl/dbESa8mD8PL61GpzqpzuMS7tqeB 1YkALrRBloBMbR9yLcVx7frQAgUqR7NZIuzkEu110gbNit1enNs82Rx5utq7Z3prU78HFRgulqNC OTwbqJa9vkJFclQgZSjbKeBgSsUtCtt9D8OwAbIVJuewQdfvQRaoFE9wd1TmCuRG7OgJ1bVXGHc7 z5WDL/WW36v2oi37CyVBak61+yPBA9C1qqGxzKQqZ0oPuocU9hpud0PIp8sDHkXR1HKkNlzjuUWA a0enFUyzOWZA4yXGP+ZMI3Tdt2OuqU/SO4q64526cPE0A7ZyW2PMbWZiZ5HamIZ2RcCKLXhcDl2b vXL+eccQoRzem80mekPDEiyiWK4GWqZmwxQOmPM0eIfgp1P9cqrBsewR2p/DPMtt+pfcYM+Ls2uh hALufTAdmGl8B1H3VPd2af8fQAc4PgqjlIBL9cGQqNpXaAwe3LrtVn8AkZTUxg== """) ##file activate.csh ACTIVATE_CSH = convert(""" eJx9VG1P2zAQ/u5fcYQKNgTNPtN1WxlIQ4KCUEGaxuQ6yYVYSuzKdhqVX7+zk3bpy5YPUXL3PPfc ne98DLNCWshliVDV1kGCUFvMoJGugMjq2qQIiVSxSJ1cCofD1BYRnOVGV0CfZ0N2DD91DalQSjsw tQLpIJMGU1euvPe7QeJlkKzgWixlhnAt4aoUVsLnLBiy5NtbJWQ5THX1ZciYKKWwkOFaE04dUm6D r/zh7pq/3D7Nnid3/HEy+wFHY/gEJydg0aFaQrBFgz1c5DG1IhTs+UZgsBC2GMFBlaeH+8dZXwcW VPvCjXdlAvCfQsE7al0+07XjZvrSCUevR5dnkVeKlFYZmUztG4BdzL2u9KyLVabTU0bdfg7a0hgs cSmUg6UwUiQl2iHrcbcVGNvPCiLOe7+cRwG13z9qRGgx2z6DHjfm/Op2yqeT+xvOLzs0PTKHDz2V tkckFHoQfQRXoGJAj9el0FyJCmEMhzgMS4sB7KPOE2ExoLcSieYwDvR+cP8cg11gKkVJc2wRcm1g QhYFlXiTaTfO2ki0fQoiFM4tLuO4aZrhOzqR4dIPcWx17hphMBY+Srwh7RTyN83XOWkcSPh1Pg/k TXX/jbJTbMtUmcxZ+/bbqOsy82suFQg/BhdSOTRhMNBHlUarCpU7JzBhmkKmRejKOQzayQe6MWoa n1wqWmuh6LZAaHxcdeqIlVLhIBJdO9/kbl0It2oEXQj+eGjJOuvOIR/YGRqvFhttUB2XTvLXYN2H 37CBdbW2W7j2r2+VsCn0doVWcFG1/4y1VwBjfwAyoZhD """) ##file activate.bat ACTIVATE_BAT = convert(""" eJx9UdEKgjAUfW6wfxjiIH+hEDKUFHSKLCMI7kNOEkIf9P9pTJ3OLJ/03HPPPed4Es9XS9qqwqgT PbGKKOdXL4aAFS7A4gvAwgijuiKlqOpGlATS2NeMLE+TjJM9RkQ+SmqAXLrBo1LLIeLdiWlD6jZt r7VNubWkndkXaxg5GO3UaOOKS6drO3luDDiO5my3iA0YAKGzPRV1ack8cOdhysI0CYzIPzjSiH5X 0QcvC8Lfaj0emsVKYF2rhL5L3fCkVjV76kShi59NHwDniAHzkgDgqBcwOgTMx+gDQQqXCw== """) ##file deactivate.bat DEACTIVATE_BAT = convert(""" eJxzSE3OyFfIT0vj4spMU0hJTcvMS01RiPf3cYkP8wwKCXX0iQ8I8vcNCFHQ4FIAguLUEgUliIit KhZlqkpcnCA1WKRsuTTxWBIZ4uHv5+Hv64piEVwU3TK4BNBCmHIcKvDb6xjigWIjkI9uF1AIu7dA akGGW7n6uXABALCXXUI= """) ##file activate.ps1 ACTIVATE_PS = convert(""" eJylWdmS40Z2fVeE/oHT6rCloNUEAXDThB6wAyQAEjsB29GBjdgXYiWgmC/zgz/Jv+AEWNVd3S2N xuOKYEUxM+/Jmzfvcm7W//zXf/+wUMOoXtyi1F9kbd0sHH/hFc2iLtrK9b3FrSqyxaVQwr8uhqJd uHaeg9mqzRdR8/13Pyy8qPLdJh0+LMhi0QCoXxYfFh9WtttEnd34H8p6/f1300KauwrULws39e18 0ZaLNm9rgN/ZVf3h++/e124Vlc0vKsspHy+Yyi5+XbzPhijvCtduoiL/kA1ukWV27n0o7Sb8LIFj CvWR5GQgUJdp1Pw8TS9+rPy6SDv/+e3d+0+4qw8f3v20+PliV37efEYBAB9FTKC+RHn/Cfxn3rdv 00Fube5O+iyCtHDs9BfPfz3q4sfFv9d91Ljhfy7ei0VO+nVTtdOkv/jpt0l2AX6iG1jXgKnnDuD4 ke2k/i8fzzz5UedkVcP4pwF+Wvz2FJl+3vt598urXf5Y6LNA5WcFOP7r0sW7b9a+W/xcu0Xpv5zk Kfq3P9Dz9di/fCxS72MXVU1rpx9L4Bxl85Wmn5a+zP76Zuh3pL9ROWr87PN+//GHIl+oOtvn9XSU qH+p0gQBFnx1uV+JLH5O5zv+PXW+WepXVVHZT0+oQezkIATcIm+ivPV/z5J/+cYj3ir4w0Lx09vC e5n/y5/Y5LPPfdrqb88ga/PabxZRVfmp39l588m/6u+/e+OpP+dF7n1WZpJ9//Z4v372fDDz9eHB 7Juvs/BLMHzrxL9+9twXpJfhd1/DrpQ5Euu/vlss3wp9HXC/54C/Ld69m6zwdx3tC0d8daSv0V8B n4b9YYF53sJelJV/ix6LZspw/sJtqyl5LJ5r/23htA1Imfm/gt9R7dqVB1LjhydAX4Gb+zksQF59 9+P7H//U+376afFuvh2/T6P85Xr/5c8C6OXyFY4BGuN+EE0+GeR201b+wkkLN5mmBY5TfMw8ngqL CztXxCSXKMCYrRIElWkEJlEPYsSOeKBVZCAQTKBhApMwRFQzmCThE0YQu2CdEhgjbgmk9GluHpfR 
/hhwJCZhGI5jt5FsAkOrObVyE6g2y1snyhMGFlDY1x+BoHpCMulTj5JYWNAYJmnKpvLxXgmQ8az1 4fUGxxcitMbbhDFcsiAItg04E+OSBIHTUYD1HI4FHH4kMREPknuYRMyhh3AARWMkfhCketqD1CWJ mTCo/nhUScoQcInB1hpFhIKoIXLo5jLpwFCgsnLCx1QlEMlz/iFEGqzH3vWYcpRcThgWnEKm0QcS rA8ek2a2IYYeowUanOZOlrbWSJUC4c7y2EMI3uJPMnMF/SSXdk6E495VLhzkWHps0rOhKwqk+xBI DhJirhdUCTamMfXz2Hy303hM4DFJ8QL21BcPBULR+gcdYxoeiDqOFSqpi5B5PUISfGg46gFZBPo4 jdh8lueaWuVSMTURfbAUnLINr/QYuuYoMQV6l1aWxuZVTjlaLC14UzqZ+ziTGDzJzhiYoPLrt3uI tXkVR47kAo09lo5BD76CH51cTt1snVpMOttLhY93yxChCQPI4OBecS7++h4p4Bdn4H97bJongtPk s9gQnXku1vzsjjmX4/o4YUDkXkjHwDg5FXozU0fW4y5kyeYW0uJWlh536BKr0kMGjtzTkng6Ep62 uTWnQtiIqKnEsx7e1hLtzlXs7Upw9TwEnp0t9yzCGgUJIZConx9OHJArLkRYW0dW42G9OeR5Nzwk yk1mX7du5RGHT7dka7N3AznmSif7y6tuKe2N1Al/1TUPRqH6E2GLVc27h9IptMLkCKQYRqPQJgzV 2m6WLsSipS3v3b1/WmXEYY1meLEVIU/arOGVkyie7ZsH05ZKpjFW4cpY0YkjySpSExNG2TS8nnJx nrQmWh2WY3cP1eISP9wbaVK35ZXc60yC3VN/j9n7UFoK6zvjSTE2+Pvz6Mx322rnftfP8Y0XKIdv Qd7AfK0nexBTMqRiErvCMa3Hegpfjdh58glW2oNMsKeAX8x6YJLZs9K8/ozjJkWL+JmECMvhQ54x 9rsTHwcoGrDi6Y4I+H7yY4/rJVPAbYymUH7C2D3uiUS3KQ1nrCAUkE1dJMneDQIJMQQx5SONxoEO OEn1/Ig1eBBUeEDRuOT2WGGGE4bNypBLFh2PeIg3bEbg44PHiqNDbGIQm50LW6MJU62JHCGBrmc9 2F7WBJrrj1ssnTAK4sxwRgh5LLblhwNAclv3Gd+jC/etCfyfR8TMhcWQz8TBIbG8IIyAQ81w2n/C mHWAwRzxd3WoBY7BZnsqGOWrOCKwGkMMNfO0Kci/joZgEocLjNnzgcmdehPHJY0FudXgsr+v44TB I3jnMGnsK5veAhgi9iXGifkHMOC09Rh9cAw9sQ0asl6wKMk8mpzFYaaDSgG4F0wisQDDBRpjCINg FIxhlhQ31xdSkkk6odXZFpTYOQpOOgw9ugM2cDQ+2MYa7JsEirGBrOuxsQy5nPMRdYjsTJ/j1iNw FeSt1jY2+dd5yx1/pzZMOQXUIDcXeAzR7QlDRM8AMkUldXOmGmvYXPABjxqkYKO7VAY6JRU7kpXr +Epu2BU3qFFXClFi27784LrDZsJwbNlDw0JzhZ6M0SMXE4iBHehCpHVkrQhpTFn2dsvsZYkiPEEB GSEAwdiur9LS1U6P2U9JhGp4hnFpJo4FfkdJHcwV6Q5dV1Q9uNeeu7rV8PAjwdFg9RLtroifOr0k uOiRTo/obNPhQIf42Fr4mtThWoSjitEdAmFW66UCe8WFjPk1YVNpL9srFbond7jrLg8tqAasIMpy zkH0SY/6zVAwJrEc14zt14YRXdY+fcJ4qOd2XKB0/Kghw1ovd11t2o+zjt+txndo1ZDZ2T+uMVHT VSXhedBAHoJIID9xm6wPQI3cXY+HR7vxtrJuCKh6kbXaW5KkVeJsdsjqsYsOwYSh0w5sMbu7LF8J 5T7U6LJdiTx+ca7RKlulGgS5Z1JSU2Llt32cHFipkaurtBrvNX5UtvNZjkufZ/r1/XyLl6yOpytL Km8Fn+y4wkhlqZP5db0rooqy7xdL4wxzFVTX+6HaxuQJK5E5B1neSSovZ9ALB8091dDbbjVxhWNY Ve5hn1VnI9OF0wpvaRm7SZuC1IRczwC7GnkhPt3muHV1YxUJfo+uh1sYnJy+vI0ZwuPV2uqWJYUH bmBsi1zmFSxHrqwA+WIzLrHkwW4r+bad7xbOzJCnKIa3S3YvrzEBK1Dc0emzJW+SqysQfdEDorQG 9ZJlbQzEHQV8naPaF440YXzJk/7vHGK2xwuP+Gc5xITxyiP+WQ4x18oXHjFzCBy9kir1EFTAm0Zq LYwS8MpiGhtfxiBRDXpxDWxk9g9Q2fzPPAhS6VFDAc/aiNGatUkPtZIStZFQ1qD0IlJa/5ZPAi5J ySp1ETDomZMnvgiysZSBfMikrSDte/K5lqV6iwC5q7YN9I1dBZXUytDJNqU74MJsUyNNLAPopWK3 tzmLkCiDyl7WQnj9sm7Kd5kzgpoccdNeMw/6zPVB3pUwMgi4C7hj4AMFAf4G27oXH8NNT9zll/sK S6wVlQwazjxWKWy20ZzXb9ne8ngGalPBWSUSj9xkc1drsXkZ8oOyvYT3e0rnYsGwx85xZB9wKeKg cJKZnamYwiaMymZvzk6wtDUkxmdUg0mPad0YHtvzpjEfp2iMxvORhnx0kCVLf5Qa43WJsVoyfEyI pzmf8ruM6xBr7dnBgzyxpqXuUPYaKahOaz1LrxNkS/Q3Ae5AC+xl6NbxAqXXlzghZBZHmOrM6Y6Y ctAkltwlF7SKEsShjVh7QHuxMU0a08/eiu3x3M+07OijMcKFFltByXrpk8w+JNnZpnp3CfgjV1Ax gUYCnWwYow42I5wHCcTzLXK0hMZN2DrPM/zCSqe9jRSlJnr70BPE4+zrwbk/xVIDHy2FAQyHoomT Tt5jiM68nBQut35Y0qLclLiQrutxt/c0OlSqXAC8VrxW97lGoRWzhOnifE2zbF05W4xuyhg7JTUL aqJ7SWDywhjlal0b+NLTpERBgnPW0+Nw99X2Ws72gOL27iER9jgzj7Uu09JaZ3n+hmCjjvZpjNst vOWWTbuLrg+/1ltX8WpPauEDEvcunIgTxuMEHweWKCx2KQ9DU/UKdO/3za4Szm2iHYL+ss9AAttm gZHq2pkUXFbV+FiJCKrpBms18zH75vax5jSo7FNunrVWY3Chvd8KKnHdaTt/6ealwaA1x17yTlft 8VBle3nAE+7R0MScC3MJofNCCkA9PGKBgGMYEwfB2QO5j8zUqa8F/EkWKCzGQJ5EZ05HTly1B01E z813G5BY++RZ2sxbQS8ZveGPJNabp5kXAeoign6Tlt5+L8i5ZquY9+S+KEUHkmYMRFBxRrHnbl2X rVemKnG+oB1yd9+zT+4c43jQ0wWmQRR6mTCkY1q3VG05Y120ZzKOMBe6Vy7I5Vz4ygPB3yY4G0FP 8RxiMx985YJPXsgRU58EuHj75gygTzejP+W/zKGe78UQN3yOJ1aMQV9hFH+GAfLRsza84WlPLAI/ 
9G/5JdcHftEfH+Y3/fHUG7/o8bv98dzzy3e8S+XCvgqB+VUf7sH0yDHpONdbRE8tAg9NWOzcTJ7q TuAxe/AJ07c1Rs9okJvl1/0G60qvbdDzz5zO0FuPFQIHNp9y9Bd1CufYVx7dB26mAxwa8GMNrN/U oGbNZ3EQ7inLzHy5tRg9AXJrN8cB59cCUBeCiVO7zKM0jU0MamhnRThkg/NMmBOGb6StNeD9tDfA 7czsAWopDdnGoXUHtA+s/k0vNPkBcxEI13jVd/axp85va3LpwGggXXWw12Gwr/JGAH0b8CPboiZd QO1l0mk/UHukud4C+w5uRoNzpCmoW6GbgbMyaQNkga2pQINB18lOXOCJzSWPFOhZcwzdgrsQnne7 nvjBi+7cP2BbtBeDOW5uOLGf3z94FasKIguOqJl+8ss/6Kumns4cuWbqq5592TN/RNIbn5Qo6qbi O4F0P9txxPAwagqPlftztO8cWBzdN/jz3b7GD6JHYP/Zp4ToAMaA74M+EGSft3hEGMuf8EwjnTk/ nz/P7SLipB/ogQ6xNX0fDqNncMCfHqGLCMM0ZzFa+6lPJYQ5p81vW4HkCvidYf6kb+P/oB965g8K C6uR0rdjX1DNKc5pOSTquI8uQ6KXxYaKBn+30/09tK4kMpJPgUIQkbENEPbuezNPPje2Um83SgyX GTCJb6MnGVIpgncdQg1qz2bvPfxYD9fewCXDomx9S+HQJuX6W3VAL+v5WZMudRQZk9ZdOk6GIUtC PqEb/uwSIrtR7/edzqgEdtpEwq7p2J5OQV+RLrmtTvFwFpf03M/VrRyTZ73qVod7v7Jh2Dwe5J25 JqFOU2qEu1sP+CRotklediycKfLjeIZzjJQsvKmiGSNQhxuJpKa+hoWUizaE1PuIRGzJqropwgVB oo1hr870MZLgnXF5ZIpr6mF0L8aSy2gVnTAuoB4WEd4d5NPVC9TMotYXERKlTcwQ2KiB/C48AEfH Qbyq4CN8xTFnTvf/ebOc3isnjD95s0QF0nx9s+y+zMmz782xL0SgEmRpA3x1w1Ff9/74xcxKEPdS IEFTz6GgU0+BK/UZ5Gwbl4gZwycxEw+Kqa5QmMkh4OzgzEVPnDAiAOGBFaBW4wkDmj1G4RyElKgj NlLCq8zsp085MNh/+R4t1Q8yxoSv8PUpTt7izZwf2BTHZZ3pIZpUIpuLkL1nNL6sYcHqcKm237wp T2+RCjgXweXd2Zp7ZM8W6dG5bZsqo0nrJBTx8EC0+CQQdzEGnabTnkzofu1pYkWl4E7XSniECdxy vLYavPMcL9LW5SToJFNnos+uqweOHriUZ1ntIYZUonc7ltEQ6oTRtwOHNwez2sVREskHN+bqG3ua eaEbJ8XpyO8CeD9QJc8nbLP2C2R3A437ISUNyt5Yd0TbDNcl11/DSsOzdbi/VhCC0KE6v1vqVNkq 45ZnG6fiV2NwzInxCNth3BwL0+8814jE6+1W1EeWtpWbSZJOJNYXmWRXa7vLnAljE692eHjZ4y5u y1u63De0IzKca7As48Z3XshVF+3XiLNz0JIMh/JOpbiNLlMi672uO0wYzOCZjRxcxj3D+gVenGIE MvFUGGXuRps2RzMcgWIRolHXpGUP6sMsQt1hspUBnVKUn/WQj2u6j3SXd9Xz0QtEzoM7qTu5y7gR q9gNNsrlEMLdikBt9bFvBnfbUIh6voTw7eDsyTmPKUvF0bHqWLbHe3VRHyRZnNeSGKsB73q66Vsk taxWYmwz1tYVFG/vOQhlM0gUkyvIab3nv2caJ1udU1F3pDMty7stubTE4OJqm0i0ECfrJIkLtraC HwRWKzlqpfhEIqYH09eT9WrOhQyt8YEoyBlnXtAT37WHIQ03TIuEHbnRxZDdLun0iok9PUC79prU m5beZzfQUelEXnhzb/pIROKx3F7qCttYIFGh5dXNzFzID7u8vKykA8Uejf7XXz//S4nKvW//ofS/ QastYw== """) ##file distutils-init.py DISTUTILS_INIT = convert(""" eJytV1uL4zYUfvevOE0ottuMW9q3gVDa3aUMXXbLMlDKMBiNrSTqOJKRlMxkf33PkXyRbGe7Dw2E UXTu37lpxLFV2oIyifAncxmOL0xLIfcG+gv80x9VW6maw7o/CANSWWBwFtqeWMPlGY6qPjV8A0bB C4eKSTgZ5LRgFeyErMEeOBhbN+Ipgeizhjtnhkn7DdyjuNLPoCS0l/ayQTG0djwZC08cLXozeMss aG5EzQ0IScpnWtHSTXuxByV/QCmxE7y+eS0uxWeoheaVVfqSJHiU7Mhhi6gULbOHorshkrEnKxpT 0n3A8Y8SMpuwZx6aoix3ouFlmW8gHRSkeSJ2g7hU+kiHLDaQw3bmRDaTGfTnty7gPm0FHbIBg9U9 oh1kZzAFLaue2R6htPCtAda2nGlDSUJ4PZBgCJBGVcwKTAMz/vJiLD+Oin5Z5QlvDPdulC6EsiyE NFzb7McNTKJzbJqzphx92VKRFY1idenzmq3K0emRcbWBD0ryqc4NZGmKOOOX9Pz5x+/l27tP797c f/z0d+4NruGNai8uAM0bfsYaw8itFk8ny41jsfpyO+BWlpqfhcG4yxLdi/0tQqoT4a8Vby382mt8 p7XSo7aWGdPBc+b6utaBmCQ7rQKQoWtAuthQCiold2KfJIPTT8xwg9blPumc+YDZC/wYGdAyHpJk vUbHbHWAp5No6pK/WhhLEWrFjUwtPEv1Agf8YmnsuXUQYkeZoHm8ogP16gt2uHoxcEMdf2C6pmbw hUMsWGhanboh4IzzmsIpWs134jVPqD/c74bZHdY69UKKSn/+KfVhxLgUlToemayLMYQOqfEC61bh cbhwaqoGUzIyZRFHPmau5juaWqwRn3mpWmoEA5nhzS5gog/5jbcFQqOZvmBasZtwYlG93k5GEiyw buHhMWLjDarEGpMGB2LFs5nIJkhp/nUmZneFaRth++lieJtHepIvKgx6PJqIlD9X2j6pG1i9x3pZ 5bHuCPFiirGHeO7McvoXkz786GaKVzC9DSpnOxJdc4xm6NSVq7lNEnKdVlnpu9BNYoKX2Iq3wvgh gGEUM66kK6j4NiyoneuPLSwaCWDxczgaolEWpiMyDVDb7dNuLAbriL8ig8mmeju31oNvQdpnvEPC 1vAXbWacGRVrGt/uXN/gU0CDDwgooKRrHfTBb1/s9lYZ8ZqOBU0yLvpuP6+K9hLFsvIjeNhBi0KL MlOuWRn3FRwx5oHXjl0YImUx0+gLzjGchrgzca026ETmYJzPD+IpuKzNi8AFn048Thd63OdD86M6 84zE8yQm0VqXdbbgvub2pKVnS76icBGdeTHHXTKspUmr4NYo/furFLKiMdQzFjHJNcdAnMhltBJK 0/IKX3DVFqvPJ2dLE7bDBkH0l/PJ29074+F0CsGYOxsb7U3myTUncYfXqnLLfa6sJybX4g+hmcjO 
kMRBfA1JellfRRKJcyRpxdS4rIl6FdmQCWjo/o9Qz7yKffoP4JHjOvABcRn4CZIT2RH4jnxmfpVG qgLaAvQBNfuO6X0/Ux02nb4FKx3vgP+XnkX0QW9pLy/NsXgdN24dD3LxO2Nwil7Zlc1dqtP3d7/h kzp1/+7hGBuY4pk0XD/0Ao/oTe/XGrfyM773aB7iUhgkpy+dwAMalxMP0DrBcsVw/6p25+/hobP9 GBknrWExDhLJ1bwt1NcCNblaFbMKCyvmX0PeRaQ= """) ##file distutils.cfg DISTUTILS_CFG = convert(""" eJxNj00KwkAMhfc9xYNuxe4Ft57AjYiUtDO1wXSmNJnK3N5pdSEEAu8nH6lxHVlRhtDHMPATA4uH xJ4EFmGbvfJiicSHFRzUSISMY6hq3GLCRLnIvSTnEefN0FIjw5tF0Hkk9Q5dRunBsVoyFi24aaLg 9FDOlL0FPGluf4QjcInLlxd6f6rqkgPu/5nHLg0cXCscXoozRrP51DRT3j9QNl99AP53T2Q= """) ##file activate_this.py ACTIVATE_THIS = convert(""" eJyNU01v2zAMvetXEB4K21jmDOstQA4dMGCHbeihlyEIDMWmG62yJEiKE//7kXKdpN2KzYBt8euR fKSyLPs8wiEo8wh4wqZTGou4V6Hm0wJa1cSiTkJdr8+GsoTRHuCotBayiWqQEYGtMCgfD1KjGYBe 5a3p0cRKiAe2NtLADikftnDco0ko/SFEVgEZ8aRC5GLux7i3BpSJ6J1H+i7A2CjiHq9z7JRZuuQq siwTIvpxJYCeuWaBpwZdhB+yxy/eWz+ZvVSU8C4E9FFZkyxFsvCT/ZzL8gcz9aXVE14Yyp2M+2W0 y7n5mp0qN+avKXvbsyyzUqjeWR8hjGE+2iCE1W1tQ82hsCZN9UzlJr+/e/iab8WfqsmPI6pWeUPd FrMsd4H/55poeO9n54COhUs+sZNEzNtg/wanpjpuqHJaxs76HtZryI/K3H7KJ/KDIhqcbJ7kI4ar XL+sMgXnX0D+Te2Iy5xdP8yueSlQB/x/ED2BTAtyE3K4SYUN6AMNfbO63f4lBW3bUJPbTL+mjSxS PyRfJkZRgj+VbFv+EzHFi5pKwUEepa4JslMnwkowSRCXI+m5XvEOvtuBrxHdhLalG0JofYBok6qj YdN2dEngUlbC4PG60M1WEN0piu7Nq7on0mgyyUw3iV1etLo6r/81biWdQ9MWHFaePWZYaq+nmp+t s3az+sj7eA0jfgPfeoN1 """) MH_MAGIC = 0xfeedface MH_CIGAM = 0xcefaedfe MH_MAGIC_64 = 0xfeedfacf MH_CIGAM_64 = 0xcffaedfe FAT_MAGIC = 0xcafebabe BIG_ENDIAN = '>' LITTLE_ENDIAN = '<' LC_LOAD_DYLIB = 0xc maxint = majver == 3 and getattr(sys, 'maxsize') or getattr(sys, 'maxint') class fileview(object): """ A proxy for file-like objects that exposes a given view of a file. Modified from macholib. """ def __init__(self, fileobj, start=0, size=maxint): if isinstance(fileobj, fileview): self._fileobj = fileobj._fileobj else: self._fileobj = fileobj self._start = start self._end = start + size self._pos = 0 def __repr__(self): return '' % ( self._start, self._end, self._fileobj) def tell(self): return self._pos def _checkwindow(self, seekto, op): if not (self._start <= seekto <= self._end): raise IOError("%s to offset %d is outside window [%d, %d]" % ( op, seekto, self._start, self._end)) def seek(self, offset, whence=0): seekto = offset if whence == os.SEEK_SET: seekto += self._start elif whence == os.SEEK_CUR: seekto += self._start + self._pos elif whence == os.SEEK_END: seekto += self._end else: raise IOError("Invalid whence argument to seek: %r" % (whence,)) self._checkwindow(seekto, 'seek') self._fileobj.seek(seekto) self._pos = seekto - self._start def write(self, bytes): here = self._start + self._pos self._checkwindow(here, 'write') self._checkwindow(here + len(bytes), 'write') self._fileobj.seek(here, os.SEEK_SET) self._fileobj.write(bytes) self._pos += len(bytes) def read(self, size=maxint): assert size >= 0 here = self._start + self._pos self._checkwindow(here, 'read') size = min(size, self._end - here) self._fileobj.seek(here, os.SEEK_SET) bytes = self._fileobj.read(size) self._pos += len(bytes) return bytes def read_data(file, endian, num=1): """ Read a given number of 32-bits unsigned integers from the given file with the given endianness. """ res = struct.unpack(endian + 'L' * num, file.read(num * 4)) if len(res) == 1: return res[0] return res def mach_o_change(path, what, value): """ Replace a given name (what) in any LC_LOAD_DYLIB command found in the given binary with a new name (value), provided it's shorter. 
""" def do_macho(file, bits, endian): # Read Mach-O header (the magic number is assumed read by the caller) cputype, cpusubtype, filetype, ncmds, sizeofcmds, flags = read_data(file, endian, 6) # 64-bits header has one more field. if bits == 64: read_data(file, endian) # The header is followed by ncmds commands for n in range(ncmds): where = file.tell() # Read command header cmd, cmdsize = read_data(file, endian, 2) if cmd == LC_LOAD_DYLIB: # The first data field in LC_LOAD_DYLIB commands is the # offset of the name, starting from the beginning of the # command. name_offset = read_data(file, endian) file.seek(where + name_offset, os.SEEK_SET) # Read the NUL terminated string load = file.read(cmdsize - name_offset).decode() load = load[:load.index('\0')] # If the string is what is being replaced, overwrite it. if load == what: file.seek(where + name_offset, os.SEEK_SET) file.write(value.encode() + '\0'.encode()) # Seek to the next command file.seek(where + cmdsize, os.SEEK_SET) def do_file(file, offset=0, size=maxint): file = fileview(file, offset, size) # Read magic number magic = read_data(file, BIG_ENDIAN) if magic == FAT_MAGIC: # Fat binaries contain nfat_arch Mach-O binaries nfat_arch = read_data(file, BIG_ENDIAN) for n in range(nfat_arch): # Read arch header cputype, cpusubtype, offset, size, align = read_data(file, BIG_ENDIAN, 5) do_file(file, offset, size) elif magic == MH_MAGIC: do_macho(file, 32, BIG_ENDIAN) elif magic == MH_CIGAM: do_macho(file, 32, LITTLE_ENDIAN) elif magic == MH_MAGIC_64: do_macho(file, 64, BIG_ENDIAN) elif magic == MH_CIGAM_64: do_macho(file, 64, LITTLE_ENDIAN) assert(len(what) >= len(value)) do_file(open(path, 'r+b')) if __name__ == '__main__': main() ## TODO: ## Copy python.exe.manifest ## Monkeypatch distutils.sysconfig