PySyncObj-0.3.14/.coveragerc

[report]
exclude_lines =
    # Have to re-enable the standard pragma
    pragma: no cover

    # Don't complain if tests don't hit defensive assertion code:
    raise NotImplementedError

show_missing = 1

[run]
omit = pysyncobj/win_inet_pton.py

PySyncObj-0.3.14/.github/workflows/tests.yaml

name: Tests
on:
  pull_request:
  push:
    branches:
      - master
    tags:
      - '*'
jobs:
  run_tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: run_tests
        run: >
          ls -la &&
          python3 -m pip install -U pytest &&
          python3 -m pytest -v -s test_syncobj.py

PySyncObj-0.3.14/.gitignore

*.pyc
.idea/
MANIFEST
dist/
*.bak
*.bin
build/
docs/build*
.DS_Store
.cache/
pysyncobj.egg-info/

PySyncObj-0.3.14/LICENSE.txt

The MIT License (MIT)

Copyright (c) 2016 Filipp Ozinov

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
PySyncObj-0.3.14/README.md

# PySyncObj

[![Build Status][tests-image]][tests] [![Windows Build Status][appveyor-image]][appveyor] [![Coverage Status][coverage-image]][coverage] [![Release][release-image]][releases] [![License][license-image]][license] [![gitter][gitter-image]][gitter] [![docs][docs-image]][docs]

[tests-image]: https://github.com/bakwc/PySyncObj/actions/workflows/tests.yaml/badge.svg
[tests]: https://github.com/bakwc/PySyncObj/actions/workflows/tests.yaml
[appveyor-image]: https://ci.appveyor.com/api/projects/status/github/bakwc/pysyncobj?branch=master&svg=true
[appveyor]: https://ci.appveyor.com/project/bakwc/pysyncobj
[coverage-image]: https://coveralls.io/repos/github/bakwc/PySyncObj/badge.svg?branch=master
[coverage]: https://coveralls.io/github/bakwc/PySyncObj?branch=master
[release-image]: https://img.shields.io/badge/release-0.3.14-blue.svg?style=flat
[releases]: https://github.com/bakwc/PySyncObj/releases
[license-image]: https://img.shields.io/badge/license-MIT-blue.svg?style=flat
[license]: LICENSE.txt
[gitter-image]: https://badges.gitter.im/bakwc/PySyncObj.svg
[gitter]: https://gitter.im/bakwc/PySyncObj?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
[docs-image]: https://readthedocs.org/projects/pysyncobj/badge/?version=latest
[docs]: http://pysyncobj.readthedocs.io/en/latest/

PySyncObj is a python library for building fault-tolerant distributed systems. It provides the ability to replicate your application data between multiple servers. It has the following features:

- [raft protocol](http://raft.github.io/) for leader election and log replication
- Log compaction - it uses fork for copy-on-write while serializing data to disk
- Dynamic membership changes - you can do it with the [syncobj_admin](https://github.com/bakwc/PySyncObj/wiki/syncobj_admin) utility or [directly from your code](https://github.com/bakwc/PySyncObj/wiki/Dynamic-membership-change)
- [Zero downtime deploy](https://github.com/bakwc/PySyncObj/wiki/Zero-downtime-deploy) - no need to stop the cluster to update nodes
- In-memory and on-disk serialization - you can use in-memory mode for small datasets and on-disk mode for large ones
- Encryption - you can set a password and use it on untrusted networks
- Python 2 and Python 3 on Linux, macOS and Windows - no required dependencies (only optional ones, e.g. cryptography)
- Configurable event loop - it can work in a separate thread with its own event loop, or you can call the onTick function inside your own loop
- Convenient interface - you can easily transform an arbitrary class into a replicated one (see the example below).

## Content

* [Install](#install)
* [Basic Usage](#usage)
* ["Batteries"](#batteries)
* [API Documentation](http://pysyncobj.readthedocs.io)
* [Performance](#performance)
* [Publications](#publications)

## Install

PySyncObj itself:

```bash
pip install pysyncobj
```

Cryptography for encryption (optional):

```bash
pip install cryptography
```

## Usage

Suppose you have a class that implements a counter:

```python
class MyCounter(object):
    def __init__(self):
        self.__counter = 0

    def incCounter(self):
        self.__counter += 1

    def getCounter(self):
        return self.__counter
```

To transform your class into a replicated one:

- Inherit it from SyncObj.
- Initialize SyncObj with a self address and a list of partner addresses. E.g. if you have `serverA`, `serverB` and `serverC` and want to use port 4321, you should use self address `serverA:4321` with partners `[serverB:4321, serverC:4321]` for your application running at `serverA`; self address `serverB:4321` with partners `[serverA:4321, serverC:4321]` for your application at `serverB`; and self address `serverC:4321` with partners `[serverA:4321, serverB:4321]` for the app at `serverC`.
- Mark all methods that modify your class fields with the `@replicated` decorator.

The final class will look like:

```python
class MyCounter(SyncObj):
    def __init__(self):
        super(MyCounter, self).__init__('serverA:4321', ['serverB:4321', 'serverC:4321'])
        self.__counter = 0

    @replicated
    def incCounter(self):
        self.__counter += 1

    def getCounter(self):
        return self.__counter
```

And that's all! Now you can call `incCounter` on `serverA` and check the counter value on `serverB` - they will be synchronized.
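If you want to try this end to end, here is a complete runnable sketch; the script name, addresses and polling loop are illustrative, not part of the library:

```python
# counter_node.py - start one copy per server, e.g.:
#   python counter_node.py serverA:4321 serverB:4321 serverC:4321
from __future__ import print_function
import sys
import time

from pysyncobj import SyncObj, replicated

class MyCounter(SyncObj):
    def __init__(self, selfAddr, partners):
        super(MyCounter, self).__init__(selfAddr, partners)
        self.__counter = 0

    @replicated
    def incCounter(self):
        self.__counter += 1

    def getCounter(self):
        return self.__counter

if __name__ == '__main__':
    counter = MyCounter(sys.argv[1], sys.argv[2:])
    while True:
        time.sleep(1.0)
        if counter._getLeader() is not None:  # increment only once the cluster has a leader
            counter.incCounter()
        print('counter:', counter.getCounter())
```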
## Batteries

If you just need some distributed data structures - try the built-in "batteries". A few examples:

### Counter & Dict

```python
from pysyncobj import SyncObj
from pysyncobj.batteries import ReplCounter, ReplDict

counter1 = ReplCounter()
counter2 = ReplCounter()
dict1 = ReplDict()

syncObj = SyncObj('serverA:4321', ['serverB:4321', 'serverC:4321'], consumers=[counter1, counter2, dict1])

counter1.set(42, sync=True)  # set the initial value to 42; 'sync' means the operation is blocking
counter1.add(10, sync=True)  # add 10 to the counter value
counter2.inc(sync=True)      # increment the counter value by one
dict1.set('testKey1', 'testValue1', sync=True)
dict1['testKey2'] = 'testValue2'  # basically the same as the previous line, but asynchronous (non-blocking)
print(counter1, counter2, dict1['testKey1'], dict1.get('testKey2'))
```

### Lock

```python
from pysyncobj import SyncObj
from pysyncobj.batteries import ReplLockManager

lockManager = ReplLockManager(autoUnlockTime=75)  # the lock will be released if the connection is dropped for more than 75 seconds
syncObj = SyncObj('serverA:4321', ['serverB:4321', 'serverC:4321'], consumers=[lockManager])
if lockManager.tryAcquire('testLockName', sync=True):
    # do some actions
    lockManager.release('testLockName')
```

You can look at the [batteries implementation](https://github.com/bakwc/PySyncObj/blob/master/pysyncobj/batteries.py), [examples](https://github.com/bakwc/PySyncObj/tree/master/examples) and [unit-tests](https://github.com/bakwc/PySyncObj/blob/master/test_syncobj.py) for more use-cases. There is also [API documentation](http://pysyncobj.readthedocs.io). Feel free to create proposals and/or pull requests with new batteries, features, etc. Join our [gitter chat](https://gitter.im/bakwc/PySyncObj) if you have any questions.
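One more configuration sketch - enabling encryption. The password value below is a placeholder (use your own shared secret, the same on every node), and the optional `cryptography` package must be installed:

```python
from pysyncobj import SyncObj, SyncObjConf

cfg = SyncObjConf(password='replaceWithYourSharedSecret')
syncObj = SyncObj('serverA:4321', ['serverB:4321', 'serverC:4321'], cfg)
```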
## Performance

![15K rps on 3 nodes; 14K rps on 7 nodes;](http://pastexen.com/i/Ge3lnrM1OY.png "RPS vs Cluster Size")
![22K rps on 10 byte requests; 5K rps on 20Kb requests;](http://pastexen.com/i/0RIsrKxJsV.png "RPS vs Request Size")

## Publications

- [Adventures in fault tolerant alerting with Python](https://blog.hostedgraphite.com/2017/05/05/adventures-in-fault-tolerant-alerting-with-python/)
- [Building a distributed system with PySyncObj](https://habrahabr.ru/company/wargaming/blog/301398/) (in Russian)

PySyncObj-0.3.14/appveyor.yml

environment:
  matrix:
    - PYTHON: "C:\\Python27"
#    - PYTHON: "C:\\Python34"
    - PYTHON: "C:\\Python35"
    - PYTHON: "C:\\Python38"

install:
  - "%PYTHON%\\python.exe -m pip install --upgrade pip"
  - "%PYTHON%\\python.exe -m pip install pytest"
  - "%PYTHON%\\python.exe -m pip install cryptography"

build: off

test_script:
  - "%PYTHON%\\python.exe -m pytest -v -l test_syncobj.py"

PySyncObj-0.3.14/benchmarks/benchmarks.py

from __future__ import print_function
import sys
import pickle
from functools import wraps
from subprocess import Popen, PIPE
import os

DEVNULL = open(os.devnull, 'wb')

START_PORT = 4321
MIN_RPS = 10
MAX_RPS = 40000


def memoize(fileName):
    def doMemoize(func):
        if os.path.exists(fileName):
            with open(fileName, 'rb') as f:  # pickle data is binary
                cache = pickle.load(f)
        else:
            cache = {}

        @wraps(func)
        def wrap(*args):
            if args not in cache:
                cache[args] = func(*args)
                with open(fileName, 'wb') as f:
                    pickle.dump(cache, f)
            return cache[args]
        return wrap
    return doMemoize


def singleBenchmark(requestsPerSecond, requestSize, numNodes, numNodesReadonly=0, delay=False):
    # integer division keeps the CLI argument an int on both Python 2 and 3
    rpsPerNode = requestsPerSecond // (numNodes + numNodesReadonly)
    cmd = [sys.executable, 'testobj_delay.py' if delay else 'testobj.py', str(rpsPerNode), str(requestSize)]
    #cmd = 'python2.7 -m cProfile -s time testobj.py %d %d' % (rpsPerNode, requestSize)
    processes = []
    allAddrs = []
    for i in range(numNodes):
        allAddrs.append('localhost:%d' % (START_PORT + i))

    for i in range(numNodes):
        addrs = list(allAddrs)
        selfAddr = addrs.pop(i)
        p = Popen(cmd + [selfAddr] + addrs, stdin=PIPE)
        processes.append(p)

    for i in range(numNodesReadonly):
        p = Popen(cmd + ['readonly'] + allAddrs, stdin=PIPE)
        processes.append(p)

    errRates = []
    for p in processes:
        p.communicate()
        errRates.append(float(p.returncode) / 100.0)

    avgRate = sum(errRates) / len(errRates)
    print('average success rate:', avgRate)

    if delay:
        return avgRate
    return avgRate >= 0.9


def doDetectMaxRps(requestSize, numNodes):
    a = MIN_RPS
    b = MAX_RPS
    numIt = 0
    while b - a > MIN_RPS:
        c = a + (b - a) // 2
        res = singleBenchmark(c, requestSize, numNodes)
        if res:
            a = c
        else:
            b = c
        print('subiteration %d, current max %d' % (numIt, a))
        numIt += 1
    return a


@memoize('maxRpsCache.bin')
def detectMaxRps(requestSize, numNodes):
    results = []
    for i in range(0, 5):
        res = doDetectMaxRps(requestSize, numNodes)
        print('iteration %d, current max %d' % (i, res))
        results.append(res)
    # median; an integer index is required under Python 3
    return sorted(results)[len(results) // 2]
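# Illustrative workflow (assumed invocation from the benchmarks/ directory):
#
#   python benchmarks.py rps
#
# detectMaxRps() binary-searches between MIN_RPS and MAX_RPS; each probe
# spawns one testobj.py process per node via singleBenchmark() and treats
# a success rate of at least 0.9 as "sustained". Results are memoized in
# maxRpsCache.bin, so re-running skips already-measured points.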
def printUsage():
    print('Usage: %s mode(delay/rps/custom)' % sys.argv[0])
    sys.exit(-1)


if __name__ == '__main__':
    if len(sys.argv) != 2:
        printUsage()

    mode = sys.argv[1]
    if mode == 'delay':
        print('Average delay:', singleBenchmark(50, 10, 5, delay=True))
    elif mode == 'rps':
        for i in range(10, 2100, 500):
            res = detectMaxRps(i, 3)
            print('request size: %d, rps: %d' % (i, int(res)))

        for i in range(3, 8):
            res = detectMaxRps(200, i)
            print('nodes number: %d, rps: %d' % (i, int(res)))
    elif mode == 'custom':
        singleBenchmark(25000, 10, 3)
    else:
        printUsage()

PySyncObj-0.3.14/benchmarks/testobj.py

from __future__ import print_function
import sys
import time
import random
from collections import defaultdict

sys.path.append("../")
from pysyncobj import SyncObj, replicated, SyncObjConf, FAIL_REASON

class TestObj(SyncObj):

    def __init__(self, selfNodeAddr, otherNodeAddrs):
        super(TestObj, self).__init__(selfNodeAddr, otherNodeAddrs)
        self.__appliedCommands = 0

    @replicated
    def testMethod(self, value):
        self.__appliedCommands += 1

    def getNumCommandsApplied(self):
        return self.__appliedCommands

_g_sent = 0
_g_success = 0
_g_error = 0
_g_errors = defaultdict(int)

def clbck(res, err):
    global _g_error, _g_success
    if err == FAIL_REASON.SUCCESS:
        _g_success += 1
    else:
        _g_error += 1
        _g_errors[err] += 1

def getRandStr(l):
    f = '%0' + str(l) + 'x'
    return f % random.randrange(16 ** l)


if __name__ == '__main__':
    if len(sys.argv) < 5:
        print('Usage: %s RPS requestSize selfHost:port partner1Host:port partner2Host:port ...' % sys.argv[0])
        sys.exit(-1)

    numCommands = int(sys.argv[1])
    cmdSize = int(sys.argv[2])

    selfAddr = sys.argv[3]
    if selfAddr == 'readonly':
        selfAddr = None
    partners = sys.argv[4:]

    maxCommandsQueueSize = int(0.9 * SyncObjConf().commandsQueueSize / len(partners))

    obj = TestObj(selfAddr, partners)

    while obj._getLeader() is None:
        time.sleep(0.5)

    time.sleep(4.0)

    startTime = time.time()

    while time.time() - startTime < 25.0:
        st = time.time()
        for i in range(0, numCommands):  # range instead of xrange - works on both Python 2 and 3
            obj.testMethod(getRandStr(cmdSize), callback=clbck)
            _g_sent += 1
        delta = time.time() - st
        assert delta <= 1.0
        time.sleep(1.0 - delta)

    time.sleep(4.0)

    successRate = float(_g_success) / float(_g_sent)
    print('SUCCESS RATE:', successRate)

    if successRate < 0.9:
        print('LOST RATE:', 1.0 - float(_g_success + _g_error) / float(_g_sent))

    print('ERRORS STATS: %d' % len(_g_errors))
    for err in _g_errors:
        print(err, float(_g_errors[err]) / float(_g_error))

    sys.exit(int(successRate * 100))
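# Example invocation (ports are illustrative) - run one process per node:
#   python testobj.py 100 10 localhost:4321 localhost:4322 localhost:4323
#   python testobj.py 100 10 localhost:4322 localhost:4321 localhost:4323
#   python testobj.py 100 10 localhost:4323 localhost:4321 localhost:4322
# Each process sends 100 requests per second with 10-byte payloads for ~25
# seconds and exits with int(successRate * 100), which benchmarks.py reads
# back from the return code.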
PySyncObj-0.3.14/benchmarks/testobj_delay.py

from __future__ import print_function
import sys
import time
import random
from collections import defaultdict

sys.path.append("../")
from pysyncobj import SyncObj, replicated, SyncObjConf, FAIL_REASON

class TestObj(SyncObj):

    def __init__(self, selfNodeAddr, otherNodeAddrs):
        cfg = SyncObjConf(
            appendEntriesUseBatch=False,
        )
        super(TestObj, self).__init__(selfNodeAddr, otherNodeAddrs, cfg)
        self.__appliedCommands = 0

    @replicated
    def testMethod(self, val, callTime):
        self.__appliedCommands += 1
        return (callTime, time.time())

    def getNumCommandsApplied(self):
        return self.__appliedCommands

_g_sent = 0
_g_success = 0
_g_error = 0
_g_errors = defaultdict(int)
_g_delays = []

def clbck(res, err):
    global _g_error, _g_success, _g_delays
    if err == FAIL_REASON.SUCCESS:
        _g_success += 1
        callTime, recvTime = res
        delay = time.time() - callTime
        _g_delays.append(delay)
    else:
        _g_error += 1
        _g_errors[err] += 1

def getRandStr(l):
    f = '%0' + str(l) + 'x'
    return f % random.randrange(16 ** l)


if __name__ == '__main__':
    if len(sys.argv) < 5:
        print('Usage: %s RPS requestSize selfHost:port partner1Host:port partner2Host:port ...' % sys.argv[0])
        sys.exit(-1)

    numCommands = int(sys.argv[1])
    cmdSize = int(sys.argv[2])

    selfAddr = sys.argv[3]
    if selfAddr == 'readonly':
        selfAddr = None
    partners = sys.argv[4:]

    maxCommandsQueueSize = int(0.9 * SyncObjConf().commandsQueueSize / len(partners))

    obj = TestObj(selfAddr, partners)

    while obj._getLeader() is None:
        time.sleep(0.5)

    time.sleep(4.0)

    startTime = time.time()

    while time.time() - startTime < 25.0:
        st = time.time()
        for i in range(0, numCommands):  # range instead of xrange - works on both Python 2 and 3
            obj.testMethod(getRandStr(cmdSize), time.time(), callback=clbck)
            _g_sent += 1
        delta = time.time() - st
        assert delta <= 1.0
        time.sleep(1.0 - delta)

    time.sleep(4.0)

    successRate = float(_g_success) / float(_g_sent)
    print('SUCCESS RATE:', successRate)

    delays = sorted(_g_delays)
    # median of the sorted delays; an integer index is required under Python 3
    avgDelay = delays[len(delays) // 2]
    print('AVG DELAY:', avgDelay)

    if successRate < 0.9:
        print('LOST RATE:', 1.0 - float(_g_success + _g_error) / float(_g_sent))

    print('ERRORS STATS: %d' % len(_g_errors))
    for err in _g_errors:
        print(err, float(_g_errors[err]) / float(_g_error))

    sys.exit(int(avgDelay * 100))

PySyncObj-0.3.14/docs/Makefile

# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = build

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source

.PHONY: help
help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  applehelp  to make an Apple Help Book"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  epub3      to make an epub3"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  xml        to make Docutils-native XML files"
	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"
	@echo "  coverage   to run coverage check of the documentation (if enabled)"
	@echo "  dummy      to check syntax errors of document sources"

.PHONY: clean
clean:
	rm -rf $(BUILDDIR)/*

.PHONY: html
html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

.PHONY: dirhtml
dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. 
The HTML pages are in $(BUILDDIR)/dirhtml." .PHONY: singlehtml singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." .PHONY: pickle pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." .PHONY: json json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." .PHONY: htmlhelp htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." .PHONY: qthelp qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/PySyncObj.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/PySyncObj.qhc" .PHONY: applehelp applehelp: $(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp @echo @echo "Build finished. The help book is in $(BUILDDIR)/applehelp." @echo "N.B. You won't be able to view it unless you put it in" \ "~/Library/Documentation/Help or install it in your application" \ "bundle." .PHONY: devhelp devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/PySyncObj" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/PySyncObj" @echo "# devhelp" .PHONY: epub epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." .PHONY: epub3 epub3: $(SPHINXBUILD) -b epub3 $(ALLSPHINXOPTS) $(BUILDDIR)/epub3 @echo @echo "Build finished. The epub3 file is in $(BUILDDIR)/epub3." .PHONY: latex latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." .PHONY: latexpdf latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." .PHONY: latexpdfja latexpdfja: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through platex and dvipdfmx..." $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." .PHONY: text text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." .PHONY: man man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." .PHONY: texinfo texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." .PHONY: info info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." 
make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." .PHONY: gettext gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." .PHONY: changes changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." .PHONY: linkcheck linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." .PHONY: doctest doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." .PHONY: coverage coverage: $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage @echo "Testing of coverage in the sources finished, look at the " \ "results in $(BUILDDIR)/coverage/python.txt." .PHONY: xml xml: $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml @echo @echo "Build finished. The XML files are in $(BUILDDIR)/xml." .PHONY: pseudoxml pseudoxml: $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml @echo @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." .PHONY: dummy dummy: $(SPHINXBUILD) -b dummy $(ALLSPHINXOPTS) $(BUILDDIR)/dummy @echo @echo "Build finished. Dummy builder generates no files." PySyncObj-0.3.14/docs/make.bat000066400000000000000000000171031475533247400160340ustar00rootroot00000000000000@ECHO OFF REM Command file for Sphinx documentation if "%SPHINXBUILD%" == "" ( set SPHINXBUILD=sphinx-build ) set BUILDDIR=build set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% source set I18NSPHINXOPTS=%SPHINXOPTS% source if NOT "%PAPER%" == "" ( set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% ) if "%1" == "" goto help if "%1" == "help" ( :help echo.Please use `make ^` where ^ is one of echo. html to make standalone HTML files echo. dirhtml to make HTML files named index.html in directories echo. singlehtml to make a single large HTML file echo. pickle to make pickle files echo. json to make JSON files echo. htmlhelp to make HTML files and a HTML help project echo. qthelp to make HTML files and a qthelp project echo. devhelp to make HTML files and a Devhelp project echo. epub to make an epub echo. epub3 to make an epub3 echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter echo. text to make text files echo. man to make manual pages echo. texinfo to make Texinfo files echo. gettext to make PO message catalogs echo. changes to make an overview over all changed/added/deprecated items echo. xml to make Docutils-native XML files echo. pseudoxml to make pseudoxml-XML files for display purposes echo. linkcheck to check all external links for integrity echo. doctest to run all doctests embedded in the documentation if enabled echo. coverage to run coverage check of the documentation if enabled echo. dummy to check syntax errors of document sources goto end ) if "%1" == "clean" ( for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i del /q /s %BUILDDIR%\* goto end ) REM Check if sphinx-build is available and fallback to Python version if any %SPHINXBUILD% 1>NUL 2>NUL if errorlevel 9009 goto sphinx_python goto sphinx_ok :sphinx_python set SPHINXBUILD=python -m sphinx.__init__ %SPHINXBUILD% 2> nul if errorlevel 9009 ( echo. 
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx echo.installed, then set the SPHINXBUILD environment variable to point echo.to the full path of the 'sphinx-build' executable. Alternatively you echo.may add the Sphinx directory to PATH. echo. echo.If you don't have Sphinx installed, grab it from echo.http://sphinx-doc.org/ exit /b 1 ) :sphinx_ok if "%1" == "html" ( %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/html. goto end ) if "%1" == "dirhtml" ( %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. goto end ) if "%1" == "singlehtml" ( %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. goto end ) if "%1" == "pickle" ( %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the pickle files. goto end ) if "%1" == "json" ( %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the JSON files. goto end ) if "%1" == "htmlhelp" ( %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run HTML Help Workshop with the ^ .hhp project file in %BUILDDIR%/htmlhelp. goto end ) if "%1" == "qthelp" ( %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run "qcollectiongenerator" with the ^ .qhcp project file in %BUILDDIR%/qthelp, like this: echo.^> qcollectiongenerator %BUILDDIR%\qthelp\PySyncObj.qhcp echo.To view the help file: echo.^> assistant -collectionFile %BUILDDIR%\qthelp\PySyncObj.ghc goto end ) if "%1" == "devhelp" ( %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp if errorlevel 1 exit /b 1 echo. echo.Build finished. goto end ) if "%1" == "epub" ( %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub if errorlevel 1 exit /b 1 echo. echo.Build finished. The epub file is in %BUILDDIR%/epub. goto end ) if "%1" == "epub3" ( %SPHINXBUILD% -b epub3 %ALLSPHINXOPTS% %BUILDDIR%/epub3 if errorlevel 1 exit /b 1 echo. echo.Build finished. The epub3 file is in %BUILDDIR%/epub3. goto end ) if "%1" == "latex" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex if errorlevel 1 exit /b 1 echo. echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. goto end ) if "%1" == "latexpdf" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex cd %BUILDDIR%/latex make all-pdf cd %~dp0 echo. echo.Build finished; the PDF files are in %BUILDDIR%/latex. goto end ) if "%1" == "latexpdfja" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex cd %BUILDDIR%/latex make all-pdf-ja cd %~dp0 echo. echo.Build finished; the PDF files are in %BUILDDIR%/latex. goto end ) if "%1" == "text" ( %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text if errorlevel 1 exit /b 1 echo. echo.Build finished. The text files are in %BUILDDIR%/text. goto end ) if "%1" == "man" ( %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man if errorlevel 1 exit /b 1 echo. echo.Build finished. The manual pages are in %BUILDDIR%/man. goto end ) if "%1" == "texinfo" ( %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo if errorlevel 1 exit /b 1 echo. echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. 
goto end ) if "%1" == "gettext" ( %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale if errorlevel 1 exit /b 1 echo. echo.Build finished. The message catalogs are in %BUILDDIR%/locale. goto end ) if "%1" == "changes" ( %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes if errorlevel 1 exit /b 1 echo. echo.The overview file is in %BUILDDIR%/changes. goto end ) if "%1" == "linkcheck" ( %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck if errorlevel 1 exit /b 1 echo. echo.Link check complete; look for any errors in the above output ^ or in %BUILDDIR%/linkcheck/output.txt. goto end ) if "%1" == "doctest" ( %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest if errorlevel 1 exit /b 1 echo. echo.Testing of doctests in the sources finished, look at the ^ results in %BUILDDIR%/doctest/output.txt. goto end ) if "%1" == "coverage" ( %SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage if errorlevel 1 exit /b 1 echo. echo.Testing of coverage in the sources finished, look at the ^ results in %BUILDDIR%/coverage/python.txt. goto end ) if "%1" == "xml" ( %SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml if errorlevel 1 exit /b 1 echo. echo.Build finished. The XML files are in %BUILDDIR%/xml. goto end ) if "%1" == "pseudoxml" ( %SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml if errorlevel 1 exit /b 1 echo. echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml. goto end ) if "%1" == "dummy" ( %SPHINXBUILD% -b dummy %ALLSPHINXOPTS% %BUILDDIR%/dummy if errorlevel 1 exit /b 1 echo. echo.Build finished. Dummy builder generates no files. goto end ) :end PySyncObj-0.3.14/docs/source/000077500000000000000000000000001475533247400157255ustar00rootroot00000000000000PySyncObj-0.3.14/docs/source/batteries.rst000066400000000000000000000012241475533247400204400ustar00rootroot00000000000000pysyncobj.batteries package =========================== ReplCounter ----------- .. autoclass:: pysyncobj.batteries.ReplCounter :members: ReplList -------- .. autoclass:: pysyncobj.batteries.ReplList :members: ReplDict -------- .. autoclass:: pysyncobj.batteries.ReplDict :members: ReplSet ------- .. autoclass:: pysyncobj.batteries.ReplSet :members: ReplQueue --------- .. autoclass:: pysyncobj.batteries.ReplQueue :members: ReplPriorityQueue ----------------- .. autoclass:: pysyncobj.batteries.ReplPriorityQueue :members: ReplLockManager --------------- .. autoclass:: pysyncobj.batteries.ReplLockManager :members: PySyncObj-0.3.14/docs/source/conf.py000066400000000000000000000233201475533247400172240ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # PySyncObj documentation build configuration file, created by # sphinx-quickstart on Sat Sep 17 17:25:17 2016. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # import os import sys import sphinx_rtd_theme sys.path.insert(0, os.path.abspath('../..')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. 
# # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'sphinx.ext.autodoc', ] html_theme = "sphinx_rtd_theme" html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix(es) of source filenames. # You can specify multiple suffix as a list of string: # # source_suffix = ['.rst', '.md'] source_suffix = '.rst' # The encoding of source files. # # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'PySyncObj' copyright = u'2021, Filipp Ozinov' author = u'Filipp Ozinov' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = u'0.3.14' # The full version, including alpha/beta/rc tags. release = u'0.3.14' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # # This is also used if you do content translation via gettext catalogs. # Usually you set "language" from the command line for these cases. language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # # today = '' # # Else, today_fmt is used as the format for a strftime call. # # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. # This patterns also effect to html_static_path and html_extra_path exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. # # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. # keep_warnings = False # If true, `todo` and `todoList` produce output, else they produce nothing. todo_include_todos = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. # #html_theme = 'alabaster' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. # " v documentation" by default. # # html_title = u'PySyncObj v0.2.3' # A shorter title for the navigation bar. Default is the same as html_title. # # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. 
# # html_logo = None # The name of an image file (relative to this directory) to use as a favicon of # the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # # html_extra_path = [] # If not None, a 'Last updated on:' timestamp is inserted at every page # bottom, using the given strftime format. # The empty string is equivalent to '%b %d, %Y'. # # html_last_updated_fmt = None # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # # html_additional_pages = {} # If false, no module index is generated. # # html_domain_indices = True # If false, no index is generated. # # html_use_index = True # If true, the index is split into individual pages for each letter. # # html_split_index = False # If true, links to the reST sources are added to the pages. # # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Language to be used for generating the HTML full-text search index. # Sphinx supports the following languages: # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja' # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh' # # html_search_language = 'en' # A dictionary with options for the search language support, empty by default. # 'ja' uses this config value. # 'zh' user can custom change `jieba` dictionary path. # # html_search_options = {'type': 'default'} # The name of a javascript file (relative to the configuration directory) that # implements a search results scorer. If empty, the default will be used. # # html_search_scorer = 'scorer.js' # Output file base name for HTML help builder. htmlhelp_basename = 'PySyncObjdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # # 'preamble': '', # Latex figure (float) alignment # # 'figure_align': 'htbp', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). 
latex_documents = [
    (master_doc, 'PySyncObj.tex', u'PySyncObj Documentation',
     u'Filipp Ozinov', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
#
# latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#
# latex_use_parts = False

# If true, show page references after internal links.
#
# latex_show_pagerefs = False

# If true, show URL addresses after external links.
#
# latex_show_urls = False

# Documents to append as an appendix to all manuals.
#
# latex_appendices = []

# If false, will not define \strong, \code, \titleref, \crossref ... but only
# \sphinxstrong, ..., \sphinxtitleref, ... To help avoid clash with user added
# packages.
#
# latex_keep_old_macro_names = True

# If false, no module index is generated.
#
# latex_domain_indices = True


# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (master_doc, 'pysyncobj', u'PySyncObj Documentation',
     [author], 1)
]

# If true, show URL addresses after external links.
#
# man_show_urls = False


# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    (master_doc, 'PySyncObj', u'PySyncObj Documentation',
     author, 'PySyncObj', 'One line description of project.',
     'Miscellaneous'),
]

# Documents to append as an appendix to all manuals.
#
# texinfo_appendices = []

# If false, no module index is generated.
#
# texinfo_domain_indices = True

# How to display URL addresses: 'footnote', 'no', or 'inline'.
#
# texinfo_show_urls = 'footnote'

# If true, do not generate a @detailmenu in the "Top" node's menu.
#
# texinfo_no_detailmenu = False

autoclass_content = "both"

PySyncObj-0.3.14/docs/source/index.rst

PySyncObj API documentation
===========================

* The code is available on GitHub at `bakwc/PySyncObj`_

.. _bakwc/PySyncObj: https://github.com/bakwc/PySyncObj

Contents:

.. toctree::
   :maxdepth: 2

   pysyncobj
   batteries

Indices and tables
==================

* :ref:`genindex`
* :ref:`search`

PySyncObj-0.3.14/docs/source/pysyncobj.rst

pysyncobj package
=================

SyncObj
-------

.. autoclass:: pysyncobj.SyncObj
   :members:

replicated
----------

.. autofunction:: pysyncobj.replicated

replicated_sync
---------------

.. autofunction:: pysyncobj.replicated_sync

SyncObjConf
-----------

.. autoclass:: pysyncobj.SyncObjConf
   :members:

FAIL_REASON
-----------

.. autoclass:: pysyncobj.FAIL_REASON
   :members:

SERIALIZER_STATE
----------------

.. autoclass:: pysyncobj.SERIALIZER_STATE
   :members:
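Example
-------

A minimal usage sketch (the class name and addresses below are illustrative,
not part of the API):

.. code-block:: python

    from pysyncobj import SyncObj, replicated

    class MyCounter(SyncObj):
        def __init__(self):
            super(MyCounter, self).__init__('serverA:4321', ['serverB:4321', 'serverC:4321'])
            self.__counter = 0

        @replicated
        def incCounter(self):
            self.__counter += 1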
PySyncObj-0.3.14/examples/counter.py

#!/usr/bin/env python
from __future__ import print_function

import sys
import time
from functools import partial

sys.path.append("../")
from pysyncobj import SyncObj, replicated

class TestObj(SyncObj):

    def __init__(self, selfNodeAddr, otherNodeAddrs):
        super(TestObj, self).__init__(selfNodeAddr, otherNodeAddrs)
        self.__counter = 0

    @replicated
    def incCounter(self):
        self.__counter += 1
        return self.__counter

    @replicated
    def addValue(self, value, cn):
        self.__counter += value
        return self.__counter, cn

    def getCounter(self):
        return self.__counter

def onAdd(res, err, cnt):
    print('onAdd %d:' % cnt, res, err)

if __name__ == '__main__':
    if len(sys.argv) < 3:
        print('Usage: %s self_port partner1_port partner2_port ...' % sys.argv[0])
        sys.exit(-1)

    port = int(sys.argv[1])
    partners = ['localhost:%d' % int(p) for p in sys.argv[2:]]

    o = TestObj('localhost:%d' % port, partners)
    n = 0
    old_value = -1
    while True:
        # time.sleep(0.005)
        time.sleep(0.5)
        if o.getCounter() != old_value:
            old_value = o.getCounter()
            print(old_value)
        if o._getLeader() is None:
            continue
        # if n < 2000:
        if n < 20:
            o.addValue(10, n, callback=partial(onAdd, cnt=n))
            n += 1
        # if n % 200 == 0:
        # if True:
        #     print('Counter value:', o.getCounter(), o._getLeader(), o._getRaftLogSize(), o._getLastCommitIndex())

PySyncObj-0.3.14/examples/kvstorage.py

#!/usr/bin/env python
from __future__ import print_function
import sys
sys.path.append("../")
from pysyncobj import SyncObj, SyncObjConf, replicated

class KVStorage(SyncObj):
    def __init__(self, selfAddress, partnerAddrs):
        cfg = SyncObjConf(dynamicMembershipChange=True)
        super(KVStorage, self).__init__(selfAddress, partnerAddrs, cfg)
        self.__data = {}

    @replicated
    def set(self, key, value):
        self.__data[key] = value

    @replicated
    def pop(self, key):
        self.__data.pop(key, None)

    def get(self, key):
        return self.__data.get(key, None)

_g_kvstorage = None

def main():
    if len(sys.argv) < 2:
        print('Usage: %s selfHost:port partner1Host:port partner2Host:port ...' % sys.argv[0])
        sys.exit(-1)

    selfAddr = sys.argv[1]
    if selfAddr == 'readonly':
        selfAddr = None
    partners = sys.argv[2:]

    global _g_kvstorage
    _g_kvstorage = KVStorage(selfAddr, partners)

    def get_input(v):
        if sys.version_info >= (3, 0):
            return input(v)
        else:
            return raw_input(v)

    while True:
        cmd = get_input(">> ").split()
        if not cmd:
            continue
        elif cmd[0] == 'set':
            _g_kvstorage.set(cmd[1], cmd[2])
        elif cmd[0] == 'get':
            print(_g_kvstorage.get(cmd[1]))
        elif cmd[0] == 'pop':
            print(_g_kvstorage.pop(cmd[1]))
        else:
            print('Wrong command')

if __name__ == '__main__':
    main()
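# Example session (assumed ports; start one instance per node):
#   python kvstorage.py localhost:4321 localhost:4322 localhost:4323
#   >> set foo bar
#   >> get foo
#   bar
# Writes go through the Raft log, so a key set on one node becomes readable
# on the other nodes once the command is committed.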
PySyncObj-0.3.14/examples/kvstorage_http.py

#!/usr/bin/env python
from __future__ import print_function
import sys
try:
    from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
except ImportError:
    from http.server import BaseHTTPRequestHandler, HTTPServer

sys.path.append("../")
from pysyncobj import SyncObj, SyncObjConf, replicated

class KVStorage(SyncObj):
    def __init__(self, selfAddress, partnerAddrs, dumpFile):
        conf = SyncObjConf(
            fullDumpFile=dumpFile,
        )
        super(KVStorage, self).__init__(selfAddress, partnerAddrs, conf)
        self.__data = {}

    @replicated
    def set(self, key, value):
        self.__data[key] = value

    @replicated
    def pop(self, key):
        self.__data.pop(key, None)

    def get(self, key):
        return self.__data.get(key, None)

_g_kvstorage = None

class KVRequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        try:
            value = _g_kvstorage.get(self.path)
            if value is None:
                self.send_response(404)
                self.send_header("Content-type", "text/plain")
                self.end_headers()
                return
            self.send_response(200)
            self.send_header("Content-type", "text/plain")
            self.end_headers()
            self.wfile.write(value.encode('utf-8'))
        except:  # ignore malformed or aborted requests
            pass

    def do_POST(self):
        try:
            key = self.path
            value = self.rfile.read(int(self.headers.get('content-length'))).decode('utf-8')
            _g_kvstorage.set(key, value)
            self.send_response(201)
            self.send_header("Content-type", "text/plain")
            self.end_headers()
        except:  # ignore malformed or aborted requests
            pass

def main():
    if len(sys.argv) < 5:
        print('Usage: %s http_port dump_file.bin selfHost:port partner1Host:port partner2Host:port ...' % sys.argv[0])
        sys.exit(-1)

    httpPort = int(sys.argv[1])
    dumpFile = sys.argv[2]
    selfAddr = sys.argv[3]
    partners = sys.argv[4:]

    global _g_kvstorage
    _g_kvstorage = KVStorage(selfAddr, partners, dumpFile)
    httpServer = HTTPServer(('', httpPort), KVRequestHandler)
    httpServer.serve_forever()

if __name__ == '__main__':
    main()
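# Example (assumed ports) - start three nodes, then talk HTTP to any of them:
#   python kvstorage_http.py 8001 dump1.bin localhost:4321 localhost:4322 localhost:4323
#   python kvstorage_http.py 8002 dump2.bin localhost:4322 localhost:4321 localhost:4323
#   python kvstorage_http.py 8003 dump3.bin localhost:4323 localhost:4321 localhost:4322
#   curl -X POST -d 'value1' http://localhost:8001/key1
#   curl http://localhost:8002/key1   # -> value1, once the write is committed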
PySyncObj-0.3.14/examples/lock.py

#!/usr/bin/env python
from __future__ import print_function

import sys
import threading
import weakref
import time

sys.path.append("../")
from pysyncobj import SyncObj, replicated

class LockImpl(SyncObj):
    def __init__(self, selfAddress, partnerAddrs, autoUnlockTime):
        super(LockImpl, self).__init__(selfAddress, partnerAddrs)
        self.__locks = {}
        self.__autoUnlockTime = autoUnlockTime

    @replicated
    def acquire(self, lockPath, clientID, currentTime):
        existingLock = self.__locks.get(lockPath, None)
        # Auto-unlock old lock
        if existingLock is not None:
            if currentTime - existingLock[1] > self.__autoUnlockTime:
                existingLock = None
        # Acquire lock if possible
        if existingLock is None or existingLock[0] == clientID:
            self.__locks[lockPath] = (clientID, currentTime)
            return True
        # Lock already acquired by someone else
        return False

    @replicated
    def ping(self, clientID, currentTime):
        # iterate over a copy of the keys - entries may be deleted while iterating
        for lockPath in list(self.__locks):
            lockClientID, lockTime = self.__locks[lockPath]
            if currentTime - lockTime > self.__autoUnlockTime:
                del self.__locks[lockPath]
                continue
            if lockClientID == clientID:
                self.__locks[lockPath] = (clientID, currentTime)

    @replicated
    def release(self, lockPath, clientID):
        existingLock = self.__locks.get(lockPath, None)
        if existingLock is not None and existingLock[0] == clientID:
            del self.__locks[lockPath]

    def isAcquired(self, lockPath, clientID, currentTime):
        existingLock = self.__locks.get(lockPath, None)
        if existingLock is not None:
            if existingLock[0] == clientID:
                if currentTime - existingLock[1] < self.__autoUnlockTime:
                    return True
        return False


class Lock(object):
    def __init__(self, selfAddress, partnerAddrs, autoUnlockTime):
        self.__lockImpl = LockImpl(selfAddress, partnerAddrs, autoUnlockTime)
        self.__selfID = selfAddress
        self.__autoUnlockTime = autoUnlockTime
        self.__mainThread = threading.current_thread()
        self.__initialised = threading.Event()
        self.__thread = threading.Thread(target=Lock._autoAcquireThread, args=(weakref.proxy(self),))
        self.__thread.start()
        while not self.__initialised.is_set():
            pass

    def _autoAcquireThread(self):
        self.__initialised.set()
        try:
            while True:
                if not self.__mainThread.is_alive():
                    break
                time.sleep(float(self.__autoUnlockTime) / 4.0)
                if self.__lockImpl._getLeader() is not None:
                    self.__lockImpl.ping(self.__selfID, time.time())
        except ReferenceError:
            pass

    def tryAcquireLock(self, path):
        self.__lockImpl.acquire(path, self.__selfID, time.time())

    def isAcquired(self, path):
        return self.__lockImpl.isAcquired(path, self.__selfID, time.time())

    def release(self, path):
        self.__lockImpl.release(path, self.__selfID)

    def printStatus(self):
        self.__lockImpl._printStatus()


def printHelp():
    print('')
    print(' Available commands:')
    print('')
    print('help                print this help')
    print('check lockPath      check if the lock at lockPath is acquired or released')
    print('acquire lockPath    try to acquire the lock at lockPath')
    print('release lockPath    try to release the lock at lockPath')
    print('')
    print('')


def main():
    if len(sys.argv) < 3:
        print('Usage: %s selfHost:port partner1Host:port partner2Host:port ...' % sys.argv[0])
        sys.exit(-1)

    selfAddr = sys.argv[1]
    partners = sys.argv[2:]

    lock = Lock(selfAddr, partners, 10.0)

    def get_input(v):
        if sys.version_info >= (3, 0):
            return input(v)
        else:
            return raw_input(v)

    printHelp()
    while True:
        cmd = get_input(">> ").split()
        if not cmd:
            continue
        elif cmd[0] == 'help':
            printHelp()
        elif cmd[0] == 'check':
            print('acquired' if lock.isAcquired(cmd[1]) else 'released')
        elif cmd[0] == 'acquire':
            lock.tryAcquireLock(cmd[1])
            time.sleep(1.5)
            print('acquired' if lock.isAcquired(cmd[1]) else 'failed')
        elif cmd[0] == 'release':
            lock.release(cmd[1])
            time.sleep(1.5)
            print('acquired' if lock.isAcquired(cmd[1]) else 'released')

if __name__ == '__main__':
    main()

PySyncObj-0.3.14/pysyncobj/__init__.py

from .syncobj import SyncObj, SyncObjException, SyncObjConf, replicated, replicated_sync,\
    FAIL_REASON, _COMMAND_TYPE, createJournal, HAS_CRYPTO, SERIALIZER_STATE, SyncObjConsumer, _RAFT_STATE
from .utility import TcpUtility

PySyncObj-0.3.14/pysyncobj/atomic_replace.py

import os
import sys
import ctypes

if hasattr(ctypes, 'windll'):  # pragma: no cover
    CreateTransaction = ctypes.windll.ktmw32.CreateTransaction
    CommitTransaction = ctypes.windll.ktmw32.CommitTransaction
    MoveFileTransacted = ctypes.windll.kernel32.MoveFileTransactedW
    CloseHandle = ctypes.windll.kernel32.CloseHandle
    MOVEFILE_REPLACE_EXISTING = 0x1
    MOVEFILE_WRITE_THROUGH = 0x8

    if sys.version_info >= (3, 0):
        unicode = str

    def atomicReplace(oldPath, newPath):
        if not isinstance(oldPath, unicode):
            oldPath = unicode(oldPath, sys.getfilesystemencoding())
        if not isinstance(newPath, unicode):
            newPath = unicode(newPath, sys.getfilesystemencoding())
        ta = CreateTransaction(None, 0, 0, 0, 0, 1000, 'atomic_replace')
        if ta == -1:
            return False
        res = MoveFileTransacted(oldPath, newPath, None, None, MOVEFILE_REPLACE_EXISTING | MOVEFILE_WRITE_THROUGH, ta)
        if not res:
            CloseHandle(ta)
            return False
        res = CommitTransaction(ta)
        CloseHandle(ta)
        return bool(res)
else:
    atomicReplace = os.rename
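# Illustrative use of atomicReplace (file names below are placeholders):
# write the new serialized state to a temporary file first, then swap it in
# atomically so readers never observe a half-written dump:
#
#   atomicReplace('dump.bin.tmp', 'dump.bin')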
""" super(ReplCounter, self).__init__() self.__counter = int() @replicated def set(self, newValue): """ Set new value to a counter. :param newValue: new value :return: new counter value """ self.__counter = newValue return self.__counter @replicated def add(self, value): """ Adds value to a counter. :param value: value to add :return: new counter value """ self.__counter += value return self.__counter @replicated def sub(self, value): """ Subtracts a value from counter. :param value: value to subtract :return: new counter value """ self.__counter -= value return self.__counter @replicated def inc(self): """ Increments counter value by one. :return: new counter value """ self.__counter += 1 return self.__counter def get(self): """ :return: current counter value """ return self.__counter class ReplList(SyncObjConsumer): def __init__(self): """ Distributed list - it has an interface similar to a regular list. """ super(ReplList, self).__init__() self.__data = [] @replicated def reset(self, newData): """Replace list with a new one""" assert isinstance(newData, list) self.__data = newData @replicated def set(self, position, newValue): """Update value at given position.""" self.__data[position] = newValue @replicated def append(self, item): """Append item to end""" self.__data.append(item) @replicated def extend(self, other): """Extend list by appending elements from the iterable""" self.__data.extend(other) @replicated def insert(self, position, element): """Insert object before position""" self.__data.insert(position, element) @replicated def remove(self, element): """ Remove first occurrence of element. Raises ValueError if the value is not present. """ self.__data.remove(element) @replicated def pop(self, position=None): """ Remove and return item at position (default last). Raises IndexError if list is empty or index is out of range. """ return self.__data.pop(position) @replicated def sort(self, reverse=False): """Stable sort *IN PLACE*""" self.__data.sort(reverse=reverse) def index(self, element): """ Return first position of element. Raises ValueError if the value is not present. """ return self.__data.index(element) def count(self, element): """ Return number of occurrences of element """ return self.__data.count(element) def get(self, position): """ Return value at given position""" return self.__data[position] def __getitem__(self, position): """ Return value at given position""" return self.__data[position] @replicated(ver=1) def __setitem__(self, position, element): """Update value at given position.""" self.__data[position] = element def __len__(self): """Return the number of items of a sequence or collection.""" return len(self.__data) def rawData(self): """Return internal list - use it carefully""" return self.__data class ReplDict(SyncObjConsumer): def __init__(self): """ Distributed dict - it has an interface similar to a regular dict. 
""" super(ReplDict, self).__init__() self.__data = {} @replicated def reset(self, newData): """Replace dict with a new one""" assert isinstance(newData, dict) self.__data = newData @replicated def __setitem__(self, key, value): """Set value for specified key""" self.__data[key] = value @replicated def set(self, key, value): """Set value for specified key""" self.__data[key] = value @replicated def setdefault(self, key, default): """Return value for specified key, set default value if key not exist""" return self.__data.setdefault(key, default) @replicated def update(self, other): """Adds all values from the other dict""" self.__data.update(other) @replicated def pop(self, key, default=None): """Remove and return value for given key, return default if key not exist""" return self.__data.pop(key, default) @replicated def clear(self): """Remove all items from dict""" self.__data.clear() def __getitem__(self, key): """Return value for given key""" return self.__data[key] def get(self, key, default=None): """Return value for given key, return default if key not exist""" return self.__data.get(key, default) def __len__(self): """Return size of dict""" return len(self.__data) def __contains__(self, key): """True if key exists""" return key in self.__data def keys(self): """Return all keys""" return self.__data.keys() def values(self): """Return all values""" return self.__data.values() def items(self): """Return all items""" return self.__data.items() def rawData(self): """Return internal dict - use it carefully""" return self.__data class ReplSet(SyncObjConsumer): def __init__(self): """ Distributed set - it has an interface similar to a regular set. """ super(ReplSet, self).__init__() self.__data = set() @replicated def reset(self, newData): """Replace set with a new one""" assert isinstance(newData, set) self.__data = newData @replicated def add(self, item): """Add an element to a set""" self.__data.add(item) @replicated def remove(self, item): """ Remove an element from a set; it must be a member. If the element is not a member, raise a KeyError. """ self.__data.remove(item) @replicated def discard(self, item): """ Remove an element from a set if it is a member. If the element is not a member, do nothing. """ self.__data.discard(item) @replicated def pop(self): """ Remove and return an arbitrary set element. Raises KeyError if the set is empty. """ return self.__data.pop() @replicated def clear(self): """ Remove all elements from this set. """ self.__data.clear() @replicated def update(self, other): """ Update a set with the union of itself and others. """ self.__data.update(other) def rawData(self): """Return internal dict - use it carefully""" return self.__data def __len__(self): """Return size of set""" return len(self.__data) def __contains__(self, item): """True if item exists""" return item in self.__data class ReplQueue(SyncObjConsumer): def __init__(self, maxsize=0): """ Replicated FIFO queue. Based on collections.deque. Has an interface similar to Queue. :param maxsize: Max queue size. :type maxsize: int """ super(ReplQueue, self).__init__() self.__maxsize = maxsize self.__data = collections.deque() def qsize(self): """Return size of queue""" return len(self.__data) def empty(self): """True if queue is empty""" return len(self.__data) == 0 def __len__(self): """Return size of queue""" return len(self.__data) def full(self): """True if queue is full""" return len(self.__data) == self.__maxsize @replicated def put(self, item): """Put an item into the queue. 
True - if item placed in queue. False - if queue is full and item can not be placed.""" if self.__maxsize and len(self.__data) >= self.__maxsize: return False self.__data.append(item) return True @replicated def get(self, default=None): """Extract item from queue. Return default if queue is empty.""" try: return self.__data.popleft() except: return default class ReplPriorityQueue(SyncObjConsumer): def __init__(self, maxsize=0): """ Replicated priority queue. Based on heapq. Has an interface similar to Queue. :param maxsize: Max queue size. :type maxsize: int """ super(ReplPriorityQueue, self).__init__() self.__maxsize = maxsize self.__data = [] def qsize(self): """Return size of queue""" return len(self.__data) def empty(self): """True if queue is empty""" return len(self.__data) == 0 def __len__(self): """Return size of queue""" return len(self.__data) def full(self): """True if queue is full""" return len(self.__data) == self.__maxsize @replicated def put(self, item): """Put an item into the queue. Items should be comparable, eg. tuples. True - if item placed in queue. False - if queue is full and item can not be placed.""" if self.__maxsize and len(self.__data) >= self.__maxsize: return False heapq.heappush(self.__data, item) return True @replicated def get(self, default=None): """Extract the smallest item from queue. Return default if queue is empty.""" if not self.__data: return default return heapq.heappop(self.__data) class _ReplLockManagerImpl(SyncObjConsumer): def __init__(self, autoUnlockTime): super(_ReplLockManagerImpl, self).__init__() self.__locks = {} self.__autoUnlockTime = autoUnlockTime @replicated def acquire(self, lockID, clientID, currentTime): existingLock = self.__locks.get(lockID, None) # Auto-unlock old lock if existingLock is not None: if currentTime - existingLock[1] > self.__autoUnlockTime: existingLock = None # Acquire lock if possible if existingLock is None or existingLock[0] == clientID: self.__locks[lockID] = (clientID, currentTime) return True # Lock already acquired by someone else return False @replicated def prolongate(self, clientID, currentTime): for lockID in list(self.__locks): lockClientID, lockTime = self.__locks[lockID] if currentTime - lockTime > self.__autoUnlockTime: del self.__locks[lockID] continue if lockClientID == clientID: self.__locks[lockID] = (clientID, currentTime) @replicated def release(self, lockID, clientID): existingLock = self.__locks.get(lockID, None) if existingLock is not None and existingLock[0] == clientID: del self.__locks[lockID] def isAcquired(self, lockID, clientID, currentTime): existingLock = self.__locks.get(lockID, None) if existingLock is not None: if existingLock[0] == clientID: if currentTime - existingLock[1] < self.__autoUnlockTime: return True return False class ReplLockManager(object): def __init__(self, autoUnlockTime, selfID = None): """Replicated Lock Manager. Allow to acquire / release distributed locks. :param autoUnlockTime: lock will be released automatically if no response from holder for more than autoUnlockTime seconds :type autoUnlockTime: float :param selfID: (optional) - unique id of current lock holder. 
:type selfID: str """ self.__lockImpl = _ReplLockManagerImpl(autoUnlockTime) if selfID is None: selfID = '%s:%d:%d' % (socket.gethostname(), os.getpid(), id(self)) self.__selfID = selfID self.__autoUnlockTime = autoUnlockTime self.__mainThread = threading.current_thread() self.__initialised = threading.Event() self.__destroying = False self.__lastProlongateTime = 0 self.__thread = threading.Thread(target=ReplLockManager._autoAcquireThread, args=(weakref.proxy(self),)) self.__thread.start() while not self.__initialised.is_set(): pass def _consumer(self): return self.__lockImpl def destroy(self): """Destroy should be called before destroying ReplLockManager""" self.__destroying = True def _autoAcquireThread(self): self.__initialised.set() try: while True: if not self.__mainThread.is_alive(): break if self.__destroying: break time.sleep(0.1) if time.time() - self.__lastProlongateTime < float(self.__autoUnlockTime) / 4.0: continue syncObj = self.__lockImpl._syncObj if syncObj is None: continue if syncObj._getLeader() is not None: self.__lastProlongateTime = time.time() self.__lockImpl.prolongate(self.__selfID, time.time()) except ReferenceError: pass def tryAcquire(self, lockID, callback=None, sync=False, timeout=None): """Attempt to acquire lock. :param lockID: unique lock identifier. :type lockID: str :param sync: True - to wait until lock is acquired or failed to acquire. :type sync: bool :param callback: if sync is False - callback will be called with operation result. :type callback: func(opResult, error) :param timeout: max operation time (default - unlimited) :type timeout: float :return True if acquired, False - somebody else already acquired lock """ attemptTime = time.time() if sync: acquireRes = self.__lockImpl.acquire(lockID, self.__selfID, attemptTime, callback=callback, sync=sync, timeout=timeout) acquireTime = time.time() if acquireRes: if acquireTime - attemptTime > self.__autoUnlockTime / 2.0: acquireRes = False self.__lockImpl.release(lockID, self.__selfID, sync=sync) return acquireRes def asyncCallback(acquireRes, errCode): if acquireRes: acquireTime = time.time() if acquireTime - attemptTime > self.__autoUnlockTime / 2.0: acquireRes = False self.__lockImpl.release(lockID, self.__selfID, sync=False) callback(acquireRes, errCode) self.__lockImpl.acquire(lockID, self.__selfID, attemptTime, callback=asyncCallback, sync=sync, timeout=timeout) def isAcquired(self, lockID): """Check if lock is acquired by ourselves. :param lockID: unique lock identifier. :type lockID: str :return True if lock is acquired by ourselves. """ return self.__lockImpl.isAcquired(lockID, self.__selfID, time.time()) def release(self, lockID, callback=None, sync=False, timeout=None): """ Release previously-acquired lock. :param lockID: unique lock identifier. :type lockID: str :param sync: True - to wait until lock is released or failed to release. :type sync: bool :param callback: if sync is False - callback will be called with operation result. :type callback: func(opResult, error) :param timeout: max operation time (default - unlimited) :type timeout: float """ self.__lockImpl.release(lockID, self.__selfID, callback=callback, sync=sync, timeout=timeout) PySyncObj-0.3.14/pysyncobj/config.py000066400000000000000000000232351475533247400173410ustar00rootroot00000000000000 class FAIL_REASON: SUCCESS = 0 #: Command successfully applied. 
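    # The codes below are delivered to command callbacks as the failReason
    # argument; a sketch (onCmdDone is a hypothetical callback name):
    #   def onCmdDone(result, failReason):
    #       if failReason != FAIL_REASON.SUCCESS:
    #           logging.warning('command failed: %s', failReason)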
QUEUE_FULL = 1 #: Commands queue full MISSING_LEADER = 2 #: Leader is currently missing (leader election in progress, or no connection) DISCARDED = 3 #: Command discarded (because a new leader was elected and another command was applied instead) NOT_LEADER = 4 #: Leader has changed, old leader did not have time to commit command. LEADER_CHANGED = 5 #: Similar to NOT_LEADER - leader has changed without command commit. REQUEST_DENIED = 6 #: Command denied class SERIALIZER_STATE: NOT_SERIALIZING = 0 #: Serialization not started or already finished. SERIALIZING = 1 #: Serialization in progress. SUCCESS = 2 #: Serialization successfully finished (should be returned only one time after finished). FAILED = 3 #: Serialization failed (should be returned only one time after finished). class SyncObjConf(object): """PySyncObj configuration object""" def __init__(self, **kwargs): #: Encrypt session with specified password. #: Install `cryptography` module to be able to set password. self.password = kwargs.get('password', None) #: Disable autoTick if you want to call onTick manually. #: Otherwise it will be called automatically from separate thread. self.autoTick = kwargs.get('autoTick', True) self.autoTickPeriod = kwargs.get('autoTickPeriod', 0.05) #: Commands queue is used to store commands before real processing. self.commandsQueueSize = kwargs.get('commandsQueueSize', 100000) #: After randomly selected timeout (in range from minTimeout to maxTimeout) #: leader considered dead, and leader election starts. self.raftMinTimeout = kwargs.get('raftMinTimeout', 0.4) #: Same as raftMinTimeout, but the upper bound of the random election timeout range. self.raftMaxTimeout = kwargs.get('raftMaxTimeout', 1.4) #: Interval of sending append_entries (ping) command. #: Should be less than raftMinTimeout. self.appendEntriesPeriod = kwargs.get('appendEntriesPeriod', 0.1) #: When no data received for connectionTimeout - connection considered dead. #: Should be more than raftMaxTimeout. self.connectionTimeout = kwargs.get('connectionTimeout', 3.5) #: Interval between connection attempts. #: Will try to connect to offline nodes each connectionRetryTime. self.connectionRetryTime = kwargs.get('connectionRetryTime', 5.0) #: When leader has no response from the majority of the cluster #: for leaderFallbackTimeout - it will fallback to follower state. #: Should be more than appendEntriesPeriod. self.leaderFallbackTimeout = kwargs.get('leaderFallbackTimeout', 30.0) #: Send multiple entries in a single command. #: Enabled (default) - improves overall performance (requests per second) #: Disabled - improves single request speed (don't wait till batch ready) self.appendEntriesUseBatch = kwargs.get('appendEntriesUseBatch', True) #: Max number of bytes per single append_entries command. self.appendEntriesBatchSizeBytes = kwargs.get('appendEntriesBatchSizeBytes', 2 ** 16) #: Bind address (address:port). Default - None. #: If None - selfAddress is used as bindAddress. #: Could be useful if selfAddress is not equal to bindAddress. #: Eg. with routers, nat, port forwarding, etc. self.bindAddress = kwargs.get('bindAddress', None) #: Preferred address type. Default - ipv4. #: None - no preferences, select random available. #: ipv4 - prefer ipv4 address type, if not available use ipv6. #: ipv6 - prefer ipv6 address type, if not available use ipv4. self.preferredAddrType = kwargs.get('preferredAddrType', 'ipv4') #: Size of send buffer for sockets. self.sendBufferSize = kwargs.get('sendBufferSize', 2 ** 16) #: Size of receive buffer for sockets.
self.recvBufferSize = kwargs.get('recvBufferSize', 2 ** 16) #: Time to cache dns requests (improves performance, #: no need to resolve address for each connection attempt). self.dnsCacheTime = kwargs.get('dnsCacheTime', 600.0) #: Time to cache failed dns request. self.dnsFailCacheTime = kwargs.get('dnsFailCacheTime', 30.0) #: Log will be compacted after it reaches minEntries size or #: minTime after previous compaction. self.logCompactionMinEntries = kwargs.get('logCompactionMinEntries', 5000) #: Log will be compacted after it reaches minEntries size or #: minTime after previous compaction. self.logCompactionMinTime = kwargs.get('logCompactionMinTime', 300) #: If true - each node will start log compaction in a separate time window. #: eg. node1 in 12.00-12.10, node2 in 12.10-12.20, node3 12.20 - 12.30, #: then again node1 12.30-12.40, node2 12.40-12.50, etc. self.logCompactionSplit = kwargs.get('logCompactionSplit', False) #: Max number of bytes per single append_entries command #: while sending serialized object. self.logCompactionBatchSize = kwargs.get('logCompactionBatchSize', 2 ** 16) #: If true - commands will be enqueued and executed after leader detected. #: Otherwise - `FAIL_REASON.MISSING_LEADER <#pysyncobj.FAIL_REASON.MISSING_LEADER>`_ error will be emitted. #: Leader is missing when establishing connection or when election in progress. self.commandsWaitLeader = kwargs.get('commandsWaitLeader', True) #: File to store full serialized object. Save full dump on disk when doing log compaction. #: None - to disable store. self.fullDumpFile = kwargs.get('fullDumpFile', None) #: File to store operations journal. Save each record as soon as received. self.journalFile = kwargs.get('journalFile', None) #: Will try to bind port every bindRetryTime seconds until success. self.bindRetryTime = kwargs.get('bindRetryTime', 1.0) #: Max number of attempts to bind port (default 0, unlimited). self.maxBindRetries = kwargs.get('maxBindRetries', 0) #: This callback will be called as soon as SyncObj syncs all data from leader. self.onReady = kwargs.get('onReady', None) #: This callback will be called for every change of SyncObj state. #: Arguments: onStateChanged(oldState, newState). #: WARNING: there could be multiple leaders at the same time! self.onStateChanged = kwargs.get('onStateChanged', None) #: If enabled - cluster configuration could be changed dynamically. self.dynamicMembershipChange = kwargs.get('dynamicMembershipChange', False) #: Sockets poller: #: * `auto` - auto select best available on current platform #: * `select` - use select poller #: * `poll` - use poll poller self.pollerType = kwargs.get('pollerType', 'auto') #: Use fork if available when serializing on disk. self.useFork = kwargs.get('useFork', True) #: Custom serialize function, it will be called when logCompaction (fullDump) happens. #: If specified - there should be a custom deserializer too. #: Arguments: serializer(fileName, data) #: data - some internal stuff that is *required* to be serialized with your object data. self.serializer = kwargs.get('serializer', None) #: Check custom serialization state, for async serializer. #: Should return one of `SERIALIZER_STATE <#pysyncobj.SERIALIZER_STATE>`_. self.serializeChecker = kwargs.get('serializeChecker', None) #: Custom deserialize function, it will be called when restoring from fullDump. #: If specified - there should be a custom serializer too. #: Should return data - internal stuff that was passed to serialize.
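        # A hedged sketch of the serializer/deserializer pair described above
        # (the pickle-based body is an assumption, not the library's own code):
        #   def mySerializer(fileName, data):
        #       with open(fileName, 'wb') as f:
        #           pickle.dump(data, f)
        #   def myDeserializer(fileName):
        #       with open(fileName, 'rb') as f:
        #           return pickle.load(f)  # must return the data passed to serialize
        #   conf = SyncObjConf(fullDumpFile='dump.bin', serializer=mySerializer,
        #                      deserializer=myDeserializer)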
self.deserializer = kwargs.get('deserializer', None) #: This callback will be called when cluster is switched to new version. #: onCodeVersionChanged(oldVer, newVer) self.onCodeVersionChanged = kwargs.get('onCodeVersionChanged', None) #: TCP socket keepalive #: (keepalive_time_seconds, probe_intervals_seconds, max_fails_count) #: Set to None to disable self.tcp_keepalive = kwargs.get('tcp_keepalive', (16, 3, 5)) def validate(self): assert self.autoTickPeriod > 0 assert self.commandsQueueSize >= 0 assert self.raftMinTimeout > self.appendEntriesPeriod * 3 assert self.raftMaxTimeout > self.raftMinTimeout assert self.appendEntriesPeriod > 0 assert self.leaderFallbackTimeout > self.appendEntriesPeriod assert self.connectionTimeout >= self.raftMaxTimeout assert self.connectionRetryTime >= 0 assert self.appendEntriesBatchSizeBytes > 0 assert self.sendBufferSize > 0 assert self.recvBufferSize > 0 assert self.dnsCacheTime >= 0 assert self.dnsFailCacheTime >= 0 assert self.logCompactionMinEntries >= 2 assert self.logCompactionMinTime > 0 assert self.logCompactionBatchSize > 0 assert self.bindRetryTime > 0 assert (self.deserializer is None) == (self.serializer is None) if self.serializer is not None: assert self.fullDumpFile is not None assert self.preferredAddrType in ('ipv4', 'ipv6', None) if self.tcp_keepalive is not None: assert isinstance(self.tcp_keepalive, tuple) assert len(self.tcp_keepalive) == 3 for i in range(3): assert isinstance(self.tcp_keepalive[i], int) assert self.tcp_keepalive[i] > 0 PySyncObj-0.3.14/pysyncobj/dns_resolver.py000066400000000000000000000043571475533247400206050ustar00rootroot00000000000000import time import socket import random import logging from .monotonic import monotonic as monotonicTime logger = logging.getLogger(__name__) class DnsCachingResolver(object): def __init__(self, cacheTime, failCacheTime): self.__cache = {} self.__cacheTime = cacheTime self.__failCacheTime = failCacheTime self.__preferredAddrFamily = socket.AF_INET def setTimeouts(self, cacheTime, failCacheTime): self.__cacheTime = cacheTime self.__failCacheTime = failCacheTime def resolve(self, hostname): currTime = monotonicTime() cachedTime, ips = self.__cache.get(hostname, (-self.__failCacheTime-1, [])) timePassed = currTime - cachedTime if (timePassed > self.__cacheTime) or (not ips and timePassed > self.__failCacheTime): prevIps = ips ips = self.__doResolve(hostname) if not ips: logger.warning("failed to resolve hostname: " + hostname) ips = prevIps self.__cache[hostname] = (currTime, ips) return None if not ips else random.choice(ips) def setPreferredAddrFamily(self, preferredAddrFamily): if preferredAddrFamily is None: self.__preferredAddrFamily = None elif preferredAddrFamily == 'ipv4': self.__preferredAddrFamily = socket.AF_INET elif preferredAddrFamily == 'ipv6': self.__preferredAddrFamily = socket.AF_INET6 else: self.__preferredAddrFamily = preferredAddrFamily def __doResolve(self, hostname): try: addrs = socket.getaddrinfo(hostname, None) ips = [] if self.__preferredAddrFamily is not None: ips = list(set([addr[4][0] for addr in addrs\ if addr[0] == self.__preferredAddrFamily])) if not ips: ips = list(set([addr[4][0] for addr in addrs])) except socket.gaierror: logger.warning('failed to resolve host %s', hostname) ips = [] return ips _g_resolver = None def globalDnsResolver(): global _g_resolver if _g_resolver is None: _g_resolver = DnsCachingResolver(cacheTime=600.0, failCacheTime=30.0) return _g_resolver 
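# Usage sketch for the cached resolver above (the hostname is a placeholder):
#   resolver = globalDnsResolver()
#   ip = resolver.resolve('node1.example.com')  # hits DNS at most once per
#                                               # cacheTime seconds; a failed
#                                               # lookup is retried only after
#                                               # failCacheTime seconds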
PySyncObj-0.3.14/pysyncobj/encryptor.py000066400000000000000000000013431475533247400201150ustar00rootroot00000000000000import base64 try: import cryptography from cryptography.fernet import Fernet from cryptography.hazmat.backends import default_backend from cryptography.hazmat.primitives import hashes from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC HAS_CRYPTO = True except: HAS_CRYPTO = False SALT = b'\x15%q\xe6\xbb\x02\xa6\xf8\x13q\x90\xcf6+\x1e\xeb' def getEncryptor(password): if not isinstance(password, bytes): password = bytes(password.encode()) kdf = PBKDF2HMAC( algorithm=hashes.SHA256(), length=32, salt=SALT, iterations=100000, backend=default_backend() ) key = base64.urlsafe_b64encode(kdf.derive(password)) return Fernet(key) PySyncObj-0.3.14/pysyncobj/fast_queue.py000066400000000000000000000013421475533247400202300ustar00rootroot00000000000000try: import Queue except ImportError: import queue as Queue from collections import deque import threading # According to benchmarks, standard Queue is slow. # Using FastQueue improves overall performance by ~15% class FastQueue(object): def __init__(self, maxSize): self.__queue = deque() self.__lock = threading.Lock() self.__maxSize = maxSize def put_nowait(self, value): with self.__lock: if len(self.__queue) > self.__maxSize: raise Queue.Full() self.__queue.append(value) def get_nowait(self): with self.__lock: if len(self.__queue) == 0: raise Queue.Empty() return self.__queue.popleft() PySyncObj-0.3.14/pysyncobj/journal.py000066400000000000000000000173371475533247400175540ustar00rootroot00000000000000import os import mmap import struct import shutil from .version import VERSION from .pickle import to_bytes, loads, dumps class Journal(object): def add(self, command, idx, term): raise NotImplementedError def clear(self): raise NotImplementedError def deleteEntriesFrom(self, entryFrom): raise NotImplementedError def deleteEntriesTo(self, entryTo): raise NotImplementedError def __getitem__(self, item): raise NotImplementedError def __len__(self): raise NotImplementedError def _destroy(self): raise NotImplementedError def setRaftCommitIndex(self, raftCommitIndex): raise NotImplementedError def getRaftCommitIndex(self): raise NotImplementedError def onOneSecondTimer(self): pass class MemoryJournal(Journal): def __init__(self): self.__journal = [] self.__bytesSize = 0 self.__lastCommitIndex = 0 def add(self, command, idx, term): self.__journal.append((command, idx, term)) def clear(self): self.__journal = [] def deleteEntriesFrom(self, entryFrom): del self.__journal[entryFrom:] def deleteEntriesTo(self, entryTo): self.__journal = self.__journal[entryTo:] def __getitem__(self, item): return self.__journal[item] def __len__(self): return len(self.__journal) def _destroy(self): pass def setRaftCommitIndex(self, raftCommitIndex): pass def getRaftCommitIndex(self): return 1 class ResizableFile(object): def __init__(self, fileName, initialSize = 1024, resizeFactor = 2.0, defaultContent = None): self.__fileName = fileName self.__resizeFactor = resizeFactor if not os.path.exists(fileName): with open(fileName, 'wb') as f: if defaultContent is not None: f.write(defaultContent) self.__f = open(fileName, 'r+b') self.__mm = mmap.mmap(self.__f.fileno(), 0) currSize = self.__mm.size() if currSize < initialSize: try: self.__mm.resize(initialSize) except SystemError: self.__extand(initialSize - currSize) def write(self, offset, values): size = len(values) currSize = self.__mm.size() if offset + size > self.__mm.size(): try: 
self.__mm.resize(int(self.__mm.size() * self.__resizeFactor)) except SystemError: self.__extand(int(self.__mm.size() * self.__resizeFactor) - currSize) self.__mm[offset:offset + size] = values def read(self, offset, size): return self.__mm[offset:offset + size] def __extand(self, bytesToAdd): self.__mm.close() self.__f.close() with open(self.__fileName, 'ab') as f: f.write(b'\0' * bytesToAdd) self.__f = open(self.__fileName, 'r+b') self.__mm = mmap.mmap(self.__f.fileno(), 0) def _destroy(self): self.__mm.flush() self.__mm.close() self.__f.close() def flush(self): self.__mm.flush() class MetaStorer(object): def __init__(self, path): self.__path = path def getMeta(self): meta = {} try: meta = loads(open(self.__path, 'rb').read()) except: pass return meta def storeMeta(self, meta): with open(self.__path + '.tmp', 'wb') as f: f.write(dumps(meta)) f.flush() shutil.move(self.__path + '.tmp', self.__path) def getPath(self): return self.__path JOURNAL_FORMAT_VERSION = 1 APP_NAME = b'PYSYNCOBJ' APP_VERSION = str.encode(VERSION) NAME_SIZE = 24 VERSION_SIZE = 8 assert len(APP_NAME) < NAME_SIZE assert len(APP_VERSION) < VERSION_SIZE FIRST_RECORD_OFFSET = NAME_SIZE + VERSION_SIZE + 4 + 4 LAST_RECORD_OFFSET_OFFSET = NAME_SIZE + VERSION_SIZE + 4 # # APP_NAME (24b) + APP_VERSION (8b) + FORMAT_VERSION (4b) + LAST_RECORD_OFFSET (4b) + # record1size + record1 + record1size + record2size + record2 + record2size + ... # (record1) | (record2) | ... # class FileJournal(Journal): def __init__(self, journalFile): self.__journalFile = ResizableFile(journalFile, defaultContent=self.__getDefaultHeader()) self.__journal = [] self.__metaStorer = MetaStorer(journalFile + '.meta') self.__meta = self.__metaStorer.getMeta() self.__metaSaved = True currentOffset = FIRST_RECORD_OFFSET lastRecordOffset = self.__getLastRecordOffset() while currentOffset < lastRecordOffset: nextRecordSize = struct.unpack(' Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. """ import time __all__ = ('monotonic',) try: time.CLOCK_MONOTONIC_RAW time.clock_gettime(time.CLOCK_MONOTONIC_RAW) monotonic = lambda: time.clock_gettime(time.CLOCK_MONOTONIC_RAW) except AttributeError: import ctypes import ctypes.util import os import sys import threading try: if sys.platform == 'darwin': # OS X, iOS # See Technical Q&A QA1398 of the Mac Developer Library: # libc = ctypes.CDLL('/usr/lib/libc.dylib', use_errno=True) class mach_timebase_info_data_t(ctypes.Structure): """System timebase info. 
Defined in .""" _fields_ = (('numer', ctypes.c_uint32), ('denom', ctypes.c_uint32)) mach_absolute_time = libc.mach_absolute_time mach_absolute_time.restype = ctypes.c_uint64 timebase = mach_timebase_info_data_t() libc.mach_timebase_info(ctypes.byref(timebase)) ticks_per_second = timebase.numer / timebase.denom * 1.0e9 def monotonic(): """Monotonic clock, cannot go backward.""" return mach_absolute_time() / ticks_per_second elif sys.platform.startswith('win32') or sys.platform.startswith('cygwin'): if sys.platform.startswith('cygwin'): # Note: cygwin implements clock_gettime (CLOCK_MONOTONIC = 4) since # version 1.7.6. Using raw WinAPI for maximum version compatibility. # Ugly hack using the wrong calling convention (in 32-bit mode) # because ctypes has no windll under cygwin (and it also seems that # the code letting you select stdcall in _ctypes doesn't exist under # the preprocessor definitions relevant to cygwin). # This is 'safe' because: # 1. The ABI of GetTickCount and GetTickCount64 is identical for # both calling conventions because they both have no parameters. # 2. libffi masks the problem because after making the call it doesn't # touch anything through esp and epilogue code restores a correct # esp from ebp afterwards. try: kernel32 = ctypes.cdll.kernel32 except OSError: # 'No such file or directory' kernel32 = ctypes.cdll.LoadLibrary('kernel32.dll') else: kernel32 = ctypes.windll.kernel32 GetTickCount64 = getattr(kernel32, 'GetTickCount64', None) if GetTickCount64: # Windows Vista / Windows Server 2008 or newer. GetTickCount64.restype = ctypes.c_ulonglong def monotonic(): """Monotonic clock, cannot go backward.""" return GetTickCount64() / 1000.0 else: # Before Windows Vista. GetTickCount = kernel32.GetTickCount GetTickCount.restype = ctypes.c_uint32 get_tick_count_lock = threading.Lock() get_tick_count_last_sample = 0 get_tick_count_wraparounds = 0 def monotonic(): """Monotonic clock, cannot go backward.""" global get_tick_count_last_sample global get_tick_count_wraparounds with get_tick_count_lock: current_sample = GetTickCount() if current_sample < get_tick_count_last_sample: get_tick_count_wraparounds += 1 get_tick_count_last_sample = current_sample final_milliseconds = get_tick_count_wraparounds << 32 final_milliseconds += get_tick_count_last_sample return final_milliseconds / 1000.0 else: try: clock_gettime = ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True).clock_gettime except Exception: clock_gettime = ctypes.CDLL(ctypes.util.find_library('rt'), use_errno=True).clock_gettime class timespec(ctypes.Structure): """Time specification, as described in clock_gettime(3).""" _fields_ = (('tv_sec', ctypes.c_long), ('tv_nsec', ctypes.c_long)) if sys.platform.startswith('linux'): CLOCK_MONOTONIC = 4 # actually this is CLOCK_MONOTONIC_RAW elif sys.platform.startswith('freebsd'): CLOCK_MONOTONIC = 4 elif sys.platform.startswith('sunos5'): CLOCK_MONOTONIC = 4 elif 'bsd' in sys.platform: CLOCK_MONOTONIC = 3 elif sys.platform.startswith('aix'): CLOCK_MONOTONIC = ctypes.c_longlong(10) def monotonic(): """Monotonic clock, cannot go backward.""" ts = timespec() if clock_gettime(CLOCK_MONOTONIC, ctypes.pointer(ts)): errno = ctypes.get_errno() raise OSError(errno, os.strerror(errno)) return ts.tv_sec + ts.tv_nsec / 1.0e9 # Perform a sanity-check. 
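            # (Python evaluates the left operand first, so on a truly
            # monotonic clock the first sample can never exceed the second
            # and the difference below is always <= 0; a positive value
            # means the selected implementation is broken.)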
if monotonic() - monotonic() > 0: raise ValueError('monotonic() is not monotonic!') except Exception as e: monotonic = lambda: time.time() PySyncObj-0.3.14/pysyncobj/node.py000066400000000000000000000055421475533247400170220ustar00rootroot00000000000000from .dns_resolver import globalDnsResolver class Node(object): """ A representation of any node in the network. The ID must uniquely identify a node. Node objects with the same ID will be treated as equal, i.e. as representing the same node. """ def __init__(self, id, **kwargs): """ Initialise the Node; id must be immutable, hashable, and unique. :param id: unique, immutable, hashable ID of a node :type id: any :param **kwargs: any further information that should be kept about this node """ self._id = id for key in kwargs: setattr(self, key, kwargs[key]) def __setattr__(self, name, value): if name == 'id': raise AttributeError('Node id is not mutable') super(Node, self).__setattr__(name, value) def __eq__(self, other): return isinstance(other, Node) and self.id == other.id def __ne__(self, other): # In Python 3, __ne__ defaults to inverting the result of __eq__. # Python 2 isn't as sane. So for Python 2 compatibility, we also need to define the != operator explicitly. return not (self == other) def __hash__(self): return hash(self.id) def __str__(self): return self.id def __repr__(self): v = vars(self) return '{}({}{})'.format(type(self).__name__, repr(self.id), (', ' + ', '.join('{} = {}'.format(key, repr(v[key])) for key in v if key != '_id')) if len(v) > 1 else '') def _destroy(self): pass @property def id(self): return self._id class TCPNode(Node): """ A node intended for communication over TCP/IP. Its id is the network address (host:port). """ def __init__(self, address, **kwargs): """ Initialise the TCPNode :param address: network address of the node in the format 'host:port' :type address: str :param **kwargs: any further information that should be kept about this node """ super(TCPNode, self).__init__(address, **kwargs) self.__address = address self.__host, port = address.rsplit(':', 1) self.__port = int(port) #self.__ip = globalDnsResolver().resolve(self.host) @property def address(self): return self.__address @property def host(self): return self.__host @property def port(self): return self.__port @property def ip(self): return globalDnsResolver().resolve(self.__host) def __repr__(self): v = vars(self) filtered = ['_id', '_TCPNode__address', '_TCPNode__host', '_TCPNode__port', '_TCPNode__ip'] formatted = ['{} = {}'.format(key, repr(v[key])) for key in v if key not in filtered] return '{}({}{})'.format(type(self).__name__, repr(self.id), (', ' + ', '.join(formatted)) if len(formatted) else '') PySyncObj-0.3.14/pysyncobj/pickle.py000066400000000000000000000040641475533247400173420ustar00rootroot00000000000000import sys is_py3 = sys.version_info >= (3, 0) if is_py3: import pickle from struct import unpack # python3 sometimes fails to unpickle data pickled by python2, it happens # because it is trying to decode binary data into a string and fails. # UnicodeDecodeError exception is raised in this case. Instead of simply # giving up we will retry decoding with the "slow" _Unpickler implemented # in pure python with the following methods overridden. # The main idea is - treat object as binary if the decoding has failed. # This approach will not affect performance when we run all nodes with # the same python version, because it will never retry. 
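# A sketch of the retry pattern described above (function and argument names
# here are illustrative assumptions, not the module's actual entry point):
#   def loads(data):
#       try:
#           return pickle.loads(data)
#       except UnicodeDecodeError:
#           # binary-safe fallback: pure-python unpickler with the overrides below
#           return _Unpickler(io.BytesIO(data)).load()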
def _load_short_binstring(self): len = ord(self.read(1)) data = self.read(len) try: data = str(data, self.encoding, self.errors) except: pass self.append(data) def _load_binstring(self): len, = unpack(' [(termID, callback), ...] self.__commandsLocalCounter = 0 self.__commandsWaitingReply = {} # commandLocalCounter => callback self.__properies = set() for key in self.__dict__: self.__properies.add(key) self.__enabledCodeVersion = 0 if self.__conf.autoTick: self.__mainThread = threading.current_thread() self.__initialised = threading.Event() self.__thread = threading.Thread(target=SyncObj._autoTickThread, args=(weakref.proxy(self),)) self.__thread.start() self.__initialised.wait() # while not self.__initialised.is_set(): # pass else: try: while not self.__transport.ready: self.__transport.tryGetReady() except TransportNotReadyError: logger.exception('failed to perform initialization') raise SyncObjException('BindError') # Backwards compatibility def destroy(self): """ Correctly destroy SyncObj. Stop autoTickThread, close connections, etc. """ if self.__conf.autoTick: self.__destroying = True else: self._doDestroy() def tick_thread_alive(self): """ Check if the tick thread is alive. """ if self.__thread and self.__thread.is_alive(): return True return False def destroy_synchronous(self): """ Correctly destroy SyncObj. Stop autoTickThread, close connections, etc. and ensure the threads are gone. """ self.destroy() self.__thread.join() def waitReady(self): """ Waits until the transport is ready for operation. :raises TransportNotReadyError: if the transport fails to get ready """ self.__transport.waitReady() def waitBinded(self): """ Waits until initialized (binded port). If success - just returns. If failed to initialized after conf.maxBindRetries - raise SyncObjException. """ try: self.__transport.waitReady() except TransportNotReadyError: raise SyncObjException('BindError') if not self.__transport.ready: raise SyncObjException('BindError') def _destroy(self): self.destroy() def _doDestroy(self): self.__transport.destroy() for consumer in self.__consumers: consumer._destroy() self.__raftLog._destroy() def getCodeVersion(self): return self.__enabledCodeVersion def setCodeVersion(self, newVersion, callback = None): """Switch to a new code version on all cluster nodes. You should ensure that cluster nodes are updated, otherwise they won't be able to apply commands. :param newVersion: new code version :type int :param callback: will be called on success or fail :type callback: function(`FAIL_REASON <#pysyncobj.FAIL_REASON>`_, None) """ assert isinstance(newVersion, int) if newVersion > self.__selfCodeVersion: raise Exception('wrong version, current version is %d, requested version is %d' % (self.__selfCodeVersion, newVersion)) if newVersion < self.__enabledCodeVersion: raise Exception('wrong version, enabled version is %d, requested version is %d' % (self.__enabledCodeVersion, newVersion)) self._applyCommand(pickle.dumps(newVersion), callback, _COMMAND_TYPE.VERSION) def addNodeToCluster(self, node, callback = None): """Add single node to cluster (dynamic membership changes). Async. You should wait until node successfully added before adding next node. 
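        Example (a sketch; the address and callback are placeholders, and
        SyncObjConf(dynamicMembershipChange=True) must be enabled):

            syncObj.addNodeToCluster('serverD:4321',
                                     callback=lambda res, err: logger.info(err))
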
:param node: node object or 'nodeHost:nodePort' :type node: Node | str :param callback: will be called on success or fail :type callback: function(`FAIL_REASON <#pysyncobj.FAIL_REASON>`_, None) """ if not self.__conf.dynamicMembershipChange: raise Exception('dynamicMembershipChange is disabled') if not isinstance(node, Node): node = self.__nodeClass(node) self._applyCommand(pickle.dumps(['add', node.id, node]), callback, _COMMAND_TYPE.MEMBERSHIP) def removeNodeFromCluster(self, node, callback = None): """Remove single node from cluster (dynamic membership changes). Async. You should wait until node successfully added before adding next node. :param node: node object or 'nodeHost:nodePort' :type node: Node | str :param callback: will be called on success or fail :type callback: function(`FAIL_REASON <#pysyncobj.FAIL_REASON>`_, None) """ if not self.__conf.dynamicMembershipChange: raise Exception('dynamicMembershipChange is disabled') if not isinstance(node, Node): node = self.__nodeClass(node) self._applyCommand(pickle.dumps(['rem', node.id, node]), callback, _COMMAND_TYPE.MEMBERSHIP) def _setCodeVersion(self, args, callback): self.setCodeVersion(args[0], callback) def _addNodeToCluster(self, args, callback): self.addNodeToCluster(args[0], callback) def _removeNodeFromCluster(self, args, callback): node = args[0] if node == self.__selfNode.address: callback(None, FAIL_REASON.REQUEST_DENIED) else: self.removeNodeFromCluster(node, callback) def __onSetCodeVersion(self, newVersion): methods = [m for m in dir(self) if callable(getattr(self, m)) and\ getattr(getattr(self, m), 'replicated', False) and \ m != getattr(getattr(self, m), 'origName')] self.__currentVersionFuncNames = {} funcVersions = collections.defaultdict(set) for method in methods: ver = getattr(getattr(self, method), 'ver') origFuncName = getattr(getattr(self, method), 'origName') funcVersions[origFuncName].add(ver) for consumer in self.__consumers: consumerID = id(consumer) consumerMethods = [m for m in dir(consumer) if callable(getattr(consumer, m)) and \ getattr(getattr(consumer, m), 'replicated', False)] for method in consumerMethods: ver = getattr(getattr(consumer, method), 'ver') origFuncName = getattr(getattr(consumer, method), 'origName') funcVersions[(consumerID, origFuncName)].add(ver) for funcName, versions in iteritems(funcVersions): versions = sorted(list(versions)) for v in versions: if v > newVersion: break realFuncName = funcName[1] if isinstance(funcName, tuple) else funcName self.__currentVersionFuncNames[funcName] = realFuncName + '_v' + str(v) def _getFuncName(self, funcName): return self.__currentVersionFuncNames[funcName] def _applyCommand(self, command, callback, commandType = None): try: if commandType is None: self.__commandsQueue.put_nowait((command, callback)) else: self.__commandsQueue.put_nowait((_bchr(commandType) + command, callback)) if not self.__conf.appendEntriesUseBatch and PIPE_NOTIFIER_ENABLED: self.__pipeNotifier.notify() except Queue.Full: self.__callErrCallback(FAIL_REASON.QUEUE_FULL, callback) def _checkCommandsToApply(self): startTime = monotonicTime() while monotonicTime() - startTime < self.__conf.appendEntriesPeriod: if self.__raftLeader is None and self.__conf.commandsWaitLeader: break try: command, callback = self.__commandsQueue.get_nowait() except Queue.Empty: break requestNode, requestID = None, None if isinstance(callback, tuple): requestNode, requestID = callback if self.__raftState == _RAFT_STATE.LEADER: idx, term = self.__getCurrentLogIndex() + 1, self.__raftCurrentTerm if 
self.__conf.dynamicMembershipChange: changeClusterRequest = self.__parseChangeClusterRequest(command) else: changeClusterRequest = None if changeClusterRequest is None or self.__changeCluster(changeClusterRequest): self.__raftLog.add(command, idx, term) if requestNode is None: if callback is not None: self.__commandsWaitingCommit[idx].append((term, callback)) else: self.__transport.send(requestNode, { 'type': 'apply_command_response', 'request_id': requestID, 'log_idx': idx, 'log_term': term, }) if not self.__conf.appendEntriesUseBatch: self.__sendAppendEntries() else: if requestNode is None: if callback is not None: callback(None, FAIL_REASON.REQUEST_DENIED) else: self.__transport.send(requestNode, { 'type': 'apply_command_response', 'request_id': requestID, 'error': FAIL_REASON.REQUEST_DENIED, }) elif self.__raftLeader is not None: if requestNode is None: message = { 'type': 'apply_command', 'command': command, } if callback is not None: self.__commandsLocalCounter += 1 self.__commandsWaitingReply[self.__commandsLocalCounter] = callback message['request_id'] = self.__commandsLocalCounter self.__transport.send(self.__raftLeader, message) else: self.__transport.send(requestNode, { 'type': 'apply_command_response', 'request_id': requestID, 'error': FAIL_REASON.NOT_LEADER, }) else: self.__callErrCallback(FAIL_REASON.MISSING_LEADER, callback) def _autoTickThread(self): try: self.__transport.tryGetReady() except TransportNotReadyError: logger.exception('failed to perform initialization') return finally: self.__initialised.set() time.sleep(0.1) try: while True: if not self.__mainThread.is_alive(): break if self.__destroying: self._doDestroy() break try: self._onTick(self.__conf.autoTickPeriod) except Exception: # log, wait a little and retry logger.exception('failed _onTick in _autoTickThread') time.sleep(self.__conf.autoTickPeriod) except ReferenceError: pass def doTick(self, timeToWait=0.0): """Performs single tick. Should be called manually if `autoTick <#pysyncobj.SyncObjConf.autoTick>`_ disabled :param timeToWait: max time to wait for next tick. If zero - perform single tick without waiting for new events. Otherwise - wait for new socket event and return. 
:type timeToWait: float """ assert not self.__conf.autoTick self._onTick(timeToWait) def _onTick(self, timeToWait=0.0): if not self.__transport.ready: try: self.__transport.tryGetReady() except TransportNotReadyError: # Implicitly handled in the 'if not self.__transport.ready' below pass if not self.__transport.ready: time.sleep(timeToWait) self.__applyLogEntries() return if self.__needLoadDumpFile: if self.__conf.fullDumpFile is not None and os.path.isfile(self.__conf.fullDumpFile): self.__loadDumpFile(clearJournal=False) self.__needLoadDumpFile = False workTime = monotonicTime() - self.__startTime if workTime > self.__numOneSecondDumps: self.__numOneSecondDumps += 1 self.__raftLog.onOneSecondTimer() if self.__raftState in (_RAFT_STATE.FOLLOWER, _RAFT_STATE.CANDIDATE) and self.__selfNode is not None: if self.__raftElectionDeadline < monotonicTime() and self.__connectedToAnyone(): self.__raftElectionDeadline = monotonicTime() + self.__generateRaftTimeout() self.__raftLeader = None self.__setState(_RAFT_STATE.CANDIDATE) self.__raftCurrentTerm += 1 self.__votedForNodeId = self.__selfNode.id self.__votesCount = 1 for node in self.__otherNodes: self.__transport.send(node, { 'type': 'request_vote', 'term': self.__raftCurrentTerm, 'last_log_index': self.__getCurrentLogIndex(), 'last_log_term': self.__getCurrentLogTerm(), }) self.__onLeaderChanged() if self.__votesCount > (len(self.__otherNodes) + 1) / 2: self.__onBecomeLeader() if self.__raftState == _RAFT_STATE.LEADER: commitIdx = self.__raftCommitIndex nextCommitIdx = self.__raftCommitIndex while commitIdx < self.__getCurrentLogIndex(): commitIdx += 1 count = 1 for node in self.__otherNodes: if self.__raftMatchIndex[node] >= commitIdx: count += 1 if count <= (len(self.__otherNodes) + 1) / 2: break entries = self.__getEntries(commitIdx, 1) if not entries: continue commitTerm = entries[0][2] if commitTerm != self.__raftCurrentTerm: continue nextCommitIdx = commitIdx if self.__raftCommitIndex != nextCommitIdx: self.__raftCommitIndex = nextCommitIdx self.__raftLog.setRaftCommitIndex(self.__raftCommitIndex) self.__leaderCommitIndex = self.__raftCommitIndex deadline = monotonicTime() - self.__conf.leaderFallbackTimeout count = 1 for node in self.__otherNodes: if self.__lastResponseTime[node] > deadline: count += 1 if count <= (len(self.__otherNodes) + 1) / 2: self.__setState(_RAFT_STATE.FOLLOWER) self.__raftLeader = None needSendAppendEntries = self.__applyLogEntries() if self.__raftState == _RAFT_STATE.LEADER: if monotonicTime() > self.__newAppendEntriesTime or needSendAppendEntries: self.__sendAppendEntries() if not self.__onReadyCalled and self.__raftLastApplied == self.__leaderCommitIndex: if self.__conf.onReady: self.__conf.onReady() self.__onReadyCalled = True self._checkCommandsToApply() self.__tryLogCompaction() with self.__onTickCallbacksLock: for callback in self.__onTickCallbacks: callback() self._poller.poll(timeToWait) def __applyLogEntries(self): needSendAppendEntries = False if self.__raftCommitIndex > self.__raftLastApplied: count = self.__raftCommitIndex - self.__raftLastApplied entries = self.__getEntries(self.__raftLastApplied + 1, count) for entry in entries: try: currentTermID = entry[2] subscribers = self.__commandsWaitingCommit.pop(entry[1], []) res = self.__doApplyCommand(entry[0]) for subscribeTermID, callback in subscribers: if subscribeTermID == currentTermID: callback(res, FAIL_REASON.SUCCESS) else: callback(None, FAIL_REASON.DISCARDED) self.__raftLastApplied += 1 except SyncObjExceptionWrongVer as e: logger.error( 
'request to switch to unsupported code version (self version: %d, requested version: %d)' % (self.__selfCodeVersion, e.ver)) if not self.__conf.appendEntriesUseBatch: needSendAppendEntries = True return needSendAppendEntries def addOnTickCallback(self, callback): with self.__onTickCallbacksLock: self.__onTickCallbacks.append(callback) def removeOnTickCallback(self, callback): with self.__onTickCallbacksLock: try: self.__onTickCallbacks.remove(callback) except ValueError: # callback not in list, ignore pass def isNodeConnected(self, node): """ Checks if the given node is connected :param node: node to check :type node: Node :rtype: bool """ return node in self.__connectedNodes @property def selfNode(self): """ :rtype: Node """ return self.__selfNode @property def otherNodes(self): """ :rtype: set of Node """ return self.__otherNodes.copy() @property def readonlyNodes(self): """ :rtype: set of Node """ return self.__readonlyNodes.copy() @property def raftLastApplied(self): """ :rtype: int """ return self.__raftLastApplied @property def raftCommitIndex(self): """ :rtype: int """ return self.__raftCommitIndex @property def raftCurrentTerm(self): """ :rtype: int """ return self.__raftCurrentTerm @property def hasQuorum(self): ''' Does the cluster have a quorum according to this node :rtype: bool ''' nodes = self.__otherNodes node_count = len(nodes) # Get number of connected nodes that participate in cluster quorum connected_count = len(nodes.intersection(self.__connectedNodes)) if self.__selfNode is not None: # This node participates in cluster quorum connected_count += 1 node_count += 1 return connected_count > node_count / 2 def getStatus(self): """Dumps different debug info about cluster to dict and return it""" status = {} status['version'] = VERSION status['revision'] = 'deprecated' status['self'] = self.selfNode status['state'] = self.__raftState status['leader'] = self.__raftLeader status['has_quorum'] = self.hasQuorum status['partner_nodes_count'] = len(self.__otherNodes) for node in self.__otherNodes: status['partner_node_status_server_' + node.id] = 2 if self.isNodeConnected(node) else 0 status['readonly_nodes_count'] = len(self.__readonlyNodes) for node in self.__readonlyNodes: status['readonly_node_status_server_' + node.id] = 2 if self.isNodeConnected(node) else 0 status['log_len'] = len(self.__raftLog) status['last_applied'] = self.raftLastApplied status['commit_idx'] = self.raftCommitIndex status['raft_term'] = self.raftCurrentTerm status['next_node_idx_count'] = len(self.__raftNextIndex) for node, idx in iteritems(self.__raftNextIndex): status['next_node_idx_server_' + node.id] = idx status['match_idx_count'] = len(self.__raftMatchIndex) for node, idx in iteritems(self.__raftMatchIndex): status['match_idx_server_' + node.id] = idx status['leader_commit_idx'] = self.__leaderCommitIndex status['uptime'] = int(monotonicTime() - self.__startTime) status['self_code_version'] = self.__selfCodeVersion status['enabled_code_version'] = self.__enabledCodeVersion return status def _getStatus(self, args, callback): callback(self.getStatus(), None) def printStatus(self): """Dumps different debug info about cluster to default logger""" status = self.getStatus() for k, v in iteritems(status): logger.info('%s: %s' % (str(k), str(v))) def _printStatus(self): self.printStatus() def forceLogCompaction(self): """Force to start log compaction (without waiting required time or required number of entries)""" self.__forceLogCompaction = True def _forceLogCompaction(self): self.forceLogCompaction() def 
__doApplyCommand(self, command): commandType = ord(command[:1]) # Skip no-op and membership change commands if commandType == _COMMAND_TYPE.VERSION: ver = pickle.loads(command[1:]) if self.__selfCodeVersion < ver: raise SyncObjExceptionWrongVer(ver) oldVer = self.__enabledCodeVersion self.__enabledCodeVersion = ver callback = self.__conf.onCodeVersionChanged self.__onSetCodeVersion(ver) if callback is not None: callback(oldVer, ver) return # This is required only after node restarts and apply journal # for normal case it is already done earlier and calls will be ignored clusterChangeRequest = self.__parseChangeClusterRequest(command) if clusterChangeRequest is not None: self.__doChangeCluster(clusterChangeRequest) return if commandType != _COMMAND_TYPE.REGULAR: return command = pickle.loads(command[1:]) args = [] kwargs = { '_doApply': True, } if not isinstance(command, tuple): funcID = command elif len(command) == 2: funcID, args = command else: funcID, args, newKwArgs = command kwargs.update(newKwArgs) return self._idToMethod[funcID](*args, **kwargs) def __onMessageReceived(self, node, message): if message['type'] == 'request_vote' and self.__selfNode is not None: if message['term'] > self.__raftCurrentTerm: self.__raftCurrentTerm = message['term'] self.__votedForNodeId = None self.__setState(_RAFT_STATE.FOLLOWER) self.__raftLeader = None if self.__raftState in (_RAFT_STATE.FOLLOWER, _RAFT_STATE.CANDIDATE): lastLogTerm = message['last_log_term'] lastLogIdx = message['last_log_index'] if message['term'] >= self.__raftCurrentTerm: if lastLogTerm < self.__getCurrentLogTerm(): return if lastLogTerm == self.__getCurrentLogTerm() and \ lastLogIdx < self.__getCurrentLogIndex(): return if self.__votedForNodeId is not None: return self.__votedForNodeId = node.id self.__raftElectionDeadline = monotonicTime() + self.__generateRaftTimeout() self.__transport.send(node, { 'type': 'response_vote', 'term': message['term'], }) if message['type'] == 'append_entries' and message['term'] >= self.__raftCurrentTerm: self.__raftElectionDeadline = monotonicTime() + self.__generateRaftTimeout() if self.__raftLeader != node: self.__onLeaderChanged() self.__raftLeader = node if message['term'] > self.__raftCurrentTerm: self.__raftCurrentTerm = message['term'] self.__votedForNodeId = None self.__setState(_RAFT_STATE.FOLLOWER) newEntries = message.get('entries', []) serialized = message.get('serialized', None) self.__leaderCommitIndex = leaderCommitIndex = message['commit_index'] # Regular append entries if 'prevLogIdx' in message: transmission = message.get('transmission', None) if transmission is not None: if transmission == 'start': self.__recvTransmission = message['data'] self.__sendNextNodeIdx(node, success=False, reset=False) return elif transmission == 'process': self.__recvTransmission += message['data'] self.__sendNextNodeIdx(node, success=False, reset=False) return elif transmission == 'finish': self.__recvTransmission += message['data'] newEntries = [pickle.loads(self.__recvTransmission)] self.__recvTransmission = '' else: raise Exception('Wrong transmission type') prevLogIdx = message['prevLogIdx'] prevLogTerm = message['prevLogTerm'] prevEntries = self.__getEntries(prevLogIdx) if not prevEntries: self.__sendNextNodeIdx(node, success=False, reset=True) return if prevEntries[0][2] != prevLogTerm: self.__sendNextNodeIdx(node, nextNodeIdx = prevLogIdx, success = False, reset=True) return if len(prevEntries) > 1: # rollback cluster changes if self.__conf.dynamicMembershipChange: for entry in 
reversed(prevEntries[1:]): clusterChangeRequest = self.__parseChangeClusterRequest(entry[0]) if clusterChangeRequest is not None: self.__doChangeCluster(clusterChangeRequest, reverse=True) self.__deleteEntriesFrom(prevLogIdx + 1) for entry in newEntries: self.__raftLog.add(*entry) # apply cluster changes if self.__conf.dynamicMembershipChange: for entry in newEntries: clusterChangeRequest = self.__parseChangeClusterRequest(entry[0]) if clusterChangeRequest is not None: self.__doChangeCluster(clusterChangeRequest) nextNodeIdx = prevLogIdx + 1 if newEntries: nextNodeIdx = newEntries[-1][1] + 1 self.__sendNextNodeIdx(node, nextNodeIdx=nextNodeIdx, success=True) # Install snapshot elif serialized is not None: if self.__serializer.setTransmissionData(serialized): self.__loadDumpFile(clearJournal=True) self.__sendNextNodeIdx(node, success=True) if leaderCommitIndex > self.__raftCommitIndex: self.__raftCommitIndex = min(leaderCommitIndex, self.__getCurrentLogIndex()) self.__raftLog.setRaftCommitIndex(self.__raftCommitIndex) if message['type'] == 'apply_command': if 'request_id' in message: self._applyCommand(message['command'], (node, message['request_id'])) else: self._applyCommand(message['command'], None) if message['type'] == 'apply_command_response': requestID = message['request_id'] error = message.get('error', None) callback = self.__commandsWaitingReply.pop(requestID, None) if callback is not None: if error is not None: callback(None, error) else: idx = message['log_idx'] term = message['log_term'] assert idx > self.__raftLastApplied self.__commandsWaitingCommit[idx].append((term, callback)) if self.__raftState == _RAFT_STATE.CANDIDATE: if message['type'] == 'response_vote' and message['term'] == self.__raftCurrentTerm: self.__votesCount += 1 if self.__votesCount > (len(self.__otherNodes) + 1) / 2: self.__onBecomeLeader() if self.__raftState == _RAFT_STATE.LEADER: if message['type'] == 'next_node_idx': reset = message['reset'] nextNodeIdx = message['next_node_idx'] success = message['success'] currentNodeIdx = nextNodeIdx - 1 if reset: self.__raftNextIndex[node] = nextNodeIdx if success: if self.__raftMatchIndex[node] < currentNodeIdx: self.__raftMatchIndex[node] = currentNodeIdx self.__raftNextIndex[node] = nextNodeIdx self.__lastResponseTime[node] = monotonicTime() def __callErrCallback(self, err, callback): if callback is None: return if isinstance(callback, tuple): requestNode, requestID = callback self.__transport.send(requestNode, { 'type': 'apply_command_response', 'request_id': requestID, 'error': err, }) return callback(None, err) def __sendNextNodeIdx(self, node, reset=False, nextNodeIdx = None, success = False): if nextNodeIdx is None: nextNodeIdx = self.__getCurrentLogIndex() + 1 self.__transport.send(node, { 'type': 'next_node_idx', 'next_node_idx': nextNodeIdx, 'reset': reset, 'success': success, }) def __generateRaftTimeout(self): minTimeout = self.__conf.raftMinTimeout maxTimeout = self.__conf.raftMaxTimeout return minTimeout + (maxTimeout - minTimeout) * random.random() def __onReadonlyNodeConnected(self, node): self.__readonlyNodes.add(node) self.__connectedNodes.add(node) self.__raftNextIndex[node] = self.__getCurrentLogIndex() + 1 self.__raftMatchIndex[node] = 0 def __onReadonlyNodeDisconnected(self, node): self.__readonlyNodes.discard(node) self.__connectedNodes.discard(node) self.__raftNextIndex.pop(node, None) self.__raftMatchIndex.pop(node, None) node._destroy() def __onNodeConnected(self, node): self.__connectedNodes.add(node) def __onNodeDisconnected(self, node): 
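        # Connection-loss callback: removing the peer from the connected set
        # makes hasQuorum and the append_entries fan-out treat it as offline
        # until the transport reconnects.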
self.__connectedNodes.discard(node) def __getCurrentLogIndex(self): return self.__raftLog[-1][1] def __getCurrentLogTerm(self): return self.__raftLog[-1][2] def __getPrevLogIndexTerm(self, nextNodeIndex): prevIndex = nextNodeIndex - 1 entries = self.__getEntries(prevIndex, 1) if entries: return prevIndex, entries[0][2] return None, None def __getEntries(self, fromIDx, count=None, maxSizeBytes = None): firstEntryIDx = self.__raftLog[0][1] if fromIDx is None or fromIDx < firstEntryIDx: return [] diff = fromIDx - firstEntryIDx if count is None: result = self.__raftLog[diff:] else: result = self.__raftLog[diff:diff + count] if maxSizeBytes is None: return result totalSize = 0 i = 0 for i, entry in enumerate(result): totalSize += len(entry[0]) if totalSize >= maxSizeBytes: break return result[:i + 1] def _isLeader(self): """ Check if current node has a leader state. WARNING: there could be multiple leaders at the same time! :return: True if leader, False otherwise :rtype: bool """ return self.__raftState == _RAFT_STATE.LEADER def _getLeader(self): """ Returns last known leader. WARNING: this information could be outdated, eg. there could be another leader selected! WARNING: there could be multiple leaders at the same time! :return: the last known leader node. :rtype: Node """ return self.__raftLeader def isReady(self): """Check if current node is initially synced with others and has an actual data. :return: True if ready, False otherwise :rtype: bool """ return self.__onReadyCalled def _isReady(self): return self.isReady() def _getTerm(self): return self.__raftCurrentTerm def _getRaftLogSize(self): return len(self.__raftLog) def __deleteEntriesFrom(self, fromIDx): firstEntryIDx = self.__raftLog[0][1] diff = fromIDx - firstEntryIDx if diff < 0: return self.__raftLog.deleteEntriesFrom(diff) def __deleteEntriesTo(self, toIDx): firstEntryIDx = self.__raftLog[0][1] diff = toIDx - firstEntryIDx if diff < 0: return self.__raftLog.deleteEntriesTo(diff) def __onBecomeLeader(self): self.__raftLeader = self.__selfNode self.__setState(_RAFT_STATE.LEADER) self.__lastResponseTime.clear() for node in self.__otherNodes | self.__readonlyNodes: self.__raftNextIndex[node] = self.__getCurrentLogIndex() + 1 self.__raftMatchIndex[node] = 0 self.__lastResponseTime[node] = monotonicTime() # No-op command after leader election. 
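        # (Raft safety rule: a leader may only count replication of entries
        # from its own term towards commitment, so this no-op lets a fresh
        # leader commit anything left over from earlier terms; membership
        # changes are also held back until __noopIDx has been applied.)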
idx, term = self.__getCurrentLogIndex() + 1, self.__raftCurrentTerm self.__raftLog.add(_bchr(_COMMAND_TYPE.NO_OP), idx, term) self.__noopIDx = idx if not self.__conf.appendEntriesUseBatch: self.__sendAppendEntries() self.__sendAppendEntries() def __setState(self, newState): oldState = self.__raftState self.__raftState = newState callback = self.__conf.onStateChanged if callback is not None and oldState != newState: callback(oldState, newState) def __onLeaderChanged(self): for id in sorted(self.__commandsWaitingReply): self.__commandsWaitingReply[id](None, FAIL_REASON.LEADER_CHANGED) self.__commandsWaitingReply = {} def __sendAppendEntries(self): self.__newAppendEntriesTime = monotonicTime() + self.__conf.appendEntriesPeriod startTime = monotonicTime() batchSizeBytes = self.__conf.appendEntriesBatchSizeBytes for node in self.__otherNodes | self.__readonlyNodes: if node not in self.__connectedNodes: self.__serializer.cancelTransmisstion(node) continue sendSingle = True sendingSerialized = False nextNodeIndex = self.__raftNextIndex[node] while nextNodeIndex <= self.__getCurrentLogIndex() or sendSingle or sendingSerialized: if nextNodeIndex > self.__raftLog[0][1]: prevLogIdx, prevLogTerm = self.__getPrevLogIndexTerm(nextNodeIndex) entries = [] if nextNodeIndex <= self.__getCurrentLogIndex(): entries = self.__getEntries(nextNodeIndex, None, batchSizeBytes) self.__raftNextIndex[node] = entries[-1][1] + 1 if len(entries) == 1 and len(entries[0][0]) >= batchSizeBytes: entry = pickle.dumps(entries[0]) for pos in xrange(0, len(entry), batchSizeBytes): currData = entry[pos:pos + batchSizeBytes] if pos == 0: transmission = 'start' elif pos + batchSizeBytes >= len(entries[0][0]): transmission = 'finish' else: transmission = 'process' message = { 'type': 'append_entries', 'transmission': transmission, 'data': currData, 'term': self.__raftCurrentTerm, 'commit_index': self.__raftCommitIndex, 'prevLogIdx': prevLogIdx, 'prevLogTerm': prevLogTerm, } self.__transport.send(node, message) if node not in self.__connectedNodes: break else: message = { 'type': 'append_entries', 'term': self.__raftCurrentTerm, 'commit_index': self.__raftCommitIndex, 'entries': entries, 'prevLogIdx': prevLogIdx, 'prevLogTerm': prevLogTerm, } self.__transport.send(node, message) if node not in self.__connectedNodes: break else: transmissionData = self.__serializer.getTransmissionData(node) message = { 'type': 'append_entries', 'term': self.__raftCurrentTerm, 'commit_index': self.__raftCommitIndex, 'serialized': transmissionData, } self.__transport.send(node, message) if node not in self.__connectedNodes: break if transmissionData is not None: isLast = transmissionData[2] if isLast: self.__raftNextIndex[node] = self.__raftLog[1][1] + 1 sendingSerialized = False else: sendingSerialized = True else: sendingSerialized = False nextNodeIndex = self.__raftNextIndex[node] sendSingle = False delta = monotonicTime() - startTime if delta > self.__conf.appendEntriesPeriod: break def __connectedToAnyone(self): return len(self.__connectedNodes) > 0 or len(self.__otherNodes) == 0 def _getConf(self): return self.__conf @property def conf(self): return self.__conf def _getEncryptor(self): return self.__encryptor @property def encryptor(self): return self.__encryptor def __changeCluster(self, request): if self.__raftLastApplied < self.__noopIDx: # No-op entry was not commited yet return False if self.__changeClusterIDx is not None: if self.__raftLastApplied >= self.__changeClusterIDx: self.__changeClusterIDx = None # Previous cluster change request 
was not committed yet if self.__changeClusterIDx is not None: return False return self.__doChangeCluster(request) def __setCodeVersion(self, newVersion): self.__enabledCodeVersion = newVersion def __doChangeCluster(self, request, reverse = False): requestType = request[0] requestNodeId = request[1] if len(request) >= 3: requestNode = request[2] if not isinstance(requestNode, Node): # Actually shouldn't be necessary, but better safe than sorry. requestNode = self.__nodeClass(requestNode) else: requestNode = self.__nodeClass(requestNodeId) if requestType == 'add': adding = not reverse elif requestType == 'rem': adding = reverse else: return False if adding: newNode = requestNode # Node already exists in cluster if newNode == self.__selfNode or newNode in self.__otherNodes: return False self.__otherNodes.add(newNode) self.__raftNextIndex[newNode] = self.__getCurrentLogIndex() + 1 self.__raftMatchIndex[newNode] = 0 if self._isLeader(): self.__lastResponseTime[newNode] = monotonicTime() self.__transport.addNode(newNode) return True else: oldNode = requestNode if oldNode == self.__selfNode: return False if oldNode not in self.__otherNodes: return False self.__otherNodes.discard(oldNode) self.__raftNextIndex.pop(oldNode, None) self.__raftMatchIndex.pop(oldNode, None) self.__transport.dropNode(oldNode) return True def __parseChangeClusterRequest(self, command): commandType = ord(command[:1]) if commandType != _COMMAND_TYPE.MEMBERSHIP: return None return pickle.loads(command[1:]) def __tryLogCompaction(self): currTime = monotonicTime() serializeState, serializeID = self.__serializer.checkSerializing() if serializeState == SERIALIZER_STATE.SUCCESS: self.__lastSerializedTime = currTime self.__deleteEntriesTo(serializeID) self.__lastSerializedEntry = serializeID if serializeState == SERIALIZER_STATE.FAILED: logger.warning('Failed to store full dump') if serializeState != SERIALIZER_STATE.NOT_SERIALIZING: return if len(self.__raftLog) <= self.__conf.logCompactionMinEntries and \ currTime - self.__lastSerializedTime <= self.__conf.logCompactionMinTime and \ not self.__forceLogCompaction: return if self.__conf.logCompactionSplit: allNodeIds = sorted([node.id for node in (self.__otherNodes | {self.__selfNode})]) nodesCount = len(allNodeIds) selfIdx = allNodeIds.index(self.__selfNode.id) interval = self.__conf.logCompactionMinTime periodStart = int(currTime / interval) * interval nodeInterval = float(interval) / nodesCount nodeIntervalStart = periodStart + selfIdx * nodeInterval nodeIntervalEnd = nodeIntervalStart + 0.3 * nodeInterval if currTime < nodeIntervalStart or currTime >= nodeIntervalEnd: return self.__forceLogCompaction = False lastAppliedEntries = self.__getEntries(self.__raftLastApplied - 1, 2) if len(lastAppliedEntries) < 2 or lastAppliedEntries[0][1] == self.__lastSerializedEntry: self.__lastSerializedTime = currTime return if self.__conf.serializer is None: selfData = dict([(k, v) for k, v in iteritems(self.__dict__) if k not in self.__properies]) data = selfData if self.__consumers: data = [selfData] for consumer in self.__consumers: data.append(consumer._serialize()) else: data = None cluster = self.__otherNodes | {self.__selfNode} self.__serializer.serialize((data, lastAppliedEntries[1], lastAppliedEntries[0], cluster), lastAppliedEntries[0][1]) def __loadDumpFile(self, clearJournal): try: data = self.__serializer.deserialize() if data[0] is not None: if self.__consumers: selfData = data[0][0] consumersData = data[0][1:] else: selfData = data[0] consumersData = [] for k, v in
iteritems(selfData): self.__dict__[k] = v for i, consumer in enumerate(self.__consumers): consumer._deserialize(consumersData[i]) if clearJournal or \ len(self.__raftLog) < 2 or \ self.__raftLog[0] != data[2] or \ self.__raftLog[1] != data[1]: self.__raftLog.clear() self.__raftLog.add(*data[2]) self.__raftLog.add(*data[1]) self.__raftLastApplied = data[1][1] if self.__conf.dynamicMembershipChange: self.__updateClusterConfiguration([node for node in data[3] if node != self.__selfNode]) self.__onSetCodeVersion(0) except: logger.exception('failed to load full dump') def __updateClusterConfiguration(self, newNodes): # newNodes: list of Node or node ID newNodes = {self.__nodeClass(node) if not isinstance(node, Node) else node for node in newNodes} nodesToRemove = self.__otherNodes - newNodes nodesToAdd = newNodes - self.__otherNodes for node in nodesToRemove: self.__raftNextIndex.pop(node, None) self.__raftMatchIndex.pop(node, None) self.__transport.dropNode(node) self.__otherNodes = newNodes for node in nodesToAdd: self.__transport.addNode(node) self.__raftNextIndex[node] = self.__getCurrentLogIndex() + 1 self.__raftMatchIndex[node] = 0 def __copy_func(f, name): if is_py3: res = types.FunctionType(f.__code__, f.__globals__, name, f.__defaults__, f.__closure__) res.__dict__ = f.__dict__ else: res = types.FunctionType(f.func_code, f.func_globals, name, f.func_defaults, f.func_closure) res.func_dict = f.func_dict return res class AsyncResult(object): def __init__(self): self.result = None self.error = None self.event = threading.Event() def onResult(self, res, err): self.result = res self.error = err self.event.set() def replicated(*decArgs, **decKwargs): """Replicated decorator. Use it to mark the class members that modify the class state. The function will be called asynchronously. The function accepts the following additional parameters (optional): 'callback': callback(result, failReason), failReason - `FAIL_REASON <#pysyncobj.FAIL_REASON>`_. 'sync': True - to block execution and wait for result, False - async call. If callback is passed, 'sync' option is ignored. 'timeout': if 'sync' is enabled, and no result is available for 'timeout' seconds - SyncObjException will be raised. These parameters are reserved and should not be used in kwargs of your replicated method.
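For reference, a minimal end-to-end use of the decorator with the three call styles documented above (the addresses are placeholders, and calls only succeed once the cluster is ready):

```python
from pysyncobj import SyncObj, replicated

class Counter(SyncObj):
    def __init__(self, self_addr, partners):
        super(Counter, self).__init__(self_addr, partners)
        self.__value = 0

    @replicated
    def add(self, delta):
        self.__value += delta
        return self.__value

counter = Counter('127.0.0.1:4321', ['127.0.0.1:4322'])
counter.add(1)                                             # fire-and-forget
counter.add(1, callback=lambda res, err: print(res, err))  # async with callback
value = counter.add(1, sync=True, timeout=5.0)             # blocking; raises SyncObjException on timeout
```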
:param func: arbitrary class member :type func: function :param ver: (optional) - code version (for zero deployment) :type ver: int """ def replicatedImpl(func): def newFunc(self, *args, **kwargs): if kwargs.pop('_doApply', False): return func(self, *args, **kwargs) else: if isinstance(self, SyncObj): applier = self._applyCommand funcName = self._getFuncName(func.__name__) funcID = self._methodToID[funcName] elif isinstance(self, SyncObjConsumer): consumerId = id(self) funcName = self._syncObj._getFuncName((consumerId, func.__name__)) funcID = self._syncObj._methodToID[(consumerId, funcName)] applier = self._syncObj._applyCommand else: raise SyncObjException("Class should be inherited from SyncObj or SyncObjConsumer") callback = kwargs.pop('callback', None) if kwargs: cmd = (funcID, args, kwargs) elif args and not kwargs: cmd = (funcID, args) else: cmd = funcID sync = kwargs.pop('sync', False) if callback is not None: sync = False if sync: asyncResult = AsyncResult() callback = asyncResult.onResult timeout = kwargs.pop('timeout', None) applier(pickle.dumps(cmd), callback, _COMMAND_TYPE.REGULAR) if sync: res = asyncResult.event.wait(timeout) if not res: raise SyncObjException('Timeout') if not asyncResult.error == 0: raise SyncObjException(asyncResult.error) return asyncResult.result func_dict = newFunc.__dict__ if is_py3 else newFunc.func_dict func_dict['replicated'] = True func_dict['ver'] = int(decKwargs.get('ver', 0)) func_dict['origName'] = func.__name__ callframe = sys._getframe(1 if decKwargs else 2) namespace = callframe.f_locals newFuncName = func.__name__ + '_v' + str(func_dict['ver']) namespace[newFuncName] = __copy_func(newFunc, newFuncName) functools.update_wrapper(newFunc, func) return newFunc if len(decArgs) == 1 and len(decKwargs) == 0 and callable(decArgs[0]): return replicatedImpl(decArgs[0]) return replicatedImpl def replicated_sync(*decArgs, **decKwargs): def replicated_sync_impl(func, timeout = None): """Same as replicated, but synchronous by default. :param func: arbitrary class member :type func: function :param timeout: time to wait (seconds). 
Default: None :type timeout: float or None """ def newFunc(self, *args, **kwargs): if kwargs.get('_doApply', False): return replicated(func)(self, *args, **kwargs) else: kwargs.setdefault('timeout', timeout) kwargs.setdefault('sync', True) return replicated(func)(self, *args, **kwargs) func_dict = newFunc.__dict__ if is_py3 else newFunc.func_dict func_dict['replicated'] = True func_dict['ver'] = int(decKwargs.get('ver', 0)) func_dict['origName'] = func.__name__ callframe = sys._getframe(1 if decKwargs else 2) namespace = callframe.f_locals newFuncName = func.__name__ + '_v' + str(func_dict['ver']) namespace[newFuncName] = __copy_func(newFunc, newFuncName) functools.update_wrapper(newFunc, func) return newFunc if len(decArgs) == 1 and len(decKwargs) == 0 and callable(decArgs[0]): return replicated_sync_impl(decArgs[0]) return replicated_sync_impl PySyncObj-0.3.14/pysyncobj/syncobj_admin.py000066400000000000000000000041741475533247400207140ustar00rootroot00000000000000#!/usr/bin/env python import sys, os from argparse import ArgumentParser from .utility import TcpUtility, UtilityException def checkCorrectAddress(address): try: host, port = address.rsplit(':', 1) port = int(port) assert (port > 0 and port < 65536) return True except: return False def executeAdminCommand(args): parser = ArgumentParser() parser.add_argument('-conn', action='store', dest='connection', help='address to connect') parser.add_argument('-pass', action='store', dest='password', help='cluster\'s password') parser.add_argument('-status', action='store_true', help='send command \'status\'') parser.add_argument('-add', action='store', dest='add', help='send command \'add\'') parser.add_argument('-remove', action='store', dest='remove', help='send command \'remove\'') parser.add_argument('-set_version', action='store', dest='version', type=int, help='set cluster code version') data = parser.parse_args(args) if not checkCorrectAddress(data.connection): return 'invalid address to connect' if data.status: message = ['status'] elif data.add: if not checkCorrectAddress(data.add): return 'invalid address to command add' message = ['add', data.add] elif data.remove: if not checkCorrectAddress(data.remove): return 'invalid address to command remove' message = ['remove', data.remove] elif data.version is not None: message = ['set_version', data.version] else: return 'invalid command' util = TcpUtility(data.password) try: result = util.executeCommand(data.connection, message) except UtilityException as e: return str(e) if isinstance(result, str): return result if isinstance(result, dict): return '\n'.join('%s: %s' % (k, v) for k, v in sorted(result.items())) return str(result) def main(args=None): if args is None: args = sys.argv[1:] result = executeAdminCommand(args) sys.stdout.write(result) sys.stdout.write(os.linesep) if __name__ == '__main__': main() PySyncObj-0.3.14/pysyncobj/tcp_connection.py000066400000000000000000000252171475533247400211030ustar00rootroot00000000000000import time import socket from sys import platform import zlib import struct import pysyncobj.pickle as pickle import pysyncobj.win_inet_pton from .poller import POLL_EVENT_TYPE from .monotonic import monotonic as monotonicTime class CONNECTION_STATE: DISCONNECTED = 0 CONNECTING = 1 CONNECTED = 2 def _getAddrType(addr): try: socket.inet_aton(addr) return socket.AF_INET except socket.error: pass try: socket.inet_pton(socket.AF_INET6, addr) return socket.AF_INET6 except socket.error: pass raise Exception('unknown address type') import socket def 
set_keepalive_linux(sock, after_idle_sec=1, interval_sec=3, max_fails=5): sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1) sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, after_idle_sec) sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval_sec) sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, max_fails) def set_keepalive_osx(sock, after_idle_sec=1, interval_sec=3, max_fails=5): TCP_KEEPALIVE = 0x10 sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1) sock.setsockopt(socket.IPPROTO_TCP, TCP_KEEPALIVE, interval_sec) def set_keepalive_windows(sock, after_idle_sec=1, interval_sec=3, max_fails=5): sock.ioctl(socket.SIO_KEEPALIVE_VALS, (1, after_idle_sec * 1000, interval_sec * 1000)) def set_keepalive(sock, after_idle_sec=1, interval_sec=3, max_fails=5): if platform == "linux" or platform == "linux2": set_keepalive_linux(sock, after_idle_sec, interval_sec, max_fails) elif platform == "darwin": set_keepalive_osx(sock, after_idle_sec, interval_sec, max_fails) elif platform == "win32": set_keepalive_windows(sock, after_idle_sec, interval_sec, max_fails) class TcpConnection(object): def __init__(self, poller, onMessageReceived = None, onConnected = None, onDisconnected = None, socket=None, timeout=10.0, sendBufferSize = 2 ** 13, recvBufferSize = 2 ** 13, keepalive=None): self.sendRandKey = None self.recvRandKey = None self.recvLastTimestamp = 0 self.encryptor = None self.__socket = socket self.__readBuffer = bytes() self.__writeBuffer = bytes() self.__lastReadTime = monotonicTime() self.__timeout = timeout self.__poller = poller self.__keepalive = keepalive if socket is not None: self.__socket = socket self.__fileno = socket.fileno() self.__state = CONNECTION_STATE.CONNECTED self.setSockoptKeepalive() self.__poller.subscribe(self.__fileno, self.__processConnection, POLL_EVENT_TYPE.READ | POLL_EVENT_TYPE.WRITE | POLL_EVENT_TYPE.ERROR) else: self.__state = CONNECTION_STATE.DISCONNECTED self.__fileno = None self.__socket = None self.__onMessageReceived = onMessageReceived self.__onConnected = onConnected self.__onDisconnected = onDisconnected self.__sendBufferSize = sendBufferSize self.__recvBufferSize = recvBufferSize def setSockoptKeepalive(self): if self.__socket is None: return if self.__keepalive is None: return set_keepalive( self.__socket, self.__keepalive[0], self.__keepalive[1], self.__keepalive[2], ) def setOnConnectedCallback(self, onConnected): self.__onConnected = onConnected def setOnMessageReceivedCallback(self, onMessageReceived): self.__onMessageReceived = onMessageReceived def setOnDisconnectedCallback(self, onDisconnected): self.__onDisconnected = onDisconnected def connect(self, host, port): if host is None: return False self.__state = CONNECTION_STATE.DISCONNECTED self.__fileno = None self.__socket = socket.socket(_getAddrType(host), socket.SOCK_STREAM) self.__socket.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, self.__sendBufferSize) self.__socket.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, self.__recvBufferSize) self.__socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) self.setSockoptKeepalive() self.__socket.setblocking(0) self.__readBuffer = bytes() self.__writeBuffer = bytes() self.__lastReadTime = monotonicTime() try: self.__socket.connect((host, port)) except socket.error as e: if e.errno not in (socket.errno.EINPROGRESS, socket.errno.EWOULDBLOCK): return False self.__fileno = self.__socket.fileno() self.__state = CONNECTION_STATE.CONNECTING self.__poller.subscribe(self.__fileno, self.__processConnection, 
POLL_EVENT_TYPE.READ | POLL_EVENT_TYPE.WRITE | POLL_EVENT_TYPE.ERROR) return True def send(self, message): if self.sendRandKey: message = (self.sendRandKey, message) data = zlib.compress(pickle.dumps(message), 3) if self.encryptor: data = self.encryptor.encrypt_at_time(data, int(monotonicTime())) data = struct.pack('i', len(data)) + data self.__writeBuffer += data self.__trySendBuffer() def fileno(self): return self.__fileno def disconnect(self): needCallDisconnect = False if self.__onDisconnected is not None and self.__state != CONNECTION_STATE.DISCONNECTED: needCallDisconnect = True self.sendRandKey = None self.recvRandKey = None self.recvLastTimestamp = 0 if self.__socket is not None: self.__socket.close() self.__socket = None if self.__fileno is not None: self.__poller.unsubscribe(self.__fileno) self.__fileno = None self.__writeBuffer = bytes() self.__readBuffer = bytes() self.__state = CONNECTION_STATE.DISCONNECTED if needCallDisconnect: self.__onDisconnected() def getSendBufferSize(self): return len(self.__writeBuffer) def __processConnection(self, descr, eventType): poller = self.__poller if descr != self.__fileno: poller.unsubscribe(descr) return if eventType & POLL_EVENT_TYPE.ERROR: self.disconnect() return self.__processConnectionTimeout() if self.state == CONNECTION_STATE.DISCONNECTED: return if eventType & POLL_EVENT_TYPE.READ or eventType & POLL_EVENT_TYPE.WRITE: if self.__socket.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR): self.disconnect() return if self.__state == CONNECTION_STATE.CONNECTING: if self.__onConnected is not None: self.__onConnected() if self.__state == CONNECTION_STATE.DISCONNECTED: return self.__state = CONNECTION_STATE.CONNECTED self.__lastReadTime = monotonicTime() return if eventType & POLL_EVENT_TYPE.WRITE: self.__trySendBuffer() if self.__state == CONNECTION_STATE.DISCONNECTED: return event = POLL_EVENT_TYPE.READ | POLL_EVENT_TYPE.ERROR if len(self.__writeBuffer) > 0: event |= POLL_EVENT_TYPE.WRITE poller.subscribe(descr, self.__processConnection, event) if eventType & POLL_EVENT_TYPE.READ: self.__tryReadBuffer() if self.__state == CONNECTION_STATE.DISCONNECTED: return while True: message = self.__processParseMessage() if message is None: break if self.__onMessageReceived is not None: self.__onMessageReceived(message) if self.__state == CONNECTION_STATE.DISCONNECTED: return def __processConnectionTimeout(self): if monotonicTime() - self.__lastReadTime > self.__timeout: self.disconnect() return def __trySendBuffer(self): self.__processConnectionTimeout() if self.state == CONNECTION_STATE.DISCONNECTED: return while self.__processSend(): pass def __processSend(self): if not self.__writeBuffer: return False try: res = self.__socket.send(self.__writeBuffer) if res < 0: self.disconnect() return False if res == 0: return False self.__writeBuffer = self.__writeBuffer[res:] return True except socket.error as e: if e.errno not in (socket.errno.EAGAIN, socket.errno.EWOULDBLOCK): self.disconnect() return False def __tryReadBuffer(self): while self.__processRead(): pass self.__lastReadTime = monotonicTime() def __processRead(self): try: incoming = self.__socket.recv(self.__recvBufferSize) except socket.error as e: if e.errno not in (socket.errno.EAGAIN, socket.errno.EWOULDBLOCK): self.disconnect() return False if self.__socket.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR): self.disconnect() return False if not incoming: self.disconnect() return False self.__readBuffer += incoming return True def __processParseMessage(self): if len(self.__readBuffer) < 4: return 
None l = struct.unpack('i', self.__readBuffer[:4])[0] if len(self.__readBuffer) - 4 < l: return None data = self.__readBuffer[4:4 + l] try: if self.encryptor: dataTimestamp = self.encryptor.extract_timestamp(data) assert dataTimestamp >= self.recvLastTimestamp self.recvLastTimestamp = dataTimestamp # Unfortunately we can't get a timestamp and data in one go data = self.encryptor.decrypt(data) message = pickle.loads(zlib.decompress(data)) if self.recvRandKey: randKey, message = message assert randKey == self.recvRandKey except: # Why no logging of security errors? self.disconnect() return None self.__readBuffer = self.__readBuffer[4 + l:] return message @property def state(self): return self.__state PySyncObj-0.3.14/pysyncobj/tcp_server.py000066400000000000000000000060561475533247400202520ustar00rootroot00000000000000import socket from .poller import POLL_EVENT_TYPE from .tcp_connection import TcpConnection, _getAddrType class SERVER_STATE: UNBINDED = 0, BINDED = 1 class TcpServer(object): def __init__( self, poller, host, port, onNewConnection, sendBufferSize = 2 ** 13, recvBufferSize = 2 ** 13, connectionTimeout = 3.5, keepalive = None, ): self.__poller = poller self.__host = host self.__port = int(port) self.__hostAddrType = _getAddrType(host) self.__sendBufferSize = sendBufferSize self.__recvBufferSize = recvBufferSize self.__socket = None self.__fileno = None self.__keepalive = keepalive self.__state = SERVER_STATE.UNBINDED self.__onNewConnectionCallback = onNewConnection self.__connectionTimeout = connectionTimeout def bind(self): self.__socket = socket.socket(self.__hostAddrType, socket.SOCK_STREAM) self.__socket.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, self.__sendBufferSize) self.__socket.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, self.__recvBufferSize) self.__socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) self.__socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) self.__socket.setblocking(0) self.__socket.bind((self.__host, self.__port)) self.__socket.listen(5) self.__fileno = self.__socket.fileno() self.__poller.subscribe(self.__fileno, self.__onNewConnection, POLL_EVENT_TYPE.READ | POLL_EVENT_TYPE.ERROR) self.__state = SERVER_STATE.BINDED def unbind(self): self.__state = SERVER_STATE.UNBINDED if self.__fileno is not None: self.__poller.unsubscribe(self.__fileno) self.__fileno = None if self.__socket is not None: self.__socket.close() def __onNewConnection(self, descr, event): if event & POLL_EVENT_TYPE.READ: try: sock, addr = self.__socket.accept() sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, self.__sendBufferSize) sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, self.__recvBufferSize) sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) sock.setblocking(0) conn = TcpConnection( poller=self.__poller, socket=sock, timeout=self.__connectionTimeout, sendBufferSize=self.__sendBufferSize, recvBufferSize=self.__recvBufferSize, keepalive=self.__keepalive, ) self.__onNewConnectionCallback(conn) except socket.error as e: if e.errno not in (socket.errno.EAGAIN, socket.errno.EWOULDBLOCK): self.unbind() return if event & POLL_EVENT_TYPE.ERROR: self.unbind() return PySyncObj-0.3.14/pysyncobj/transport.py000066400000000000000000000517761475533247400201430ustar00rootroot00000000000000from .config import FAIL_REASON from .dns_resolver import globalDnsResolver from .monotonic import monotonic as monotonicTime from .node import Node, TCPNode from .tcp_connection import TcpConnection, CONNECTION_STATE from .tcp_server import TcpServer import 
functools import os import threading import time import random class TransportNotReadyError(Exception): """Transport failed to get ready for operation.""" class Transport(object): """Base class for implementing a transport between PySyncObj nodes""" def __init__(self, syncObj, selfNode, otherNodes): """ Initialise the transport :param syncObj: SyncObj :type syncObj: SyncObj :param selfNode: current server node, or None if this is a read-only node :type selfNode: Node or None :param otherNodes: partner nodes :type otherNodes: list of Node """ self._onMessageReceivedCallback = None self._onNodeConnectedCallback = None self._onNodeDisconnectedCallback = None self._onReadonlyNodeConnectedCallback = None self._onReadonlyNodeDisconnectedCallback = None self._onUtilityMessageCallbacks = {} def setOnMessageReceivedCallback(self, callback): """ Set the callback for when a message is received, or disable callback by passing None :param callback callback :type callback function(node: Node, message: any) or None """ self._onMessageReceivedCallback = callback def setOnNodeConnectedCallback(self, callback): """ Set the callback for when the connection to a (non-read-only) node is established, or disable callback by passing None :param callback callback :type callback function(node: Node) or None """ self._onNodeConnectedCallback = callback def setOnNodeDisconnectedCallback(self, callback): """ Set the callback for when the connection to a (non-read-only) node is terminated or is considered dead, or disable callback by passing None :param callback callback :type callback function(node: Node) or None """ self._onNodeDisconnectedCallback = callback def setOnReadonlyNodeConnectedCallback(self, callback): """ Set the callback for when a read-only node connects, or disable callback by passing None :param callback callback :type callback function(node: Node) or None """ self._onReadonlyNodeConnectedCallback = callback def setOnReadonlyNodeDisconnectedCallback(self, callback): """ Set the callback for when a read-only node disconnects (or the connection is lost), or disable callback by passing None :param callback callback :type callback function(node: Node) or None """ self._onReadonlyNodeDisconnectedCallback = callback def setOnUtilityMessageCallback(self, message, callback): """ Set the callback for when an utility message is received, or disable callback by passing None :param message: the utility message string (add, remove, set_version, and so on) :type message: str :param callback: callback :type callback: function(message: list, callback: function) or None """ if callback: self._onUtilityMessageCallbacks[message] = callback elif message in self._onUtilityMessageCallbacks: del self._onUtilityMessageCallbacks[message] # Helper functions so you don't need to check for the callbacks manually in subclasses def _onMessageReceived(self, node, message): if self._onMessageReceivedCallback is not None: self._onMessageReceivedCallback(node, message) def _onNodeConnected(self, node): if self._onNodeConnectedCallback is not None: self._onNodeConnectedCallback(node) def _onNodeDisconnected(self, node): if self._onNodeDisconnectedCallback is not None: self._onNodeDisconnectedCallback(node) def _onReadonlyNodeConnected(self, node): if self._onReadonlyNodeConnectedCallback is not None: self._onReadonlyNodeConnectedCallback(node) def _onReadonlyNodeDisconnected(self, node): if self._onReadonlyNodeDisconnectedCallback is not None: self._onReadonlyNodeDisconnectedCallback(node) def tryGetReady(self): """ Try to get the 
transport ready for operation. This may for example mean binding a server to a port. :raises TransportNotReadyError: if the transport fails to get ready for operation """ @property def ready(self): """ Whether the transport is ready for operation. :rtype bool """ return True def waitReady(self): """ Wait for the transport to be ready. :raises TransportNotReadyError: if the transport fails to get ready for operation """ def addNode(self, node): """ Add a node to the network :param node node to add :type node Node """ def dropNode(self, node): """ Remove a node from the network (meaning connections, buffers, etc. related to this node can be dropped) :param node node to drop :type node Node """ def send(self, node, message): """ Send a message to a node. The message should be picklable. The return value signifies whether the message is thought to have been sent successfully. It does not necessarily mean that the message actually arrived at the node. :param node target node :type node Node :param message message :type message any :returns success :rtype bool """ raise NotImplementedError def destroy(self): """ Destroy the transport """ class TCPTransport(Transport): def __init__(self, syncObj, selfNode, otherNodes): """ Initialise the TCP transport. On normal (non-read-only) nodes, this will start a TCP server. On all nodes, it will initiate relevant connections to other nodes. :param syncObj: SyncObj :type syncObj: SyncObj :param selfNode: current node (None if this is a read-only node) :type selfNode: TCPNode or None :param otherNodes: partner nodes :type otherNodes: iterable of TCPNode """ super(TCPTransport, self).__init__(syncObj, selfNode, otherNodes) self._syncObj = syncObj self._server = None self._connections = {} # Node object -> TcpConnection object self._unknownConnections = set() # set of TcpConnection objects self._selfNode = selfNode self._selfIsReadonlyNode = selfNode is None self._nodes = set() # set of TCPNode self._readonlyNodes = set() # set of Node self._nodeAddrToNode = {} # node ID/address -> TCPNode (does not include read-only nodes) self._lastConnectAttempt = {} # TPCNode -> float (seconds since epoch) self._preventConnectNodes = set() # set of TCPNode to which no (re)connection should be triggered on _connectIfNecessary; used via dropNode and destroy to cleanly remove a node self._readonlyNodesCounter = 0 self._lastBindAttemptTime = 0 self._bindAttempts = 0 self._bindOverEvent = threading.Event() # gets triggered either when the server has either been bound correctly or when the number of bind attempts exceeds the config value maxBindRetries self._ready = False self._send_random_sleep_duration = 0 self._syncObj.addOnTickCallback(self._onTick) for node in otherNodes: self.addNode(node) if not self._selfIsReadonlyNode: self._createServer() else: self._ready = True def _connToNode(self, conn): """ Find the node to which a connection belongs. :param conn: connection object :type conn: TcpConnection :returns corresponding node or None if the node cannot be found :rtype Node or None """ for node in self._connections: if self._connections[node] is conn: return node return None def tryGetReady(self): """ Try to bind the server if necessary. 
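`_maybeBind` below rate-limits bind attempts and gives up after `maxBindRetries` failures, signalling waiters via an event. A standalone sketch of that retry pattern (illustrative names, not the transport's API):

```python
import time

class Binder(object):
    """Retry a bind at most once per retry_period; give up after max_retries."""
    def __init__(self, bind_fn, retry_period=1.0, max_retries=5):
        self.bind_fn = bind_fn
        self.retry_period = retry_period
        self.max_retries = max_retries
        self.last_attempt = 0.0
        self.attempts = 0
        self.ready = False

    def maybe_bind(self):
        now = time.monotonic()
        if self.ready or now < self.last_attempt + self.retry_period:
            return  # already bound, or too soon to retry
        self.last_attempt = now
        try:
            self.bind_fn()
        except OSError:
            self.attempts += 1
            if self.attempts >= self.max_retries:
                raise RuntimeError('transport not ready')  # cf. TransportNotReadyError
        else:
            self.ready = True
```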
:raises TransportNotReadyError if the server could not be bound """ self._maybeBind() @property def ready(self): return self._ready def _createServer(self): """ Create the TCP server (but don't bind yet) """ conf = self._syncObj.conf bindAddr = conf.bindAddress seflAddr = getattr(self._selfNode, 'address') if bindAddr is not None: host, port = bindAddr.rsplit(':', 1) elif seflAddr is not None: host, port = seflAddr.rsplit(':', 1) if ':' in host: host = '::' else: host = '0.0.0.0' else: raise RuntimeError('Unable to determine bind address') if host != '0.0.0.0': host = globalDnsResolver().resolve(host) self._server = TcpServer(self._syncObj._poller, host, port, onNewConnection = self._onNewIncomingConnection, sendBufferSize = conf.sendBufferSize, recvBufferSize = conf.recvBufferSize, connectionTimeout = conf.connectionTimeout) def _maybeBind(self): """ Bind the server unless it is already bound, this is a read-only node, or the last attempt was too recently. :raises TransportNotReadyError if the bind attempt fails """ if self._ready or self._selfIsReadonlyNode or monotonicTime() < self._lastBindAttemptTime + self._syncObj.conf.bindRetryTime: return self._lastBindAttemptTime = monotonicTime() try: self._server.bind() except Exception as e: self._bindAttempts += 1 if self._syncObj.conf.maxBindRetries and self._bindAttempts >= self._syncObj.conf.maxBindRetries: self._bindOverEvent.set() raise TransportNotReadyError else: self._ready = True self._bindOverEvent.set() def _onTick(self): """ Tick callback. Binds the server and connects to other nodes as necessary. """ try: self._maybeBind() except TransportNotReadyError: pass self._connectIfNecessary() def _onNewIncomingConnection(self, conn): """ Callback for connections initiated by the other side :param conn: connection object :type conn: TcpConnection """ self._unknownConnections.add(conn) encryptor = self._syncObj.encryptor if encryptor: conn.encryptor = encryptor conn.setOnMessageReceivedCallback(functools.partial(self._onIncomingMessageReceived, conn)) conn.setOnDisconnectedCallback(functools.partial(self._onDisconnected, conn)) def _onIncomingMessageReceived(self, conn, message): """ Callback for initial messages on incoming connections. Handles encryption, utility messages, and association of the connection with a Node. Once this initial setup is done, the relevant connected callback is executed, and further messages are deferred to the onMessageReceived callback. :param conn: connection object :type conn: TcpConnection :param message: received message :type message: any """ if self._syncObj.encryptor and not conn.sendRandKey: conn.sendRandKey = message conn.recvRandKey = os.urandom(32) conn.send(conn.recvRandKey) return # Utility messages if isinstance(message, list) and self._onUtilityMessage(conn, message): return # At this point, message should be either a node ID (i.e. 
address) or 'readonly' node = self._nodeAddrToNode[message] if message in self._nodeAddrToNode else None if node is None and message != 'readonly': conn.disconnect() self._unknownConnections.discard(conn) return readonly = node is None if readonly: nodeId = str(self._readonlyNodesCounter) node = Node(nodeId) self._readonlyNodes.add(node) self._readonlyNodesCounter += 1 self._unknownConnections.discard(conn) self._connections[node] = conn conn.setOnMessageReceivedCallback(functools.partial(self._onMessageReceived, node)) if not readonly: self._onNodeConnected(node) else: self._onReadonlyNodeConnected(node) def _onUtilityMessage(self, conn, message): command = message[0] if command in self._onUtilityMessageCallbacks: message[0] = command.upper() callback = functools.partial(self._utilityCallback, conn = conn, args = message) try: self._onUtilityMessageCallbacks[command](message[1:], callback) except Exception as e: conn.send(str(e)) return True def _utilityCallback(self, res, err, conn, args): """ Callback for the utility messages :param res: result of the command :param err: error code (one of pysyncobj.config.FAIL_REASON) :param conn: utility connection :param args: command with arguments """ if not (err is None and res): cmdResult = 'SUCCESS' if err == FAIL_REASON.SUCCESS else 'FAIL' res = ' '.join(map(str, [cmdResult] + args)) conn.send(res) def _shouldConnect(self, node): """ Check whether this node should initiate a connection to another node :param node: the other node :type node: Node """ return isinstance(node, TCPNode) and node not in self._preventConnectNodes and (self._selfIsReadonlyNode or self._selfNode.address > node.address) def _connectIfNecessarySingle(self, node): """ Connect to a node if necessary. :param node: node to connect to :type node: Node """ if node in self._connections and self._connections[node].state != CONNECTION_STATE.DISCONNECTED: return True if not self._shouldConnect(node): return False assert node in self._connections # Since we "should connect" to this node, there should always be a connection object already in place. if node in self._lastConnectAttempt and monotonicTime() - self._lastConnectAttempt[node] < self._syncObj.conf.connectionRetryTime: return False self._lastConnectAttempt[node] = monotonicTime() return self._connections[node].connect(node.ip, node.port) def _connectIfNecessary(self): """ Connect to all nodes as necessary. """ for node in self._nodes: self._connectIfNecessarySingle(node) def _sendSelfAddress(self, conn): if self._selfIsReadonlyNode: conn.send('readonly') else: conn.send(self._selfNode.address) def _onOutgoingConnected(self, conn): """ Callback for when a new connection from this to another node is established. Handles encryption and informs the other node which node this is. If encryption is disabled, this triggers the onNodeConnected callback and messages are deferred to the onMessageReceived callback. If encryption is enabled, the first message is handled by _onOutgoingMessageReceived. :param conn: connection object :type conn: TcpConnection """ if self._syncObj.encryptor: conn.setOnMessageReceivedCallback(functools.partial(self._onOutgoingMessageReceived, conn)) # So we can process the sendRandKey conn.recvRandKey = os.urandom(32) conn.send(conn.recvRandKey) else: self._sendSelfAddress(conn) # The onMessageReceived callback is configured in addNode already. self._onNodeConnected(self._connToNode(conn)) def _onOutgoingMessageReceived(self, conn, message): """ Callback for receiving a message on a new outgoing connection. 
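Note the tie-break in `_shouldConnect` earlier in this file: between two full cluster members, only the node whose own address compares greater dials out, so every pair ends up with exactly one TCP connection instead of two. A quick check of that invariant (toy addresses):

```python
nodes = ['10.0.0.1:4321', '10.0.0.2:4321', '10.0.0.3:4321']
# A node dials a peer only if its own address compares greater.
dials = {(a, b) for a in nodes for b in nodes if a != b and a > b}
for a in nodes:
    for b in nodes:
        if a != b:
            # exactly one direction is chosen for every pair
            assert ((a, b) in dials) != ((b, a) in dials)
```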
Used only if encryption is enabled to exchange the random keys. Once the key exchange is done, this triggers the onNodeConnected callback, and further messages are deferred to the onMessageReceived callback. :param conn: connection object :type conn: TcpConnection :param message: received message :type message: any """ if not conn.sendRandKey: conn.sendRandKey = message self._sendSelfAddress(conn) node = self._connToNode(conn) conn.setOnMessageReceivedCallback(functools.partial(self._onMessageReceived, node)) self._onNodeConnected(node) def _onDisconnected(self, conn): """ Callback for when a connection is terminated or considered dead. Initiates a reconnect if necessary. :param conn: connection object :type conn: TcpConnection """ self._unknownConnections.discard(conn) node = self._connToNode(conn) if node is not None: if node in self._nodes: self._onNodeDisconnected(node) self._connectIfNecessarySingle(node) else: self._readonlyNodes.discard(node) self._onReadonlyNodeDisconnected(node) def waitReady(self): """ Wait for the TCP transport to become ready for operation, i.e. the server to be bound. This method should be called from a different thread than used for the SyncObj ticks. :raises TransportNotReadyError: if the number of bind tries exceeds the configured limit """ self._bindOverEvent.wait() if not self._ready: raise TransportNotReadyError def addNode(self, node): """ Add a node to the network :param node: node to add :type node: TCPNode """ self._nodes.add(node) self._nodeAddrToNode[node.address] = node if self._shouldConnect(node): conn = TcpConnection( poller = self._syncObj._poller, timeout = self._syncObj.conf.connectionTimeout, sendBufferSize = self._syncObj.conf.sendBufferSize, recvBufferSize = self._syncObj.conf.recvBufferSize, keepalive = self._syncObj.conf.tcp_keepalive, ) conn.encryptor = self._syncObj.encryptor conn.setOnConnectedCallback(functools.partial(self._onOutgoingConnected, conn)) conn.setOnMessageReceivedCallback(functools.partial(self._onMessageReceived, node)) conn.setOnDisconnectedCallback(functools.partial(self._onDisconnected, conn)) self._connections[node] = conn def dropNode(self, node): """ Drop a node from the network :param node: node to drop :type node: Node """ conn = self._connections.pop(node, None) if conn is not None: # Calling conn.disconnect() immediately triggers the onDisconnected callback if the connection isn't already disconnected, so this is necessary to prevent the automatic reconnect. self._preventConnectNodes.add(node) conn.disconnect() self._preventConnectNodes.remove(node) if isinstance(node, TCPNode): self._nodes.discard(node) self._nodeAddrToNode.pop(node.address, None) else: self._readonlyNodes.discard(node) self._lastConnectAttempt.pop(node, None) def send(self, node, message): """ Send a message to a node. Returns False if the connection appears to be dead either before or after actually trying to send the message. 
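On the wire, `send` defers to `TcpConnection.send` shown earlier: each message becomes a 4-byte length prefix followed by a zlib-compressed pickle (plus optional encryption). A round-trip of that framing using the standard library (the library itself uses its own `pysyncobj.pickle` wrapper):

```python
import pickle
import struct
import zlib

def frame(message):
    data = zlib.compress(pickle.dumps(message), 3)
    return struct.pack('i', len(data)) + data  # 4-byte native-order length prefix

def unframe(buf):
    if len(buf) < 4:
        return None, buf  # need more bytes
    (length,) = struct.unpack('i', buf[:4])
    if len(buf) - 4 < length:
        return None, buf  # message not fully buffered yet
    message = pickle.loads(zlib.decompress(buf[4:4 + length]))
    return message, buf[4 + length:]

buf = frame({'type': 'append_entries', 'term': 3})
msg, rest = unframe(buf)
assert msg == {'type': 'append_entries', 'term': 3} and rest == b''
```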
:param node: target node :type node: Node :param message: message :param message: any :returns success :rtype bool """ if node not in self._connections or self._connections[node].state != CONNECTION_STATE.CONNECTED: return False if self._send_random_sleep_duration: time.sleep(random.random() * self._send_random_sleep_duration) self._connections[node].send(message) if self._connections[node].state != CONNECTION_STATE.CONNECTED: return False return True def destroy(self): """ Destroy this transport """ self.setOnMessageReceivedCallback(None) self.setOnNodeConnectedCallback(None) self.setOnNodeDisconnectedCallback(None) self.setOnReadonlyNodeConnectedCallback(None) self.setOnReadonlyNodeDisconnectedCallback(None) for node in self._nodes | self._readonlyNodes: self.dropNode(node) if self._server is not None: self._server.unbind() for conn in list(self._unknownConnections): conn.disconnect() self._unknownConnections = set() PySyncObj-0.3.14/pysyncobj/utility.py000066400000000000000000000060721475533247400175770ustar00rootroot00000000000000import os import time from .encryptor import getEncryptor from .node import Node, TCPNode from .poller import createPoller from .tcp_connection import TcpConnection class UtilityException(Exception): pass class Utility(object): def __init__(self, password=None, timeout=900.0): """ Initialise the utility object :param password: password for encryption :type password: str or None :param timeout: communication timeout :type timeout: float """ def executeCommand(self, node, command): """ Executes command on the given node. :param node: where to execute the command :type node: Node or str :param command: the command which should be sent :type command: list :returns: result :rtype: any object :raises: UtilityException in case of error """ class TcpUtility(Utility): def __init__(self, password=None, timeout=900.0): self.__timeout = timeout self.__poller = createPoller('auto') self.__connection = TcpConnection(self.__poller, onDisconnected=self.__onDisconnected, onMessageReceived=self.__onMessageReceived, onConnected=self.__onConnected, timeout=timeout) if password is not None: self.__connection.encryptor = getEncryptor(password) self.__result = None self.__error = None def executeCommand(self, node, command): self.__result = None self.__error = None if not isinstance(node, Node): try: node = TCPNode(node) except Exception: self.__error = 'invalid address to connect' return self.__isConnected = self.__connection.connect(node.ip, node.port) if not self.__isConnected: self.__error = "can't connected" return deadline = time.time() + self.__timeout self.__data = command while self.__isConnected: self.__poller.poll(0.5) if time.time() > deadline: self.__connection.disconnect() if self.__result is None: raise UtilityException(self.__error) return self.__result def __onMessageReceived(self, message): if self.__connection.encryptor and not self.__connection.sendRandKey: self.__connection.sendRandKey = message self.__connection.send(self.__data) return self.__result = message self.__connection.disconnect() def __onDisconnected(self): self.__isConnected = False if self.__result is None: self.__error = 'connection lost' def __onConnected(self): if self.__connection.encryptor: self.__connection.recvRandKey = os.urandom(32) self.__connection.send(self.__connection.recvRandKey) return self.__connection.send(self.__data) PySyncObj-0.3.14/pysyncobj/version.py000066400000000000000000000000231475533247400175470ustar00rootroot00000000000000VERSION = '0.3.14' 
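The `TcpUtility` above is what the `syncobj_admin` CLI uses under the hood, so the same commands can be issued programmatically (example address; `executeCommand` raises `UtilityException` on connection loss or timeout):

```python
from pysyncobj.utility import TcpUtility, UtilityException

util = TcpUtility(password=None, timeout=10.0)
try:
    # equivalent to: syncobj_admin -conn 127.0.0.1:4321 -status
    status = util.executeCommand('127.0.0.1:4321', ['status'])
    print(status)
except UtilityException as e:
    print('admin command failed:', e)
```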
PySyncObj-0.3.14/pysyncobj/win_inet_pton.py000066400000000000000000000052751475533247400207540ustar00rootroot00000000000000# This software released into the public domain. Anyone is free to copy, # modify, publish, use, compile, sell, or distribute this software, # either in source code form or as a compiled binary, for any purpose, # commercial or non-commercial, and by any means. import socket import ctypes import os class sockaddr(ctypes.Structure): _fields_ = [("sa_family", ctypes.c_short), ("__pad1", ctypes.c_ushort), ("ipv4_addr", ctypes.c_byte * 4), ("ipv6_addr", ctypes.c_byte * 16), ("__pad2", ctypes.c_ulong)] if hasattr(ctypes, 'windll'): WSAStringToAddressA = ctypes.windll.ws2_32.WSAStringToAddressA WSAAddressToStringA = ctypes.windll.ws2_32.WSAAddressToStringA else: def not_windows(): raise SystemError( "Invalid platform. ctypes.windll must be available." ) WSAStringToAddressA = not_windows WSAAddressToStringA = not_windows def inet_pton(address_family, ip_string): addr = sockaddr() addr.sa_family = address_family addr_size = ctypes.c_int(ctypes.sizeof(addr)) if WSAStringToAddressA( ip_string, address_family, None, ctypes.byref(addr), ctypes.byref(addr_size) ) != 0: raise socket.error(ctypes.FormatError()) if address_family == socket.AF_INET: return ctypes.string_at(addr.ipv4_addr, 4) if address_family == socket.AF_INET6: return ctypes.string_at(addr.ipv6_addr, 16) raise socket.error('unknown address family') def inet_ntop(address_family, packed_ip): addr = sockaddr() addr.sa_family = address_family addr_size = ctypes.c_int(ctypes.sizeof(addr)) ip_string = ctypes.create_string_buffer(128) ip_string_size = ctypes.c_int(ctypes.sizeof(ip_string)) if address_family == socket.AF_INET: if len(packed_ip) != ctypes.sizeof(addr.ipv4_addr): raise socket.error('packed IP wrong length for inet_ntoa') ctypes.memmove(addr.ipv4_addr, packed_ip, 4) elif address_family == socket.AF_INET6: if len(packed_ip) != ctypes.sizeof(addr.ipv6_addr): raise socket.error('packed IP wrong length for inet_ntoa') ctypes.memmove(addr.ipv6_addr, packed_ip, 16) else: raise socket.error('unknown address family') if WSAAddressToStringA( ctypes.byref(addr), addr_size, None, ip_string, ctypes.byref(ip_string_size) ) != 0: raise socket.error(ctypes.FormatError()) return ip_string[:ip_string_size.value - 1] # Adding our two functions to the socket library if os.name == 'nt': socket.inet_pton = inet_pton socket.inet_ntop = inet_ntop PySyncObj-0.3.14/setup.cfg000066400000000000000000000000501475533247400153110ustar00rootroot00000000000000[metadata] description-file = README.md PySyncObj-0.3.14/setup.py000066400000000000000000000024651475533247400152160ustar00rootroot00000000000000from setuptools import setup from pysyncobj.version import VERSION description='A library for replicating your python class between multiple servers, based on raft protocol' try: import pypandoc long_description = pypandoc.convert('README.md', 'rst') except(IOError, ImportError, RuntimeError): long_description = description setup( name='pysyncobj', packages=['pysyncobj'], version=VERSION, description=description, long_description=long_description, author='Filipp Ozinov', author_email='fippo@mail.ru', license='MIT', url='https://github.com/bakwc/PySyncObj', download_url='https://github.com/bakwc/PySyncObj/tarball/' + VERSION, keywords=['network', 'replication', 'raft', 'synchronization'], classifiers=[ 'Topic :: System :: Networking', 'Topic :: System :: Distributed Computing', 'Intended Audience :: Developers', 'Programming Language :: Python :: 
2.7', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', 'Operating System :: POSIX :: Linux', 'Operating System :: MacOS :: MacOS X', 'License :: OSI Approved :: MIT License', ], entry_points={ 'console_scripts': [ 'syncobj_admin=pysyncobj.syncobj_admin:main', ], }, ) PySyncObj-0.3.14/syncobj_admin.py000077500000000000000000000001471475533247400166730ustar00rootroot00000000000000#!/usr/bin/env python from pysyncobj.syncobj_admin import main if __name__ == '__main__': main() PySyncObj-0.3.14/test_syncobj.py000077500000000000000000002003721475533247400165640ustar00rootroot00000000000000from __future__ import print_function import os import time import pytest import random import threading import sys import pysyncobj.pickle as pickle import pysyncobj.dns_resolver as dns_resolver import platform if sys.version_info >= (3, 0): xrange = range from functools import partial import functools import struct import logging from pysyncobj import SyncObj, SyncObjConf, replicated, FAIL_REASON, _COMMAND_TYPE, \ createJournal, HAS_CRYPTO, replicated_sync, SyncObjException, SyncObjConsumer, _RAFT_STATE from pysyncobj.syncobj_admin import executeAdminCommand from pysyncobj.batteries import ReplCounter, ReplList, ReplDict, ReplSet, ReplLockManager, ReplQueue, ReplPriorityQueue from pysyncobj.node import TCPNode from collections import defaultdict logging.basicConfig(format=u'[%(asctime)s %(filename)s:%(lineno)d %(levelname)s] %(message)s', level=logging.DEBUG) _bchr = functools.partial(struct.pack, 'B') class TEST_TYPE: DEFAULT = 0 COMPACTION_1 = 1 COMPACTION_2 = 2 RAND_1 = 3 JOURNAL_1 = 4 AUTO_TICK_1 = 5 WAIT_BIND = 6 LARGE_COMMAND = 7 class TestObj(SyncObj): def __init__(self, selfNodeAddr, otherNodeAddrs, testType=TEST_TYPE.DEFAULT, compactionMinEntries=0, dumpFile=None, journalFile=None, password=None, dynamicMembershipChange=False, useFork=True, testBindAddr=False, consumers=None, onStateChanged=None, leaderFallbackTimeout=None): cfg = SyncObjConf(autoTick=False, appendEntriesUseBatch=False) cfg.appendEntriesPeriod = 0.1 cfg.raftMinTimeout = 0.5 cfg.raftMaxTimeout = 1.0 cfg.dynamicMembershipChange = dynamicMembershipChange cfg.onStateChanged = onStateChanged if leaderFallbackTimeout is not None: cfg.leaderFallbackTimeout = leaderFallbackTimeout if testBindAddr: cfg.bindAddress = selfNodeAddr if dumpFile is not None: cfg.fullDumpFile = dumpFile if password is not None: cfg.password = password cfg.useFork = useFork if testType == TEST_TYPE.COMPACTION_1: cfg.logCompactionMinEntries = compactionMinEntries cfg.logCompactionMinTime = 0.1 cfg.appendEntriesUseBatch = True if testType == TEST_TYPE.COMPACTION_2: cfg.logCompactionMinEntries = 99999 cfg.logCompactionMinTime = 99999 cfg.fullDumpFile = dumpFile if testType == TEST_TYPE.LARGE_COMMAND: cfg.connectionTimeout = 15.0 cfg.logCompactionMinEntries = 99999 cfg.logCompactionMinTime = 99999 cfg.fullDumpFile = dumpFile cfg.raftMinTimeout = 1.5 cfg.raftMaxTimeout = 2.5 # cfg.appendEntriesBatchSizeBytes = 2 ** 13 if testType == TEST_TYPE.RAND_1: cfg.autoTickPeriod = 0.05 cfg.appendEntriesPeriod = 0.02 cfg.raftMinTimeout = 0.1 cfg.raftMaxTimeout = 0.2 cfg.logCompactionMinTime = 9999999 cfg.logCompactionMinEntries = 9999999 cfg.journalFile = journalFile if testType == TEST_TYPE.JOURNAL_1: cfg.logCompactionMinTime = 999999 cfg.logCompactionMinEntries = 999999 cfg.fullDumpFile = dumpFile cfg.journalFile = journalFile if testType == TEST_TYPE.AUTO_TICK_1: cfg.autoTick = True cfg.pollerType = 'select' if testType == TEST_TYPE.WAIT_BIND: 
cfg.maxBindRetries = 1 cfg.autoTick = True super(TestObj, self).__init__(selfNodeAddr, otherNodeAddrs, cfg, consumers) self.__counter = 0 self.__data = {} if testType == TEST_TYPE.RAND_1: self._SyncObj__transport._send_random_sleep_duration = 0.03 @replicated def addValue(self, value): self.__counter += value return self.__counter @replicated def addKeyValue(self, key, value): self.__data[key] = value @replicated_sync def addValueSync(self, value): self.__counter += value return self.__counter @replicated def testMethod(self): self.__data['testKey'] = 'valueVer1' @replicated(ver=1) def testMethod(self): self.__data['testKey'] = 'valueVer2' def getCounter(self): return self.__counter def getValue(self, key): return self.__data.get(key, None) def dumpKeys(self): print('keys:', sorted(self.__data.keys())) def singleTickFunc(o, timeToTick, interval, stopFunc): currTime = time.time() finishTime = currTime + timeToTick while time.time() < finishTime: o._onTick(interval) if stopFunc is not None: if stopFunc(): break def utilityTickFunc(args, currRes, key): currRes[key] = executeAdminCommand(args) def doSyncObjAdminTicks(objects, arguments, timeToTick, currRes, interval=0.05, stopFunc=None): objThreads = [] utilityThreads = [] for o in objects: t1 = threading.Thread(target=singleTickFunc, args=(o, timeToTick, interval, stopFunc)) t1.start() objThreads.append(t1) if arguments.get(o) is not None: t2 = threading.Thread(target=utilityTickFunc, args=(arguments[o], currRes, o)) t2.start() utilityThreads.append(t2) for t in objThreads: t.join() for t in utilityThreads: t.join() def doTicks(objects, timeToTick, interval=0.05, stopFunc=None): threads = [] for o in objects: t = threading.Thread(target=singleTickFunc, args=(o, timeToTick, interval, stopFunc)) t.start() threads.append(t) for t in threads: t.join() def doAutoTicks(interval=0.05, stopFunc=None): deadline = time.time() + interval while not stopFunc(): time.sleep(0.02) t2 = time.time() if t2 >= deadline: break _g_nextAddress = 6000 + 60 * (int(time.time()) % 600) def getNextAddr(ipv6=False, isLocalhost=False): global _g_nextAddress _g_nextAddress += 1 if ipv6: return '::1:%d' % _g_nextAddress if isLocalhost: return 'localhost:%d' % _g_nextAddress return '127.0.0.1:%d' % _g_nextAddress _g_nextDumpFile = 1 _g_nextJournalFile = 1 def getNextDumpFile(): global _g_nextDumpFile fname = 'dump%d.bin' % _g_nextDumpFile _g_nextDumpFile += 1 return fname def getNextJournalFile(): global _g_nextJournalFile fname = 'journal%d.bin' % _g_nextJournalFile _g_nextJournalFile += 1 return fname def test_syncTwoObjects(): random.seed(42) a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]]) o2 = TestObj(a[1], [a[0]]) objs = [o1, o2] assert not o1._isReady() assert not o2._isReady() doTicks(objs, 10.0, stopFunc=lambda: o1._isReady() and o2._isReady()) o1.waitBinded() o2.waitBinded() o1._printStatus() assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() assert o1._isReady() assert o2._isReady() o1.addValue(150) o2.addValue(200) doTicks(objs, 10.0, stopFunc=lambda: o1.getCounter() == 350 and o2.getCounter() == 350) assert o1._isReady() assert o2._isReady() assert o1.getCounter() == 350 assert o2.getCounter() == 350 o1._destroy() o2._destroy() def test_hasQuorum(): random.seed(42) a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]]) o2 = TestObj(a[1], [a[0]]) objs = [o1, o2] doTicks(objs, 10.0, stopFunc=lambda: o1._isReady() and o2._isReady()) o1.waitBinded() o2.waitBinded() o1._printStatus() assert o1.hasQuorum # Stop the second 
node in the cluster o2._destroy() doTicks(objs, 10.0, stopFunc=lambda: not o1.hasQuorum) assert not o1.hasQuorum o1._destroy() def test_singleObject(): random.seed(42) a = [getNextAddr(), ] o1 = TestObj(a[0], []) objs = [o1, ] assert not o1._isReady() doTicks(objs, 3.0, stopFunc=lambda: o1._isReady()) o1._printStatus() assert o1._getLeader().address in a assert o1._isReady() o1.addValue(150) o1.addValue(200) doTicks(objs, 3.0, stopFunc=lambda: o1.getCounter() == 350) assert o1._isReady() assert o1.getCounter() == 350 o1._destroy() def test_syncThreeObjectsLeaderFail(): random.seed(12) a = [getNextAddr(), getNextAddr(), getNextAddr()] states = defaultdict(list) o1 = TestObj(a[0], [a[1], a[2]], testBindAddr=True, onStateChanged=lambda old, new: states[a[0]].append(new)) o2 = TestObj(a[1], [a[2], a[0]], testBindAddr=True, onStateChanged=lambda old, new: states[a[1]].append(new)) o3 = TestObj(a[2], [a[0], a[1]], testBindAddr=True, onStateChanged=lambda old, new: states[a[2]].append(new)) objs = [o1, o2, o3] assert not o1._isReady() assert not o2._isReady() assert not o3._isReady() doTicks(objs, 10.0, stopFunc=lambda: o1._isReady() and o2._isReady() and o3._isReady()) assert o1._isReady() assert o2._isReady() assert o3._isReady() assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() assert o1._getLeader() == o3._getLeader() assert _RAFT_STATE.LEADER in states[o1._getLeader().address] o1.addValue(150) o2.addValue(200) doTicks(objs, 10.0, stopFunc=lambda: o3.getCounter() == 350) assert o3.getCounter() == 350 prevLeader = o1._getLeader() newObjs = [o for o in objs if o._SyncObj__selfNode != prevLeader] assert len(newObjs) == 2 doTicks(newObjs, 10.0, stopFunc=lambda: newObjs[0]._getLeader() != prevLeader and \ newObjs[0]._getLeader() is not None and \ newObjs[0]._getLeader().address in a and \ newObjs[0]._getLeader() == newObjs[1]._getLeader()) assert newObjs[0]._getLeader() != prevLeader assert newObjs[0]._getLeader().address in a assert newObjs[0]._getLeader() == newObjs[1]._getLeader() assert _RAFT_STATE.LEADER in states[newObjs[0]._getLeader().address] newObjs[1].addValue(50) doTicks(newObjs, 10, stopFunc=lambda: newObjs[0].getCounter() == 400) assert newObjs[0].getCounter() == 400 doTicks(objs, 10.0, stopFunc=lambda: sum([int(o.getCounter() == 400) for o in objs]) == len(objs)) for o in objs: assert o.getCounter() == 400 o1._destroy() o2._destroy() o3._destroy() def test_manyActionsLogCompaction(): random.seed(42) a = [getNextAddr(), getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1], a[2]], TEST_TYPE.COMPACTION_1, compactionMinEntries=100) o2 = TestObj(a[1], [a[2], a[0]], TEST_TYPE.COMPACTION_1, compactionMinEntries=100) o3 = TestObj(a[2], [a[0], a[1]], TEST_TYPE.COMPACTION_1, compactionMinEntries=100) objs = [o1, o2, o3] assert not o1._isReady() assert not o2._isReady() assert not o3._isReady() doTicks(objs, 10, stopFunc=lambda: o1._isReady() and o2._isReady() and o3._isReady()) assert o1._isReady() assert o2._isReady() assert o3._isReady() assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() assert o1._getLeader() == o3._getLeader() for i in xrange(0, 500): o1.addValue(1) o2.addValue(1) doTicks(objs, 10, stopFunc=lambda: o1.getCounter() == 1000 and o2.getCounter() == 1000 and o3.getCounter() == 1000 and o1._getRaftLogSize() <= 100 and o2._getRaftLogSize() <= 100 and o3._getRaftLogSize() <= 100 ) assert o1.getCounter() == 1000 assert o2.getCounter() == 1000 assert o3.getCounter() == 1000 assert o1._getRaftLogSize() <= 100 assert 
o2._getRaftLogSize() <= 100 assert o3._getRaftLogSize() <= 100 newObjs = [o1, o2] doTicks(newObjs, 10, stopFunc=lambda: o3._getLeader() is None) for i in xrange(0, 500): o1.addValue(1) o2.addValue(1) doTicks(newObjs, 10, stopFunc=lambda: o1.getCounter() == 2000 and o2.getCounter() == 2000 and o1._getRaftLogSize() <= 100 and o2._getRaftLogSize() <= 100 and o3._getRaftLogSize() <= 100 ) assert o1.getCounter() == 2000 assert o2.getCounter() == 2000 assert o3.getCounter() != 2000 doTicks(objs, 10, stopFunc=lambda: o3.getCounter() == 2000) assert o3.getCounter() == 2000 assert o1._getRaftLogSize() <= 100 assert o2._getRaftLogSize() <= 100 assert o3._getRaftLogSize() <= 100 o1._destroy() o2._destroy() o3._destroy() def onAddValue(res, err, info): assert res == 3 assert err == FAIL_REASON.SUCCESS info['callback'] = True def test_checkCallbacksSimple(): random.seed(42) a = [getNextAddr(), getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1], a[2]]) o2 = TestObj(a[1], [a[2], a[0]]) o3 = TestObj(a[2], [a[0], a[1]]) objs = [o1, o2, o3] assert not o1._isReady() assert not o2._isReady() assert not o3._isReady() doTicks(objs, 10, stopFunc=lambda: o1._isReady() and o2._isReady() and o3._isReady()) assert o1._isReady() assert o2._isReady() assert o3._isReady() assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() assert o1._getLeader() == o3._getLeader() callbackInfo = { 'callback': False } o1.addValue(3, callback=partial(onAddValue, info=callbackInfo)) doTicks(objs, 10, stopFunc=lambda: o2.getCounter() == 3 and callbackInfo['callback'] == True) assert o2.getCounter() == 3 assert callbackInfo['callback'] == True o1._destroy() o2._destroy() o3._destroy() def removeFiles(files): for f in (files): if os.path.isfile(f): for i in xrange(0, 15): try: if os.path.isfile(f): os.remove(f) break else: break except: time.sleep(1.0) def checkDumpToFile(useFork): dumpFiles = [getNextDumpFile(), getNextDumpFile()] removeFiles(dumpFiles) random.seed(42) a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], TEST_TYPE.COMPACTION_2, dumpFile=dumpFiles[0], useFork=useFork) o2 = TestObj(a[1], [a[0]], TEST_TYPE.COMPACTION_2, dumpFile=dumpFiles[1], useFork=useFork) objs = [o1, o2] doTicks(objs, 10, stopFunc=lambda: o1._isReady() and o2._isReady()) assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() o1.addValue(150) o2.addValue(200) doTicks(objs, 10, stopFunc=lambda: o1.getCounter() == 350 and o2.getCounter() == 350) assert o1.getCounter() == 350 assert o2.getCounter() == 350 o1._forceLogCompaction() o2._forceLogCompaction() doTicks(objs, 1.5) o1._destroy() o2._destroy() a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], TEST_TYPE.COMPACTION_2, dumpFile=dumpFiles[0], useFork=useFork) o2 = TestObj(a[1], [a[0]], TEST_TYPE.COMPACTION_2, dumpFile=dumpFiles[1], useFork=useFork) objs = [o1, o2] doTicks(objs, 10, stopFunc=lambda: o1._isReady() and o2._isReady()) assert o1._isReady() assert o2._isReady() assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() assert o1.getCounter() == 350 assert o2.getCounter() == 350 o1._destroy() o2._destroy() removeFiles(dumpFiles) def test_checkDumpToFile(): if hasattr(os, 'fork'): checkDumpToFile(True) checkDumpToFile(False) def getRandStr(): return '%0100000x' % random.randrange(16 ** 100000) def test_checkBigStorage(): dumpFiles = [getNextDumpFile(), getNextDumpFile()] removeFiles(dumpFiles) random.seed(42) a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], TEST_TYPE.COMPACTION_2, 
dumpFile=dumpFiles[0]) o2 = TestObj(a[1], [a[0]], TEST_TYPE.COMPACTION_2, dumpFile=dumpFiles[1]) objs = [o1, o2] doTicks(objs, 10, stopFunc=lambda: o1._isReady() and o2._isReady()) assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() # Store ~50Mb data. testRandStr = getRandStr() for i in xrange(0, 500): o1.addKeyValue(i, getRandStr()) o1.addKeyValue('test', testRandStr) # Wait for replication. doTicks(objs, 60, stopFunc=lambda: o1.getValue('test') == testRandStr and \ o2.getValue('test') == testRandStr) assert o1.getValue('test') == testRandStr o1._forceLogCompaction() o2._forceLogCompaction() # Wait for disk dump doTicks(objs, 8.0) o1._destroy() o2._destroy() a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], TEST_TYPE.COMPACTION_2, dumpFile=dumpFiles[0]) o2 = TestObj(a[1], [a[0]], TEST_TYPE.COMPACTION_2, dumpFile=dumpFiles[1]) objs = [o1, o2] # Wait for disk load, election and replication doTicks(objs, 10, stopFunc=lambda: o1._isReady() and o2._isReady()) assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() assert o1.getValue('test') == testRandStr assert o2.getValue('test') == testRandStr o1._destroy() o2._destroy() removeFiles(dumpFiles) @pytest.mark.skipif(sys.platform == "win32" or platform.python_implementation() != 'CPython', reason="does not run on windows or pypy") def test_encryptionCorrectPassword(): assert HAS_CRYPTO random.seed(42) a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], password='asd') o2 = TestObj(a[1], [a[0]], password='asd') objs = [o1, o2] doTicks(objs, 10, stopFunc=lambda: o1._isReady() and o2._isReady()) assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() o1.addValue(150) o2.addValue(200) doTicks(objs, 10, stopFunc=lambda: o1.getCounter() == 350 and o2.getCounter() == 350) assert o1.getCounter() == 350 assert o2.getCounter() == 350 for conn in list(o1._SyncObj__transport._connections.values()) + list(o2._SyncObj__transport._connections.values()): conn.disconnect() doTicks(objs, 10) o1.addValue(100) doTicks(objs, 10, stopFunc=lambda: o1.getCounter() == 450 and o2.getCounter() == 450) assert o1.getCounter() == 450 assert o2.getCounter() == 450 o1._destroy() o2._destroy() @pytest.mark.skipif(platform.python_implementation() != 'CPython', reason="does not have crypto on pypy") def test_encryptionWrongPassword(): assert HAS_CRYPTO random.seed(12) a = [getNextAddr(), getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1], a[2]], password='asd') o2 = TestObj(a[1], [a[2], a[0]], password='asd') o3 = TestObj(a[2], [a[0], a[1]], password='qwe') objs = [o1, o2, o3] doTicks(objs, 10, stopFunc=lambda: o1._isReady() and o2._isReady()) assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() doTicks(objs, 1.0) assert o3._getLeader() is None o1._destroy() o2._destroy() o3._destroy() def _checkSameLeader(objs): for obj1 in objs: l1 = obj1._getLeader() if l1 != obj1._SyncObj__selfNode: continue t1 = obj1._getTerm() for obj2 in objs: l2 = obj2._getLeader() if l2 != obj2._SyncObj__selfNode: continue if obj2._getTerm() != t1: continue if l2 != l1: obj1._printStatus() obj2._printStatus() return False return True def _checkSameLeader2(objs): for obj1 in objs: l1 = obj1._getLeader() if l1 is None: continue t1 = obj1._getTerm() for obj2 in objs: l2 = obj2._getLeader() if l2 is None: continue if obj2._getTerm() != t1: continue if l2 != l1: obj1._printStatus() obj2._printStatus() return False return True def test_randomTest1(): journalFiles = [getNextJournalFile(), 
getNextJournalFile(), getNextJournalFile()] removeFiles(journalFiles) removeFiles([e + '.meta' for e in journalFiles]) random.seed(12) a = [getNextAddr(), getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1], a[2]], TEST_TYPE.RAND_1, journalFile=journalFiles[0]) o2 = TestObj(a[1], [a[2], a[0]], TEST_TYPE.RAND_1, journalFile=journalFiles[1]) o3 = TestObj(a[2], [a[0], a[1]], TEST_TYPE.RAND_1, journalFile=journalFiles[2]) objs = [o1, o2, o3] raft_commit_indices = [0, 0, 0] st = time.time() while time.time() - st < 120.0: doTicks(objs, random.random() * 0.3, interval=0.05) for i in range(3): new_commit_idx = objs[i]._SyncObj__raftCommitIndex assert new_commit_idx >= raft_commit_indices[i] raft_commit_indices[i] = new_commit_idx assert _checkSameLeader(objs) assert _checkSameLeader2(objs) for i in xrange(0, random.randint(0, 2)): random.choice(objs).addValue(random.randint(0, 10)) newObjs = list(objs) newObjs.pop(random.randint(0, len(newObjs) - 1)) doTicks(newObjs, random.random() * 0.3, interval=0.05) for i in range(3): new_commit_idx = objs[i]._SyncObj__raftCommitIndex assert new_commit_idx >= raft_commit_indices[i] raft_commit_indices[i] = new_commit_idx assert _checkSameLeader(objs) assert _checkSameLeader2(objs) for i in xrange(0, random.randint(0, 2)): random.choice(objs).addValue(random.randint(0, 10)) if not (o1.getCounter() == o2.getCounter() == o3.getCounter()): print(time.time(), 'counters:', o1.getCounter(), o2.getCounter(), o3.getCounter()) # disable send delays to make test finish faster for obj in objs: obj._SyncObj__transport._send_random_sleep_duration = 0.00 st = time.time() while not (o1.getCounter() == o2.getCounter() == o3.getCounter()): doTicks(objs, 2.0, interval=0.05) if time.time() - st > 30: break if not (o1.getCounter() == o2.getCounter() == o3.getCounter()): o1._printStatus() o2._printStatus() o3._printStatus() print('Logs same:', o1._SyncObj__raftLog == o2._SyncObj__raftLog == o3._SyncObj__raftLog) print(time.time(), 'counters:', o1.getCounter(), o2.getCounter(), o3.getCounter()) raise AssertionError('Values not equal') counter = o1.getCounter() o1._destroy() o2._destroy() o3._destroy() del o1 del o2 del o3 time.sleep(0.1) o1 = TestObj(a[0], [a[1], a[2]], TEST_TYPE.RAND_1, journalFile=journalFiles[0]) o2 = TestObj(a[1], [a[2], a[0]], TEST_TYPE.RAND_1, journalFile=journalFiles[1]) o3 = TestObj(a[2], [a[0], a[1]], TEST_TYPE.RAND_1, journalFile=journalFiles[2]) objs = [o1, o2, o3] st = time.time() while not (o1.getCounter() == o2.getCounter() == o3.getCounter() == counter): doTicks(objs, 2.0, interval=0.05) if time.time() - st > 30: break if not (o1.getCounter() == o2.getCounter() == o3.getCounter() >= counter): o1._printStatus() o2._printStatus() o3._printStatus() print('Logs same:', o1._SyncObj__raftLog == o2._SyncObj__raftLog == o3._SyncObj__raftLog) print(time.time(), 'counters:', o1.getCounter(), o2.getCounter(), o3.getCounter(), counter) raise AssertionError('Values not equal') removeFiles(journalFiles) removeFiles([e + '.meta' for e in journalFiles]) # Ensure that raftLog after serialization is the same as in serialized data def test_logCompactionRegressionTest1(): random.seed(42) a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]]) o2 = TestObj(a[1], [a[0]]) objs = [o1, o2] doTicks(objs, 10, stopFunc=lambda: o1._isReady() and o2._isReady()) assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() o1._forceLogCompaction() doTicks(objs, 0.5) assert o1._SyncObj__forceLogCompaction == False logAfterCompaction = o1._SyncObj__raftLog 
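# Reload the dump file that _forceLogCompaction wrote above; the deserialized raft log must be identical to the in-memory one.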
o1._SyncObj__loadDumpFile(True) logAfterDeserialize = o1._SyncObj__raftLog assert logAfterCompaction == logAfterDeserialize o1._destroy() o2._destroy() def test_logCompactionRegressionTest2(): dumpFiles = [getNextDumpFile(), getNextDumpFile(), getNextDumpFile()] removeFiles(dumpFiles) random.seed(12) a = [getNextAddr(), getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1], a[2]], dumpFile=dumpFiles[0]) o2 = TestObj(a[1], [a[2], a[0]], dumpFile=dumpFiles[1]) o3 = TestObj(a[2], [a[0], a[1]], dumpFile=dumpFiles[2]) objs = [o1, o2] doTicks(objs, 10, stopFunc=lambda: o1._isReady() and o2._isReady()) objs = [o1, o2, o3] o1.addValue(2) o1.addValue(3) doTicks(objs, 10, stopFunc=lambda: o3.getCounter() == 5) o3._forceLogCompaction() doTicks(objs, 0.5) assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() == o3._getLeader() o3._destroy() objs = [o1, o2] o1.addValue(2) o1.addValue(3) doTicks(objs, 0.5) o1._forceLogCompaction() o2._forceLogCompaction() doTicks(objs, 0.5) o3 = TestObj(a[2], [a[0], a[1]], dumpFile=dumpFiles[2]) objs = [o1, o2, o3] doTicks(objs, 10, stopFunc=lambda: o1._isReady() and o2._isReady() and o3._isReady()) assert o1._isReady() assert o2._isReady() assert o3._isReady() o1._destroy() o2._destroy() o3._destroy() removeFiles(dumpFiles) def __checkPartnerNodeExists(obj, nodeAddr, shouldExist=True): nodeAddrSet = {node.address for node in obj._SyncObj__otherNodes} return (nodeAddr in nodeAddrSet) == shouldExist # either nodeAddr is in nodeAddrSet and shouldExist is True, or nodeAddr isn't in the set and shouldExist is False def test_doChangeClusterUT1(): dumpFiles = [getNextDumpFile()] removeFiles(dumpFiles) baseAddr = getNextAddr() otherAddr = getNextAddr() o1 = TestObj(baseAddr, ['localhost:1235', otherAddr], dumpFile=dumpFiles[0], dynamicMembershipChange=True) assert __checkPartnerNodeExists(o1, 'localhost:1238', False) assert __checkPartnerNodeExists(o1, 'localhost:1239', False) assert __checkPartnerNodeExists(o1, 'localhost:1235', True) noop = _bchr(_COMMAND_TYPE.NO_OP) member = _bchr(_COMMAND_TYPE.MEMBERSHIP) # Check regular configuration change - adding o1._SyncObj__onMessageReceived(TCPNode('localhost:12345'), { 'type': 'append_entries', 'term': 1, 'prevLogIdx': 1, 'prevLogTerm': 0, 'commit_index': 2, 'entries': [(noop, 2, 1), (noop, 3, 1), (member + pickle.dumps(['add', 'localhost:1238']), 4, 1)] }) assert __checkPartnerNodeExists(o1, 'localhost:1238', True) assert __checkPartnerNodeExists(o1, 'localhost:1239', False) # Check rollback adding o1._SyncObj__onMessageReceived(TCPNode('localhost:1236'), { 'type': 'append_entries', 'term': 2, 'prevLogIdx': 2, 'prevLogTerm': 1, 'commit_index': 3, 'entries': [(noop, 3, 2), (member + pickle.dumps(['add', 'localhost:1239']), 4, 2)] }) assert __checkPartnerNodeExists(o1, 'localhost:1238', False) assert __checkPartnerNodeExists(o1, 'localhost:1239', True) assert __checkPartnerNodeExists(o1, otherAddr, True) # Check regular configuration change - removing o1._SyncObj__onMessageReceived(TCPNode('localhost:1236'), { 'type': 'append_entries', 'term': 2, 'prevLogIdx': 4, 'prevLogTerm': 2, 'commit_index': 4, 'entries': [(member + pickle.dumps(['rem', 'localhost:1235']), 5, 2)] }) assert __checkPartnerNodeExists(o1, 'localhost:1238', False) assert __checkPartnerNodeExists(o1, 'localhost:1239', True) assert __checkPartnerNodeExists(o1, 'localhost:1235', False) # Check log compaction o1._forceLogCompaction() doTicks([o1], 0.5) o1._destroy() o2 = TestObj(otherAddr, [baseAddr, 'localhost:1236'], dumpFile=dumpFiles[0], dynamicMembershipChange=True) doTicks([o2], 0.5) assert __checkPartnerNodeExists(o2, otherAddr, False)
assert __checkPartnerNodeExists(o2, baseAddr, True) assert __checkPartnerNodeExists(o2, 'localhost:1238', False) assert __checkPartnerNodeExists(o2, 'localhost:1239', True) assert __checkPartnerNodeExists(o2, 'localhost:1235', False) o2._destroy() removeFiles(dumpFiles) def test_doChangeClusterUT2(): a = [getNextAddr(), getNextAddr(), getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1], a[2]], dynamicMembershipChange=True) o2 = TestObj(a[1], [a[2], a[0]], dynamicMembershipChange=True) o3 = TestObj(a[2], [a[0], a[1]], dynamicMembershipChange=True) doTicks([o1, o2, o3], 10, stopFunc=lambda: o1._isReady() and o2._isReady() and o3._isReady()) assert o1._isReady() == o2._isReady() == o3._isReady() == True o3.addValue(50) o2.addNodeToCluster(a[3]) success = False for i in xrange(10): doTicks([o1, o2, o3], 0.5) res = True res &= __checkPartnerNodeExists(o1, a[3], True) res &= __checkPartnerNodeExists(o2, a[3], True) res &= __checkPartnerNodeExists(o3, a[3], True) if res: success = True break o2.addNodeToCluster(a[3]) assert success o4 = TestObj(a[3], [a[0], a[1], a[2]], dynamicMembershipChange=True) doTicks([o1, o2, o3, o4], 10, stopFunc=lambda: o4._isReady()) o1.addValue(450) doTicks([o1, o2, o3, o4], 10, stopFunc=lambda: o4.getCounter() == 500) assert o4.getCounter() == 500 o1._destroy() o2._destroy() o3._destroy() o4._destroy() def test_journalTest1(): dumpFiles = [getNextDumpFile(), getNextDumpFile()] journalFiles = [getNextJournalFile(), getNextJournalFile()] removeFiles(dumpFiles) removeFiles(journalFiles) removeFiles([e + '.meta' for e in journalFiles]) random.seed(42) a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], TEST_TYPE.JOURNAL_1, dumpFile=dumpFiles[0], journalFile=journalFiles[0]) o2 = TestObj(a[1], [a[0]], TEST_TYPE.JOURNAL_1, dumpFile=dumpFiles[1], journalFile=journalFiles[1]) objs = [o1, o2] doTicks(objs, 10, stopFunc=lambda: o1._isReady() and o2._isReady()) assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() o1.addValue(150) o2.addValue(200) doTicks(objs, 10, stopFunc=lambda: o1.getCounter() == 350 and o2.getCounter() == 350) assert o1.getCounter() == 350 assert o2.getCounter() == 350 o1._destroy() o2._destroy() a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], TEST_TYPE.JOURNAL_1, dumpFile=dumpFiles[0], journalFile=journalFiles[0]) o2 = TestObj(a[1], [a[0]], TEST_TYPE.JOURNAL_1, dumpFile=dumpFiles[1], journalFile=journalFiles[1]) objs = [o1, o2] doTicks(objs, 10, stopFunc=lambda: o1._isReady() and o2._isReady() and \ o1.getCounter() == 350 and o2.getCounter() == 350) assert o1._isReady() assert o2._isReady() assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() assert o1.getCounter() == 350 assert o2.getCounter() == 350 o1.addValue(100) o2.addValue(150) doTicks(objs, 10, stopFunc=lambda: o1.getCounter() == 600 and o2.getCounter() == 600) assert o1.getCounter() == 600 assert o2.getCounter() == 600 o1._forceLogCompaction() o2._forceLogCompaction() doTicks(objs, 0.5) o1.addValue(150) o2.addValue(150) doTicks(objs, 10, stopFunc=lambda: o1.getCounter() == 900 and o2.getCounter() == 900) assert o1.getCounter() == 900 assert o2.getCounter() == 900 o1._destroy() o2._destroy() a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], TEST_TYPE.JOURNAL_1, dumpFile=dumpFiles[0], journalFile=journalFiles[0]) o2 = TestObj(a[1], [a[0]], TEST_TYPE.JOURNAL_1, dumpFile=dumpFiles[1], journalFile=journalFiles[1]) objs = [o1, o2] doTicks(objs, 10, stopFunc=lambda: o1._isReady() and o2._isReady() and \ o1.getCounter() == 900 and o2.getCounter() ==
900) assert o1._isReady() assert o2._isReady() assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() assert o1.getCounter() == 900 assert o2.getCounter() == 900 o1._destroy() o2._destroy() removeFiles(dumpFiles) removeFiles(journalFiles) removeFiles([e + '.meta' for e in journalFiles]) def test_journalTest2(): journalFiles = [getNextJournalFile()] removeFiles(journalFiles) removeFiles([e + '.meta' for e in journalFiles]) journal = createJournal(journalFiles[0]) journal.add(b'cmd1', 1, 0) journal.add(b'cmd2', 2, 0) journal.add(b'cmd3', 3, 0) journal._destroy() journal = createJournal(journalFiles[0]) assert len(journal) == 3 assert journal[0] == (b'cmd1', 1, 0) assert journal[-1] == (b'cmd3', 3, 0) journal.deleteEntriesFrom(2) journal._destroy() journal = createJournal(journalFiles[0]) assert len(journal) == 2 assert journal[0] == (b'cmd1', 1, 0) assert journal[-1] == (b'cmd2', 2, 0) journal.deleteEntriesTo(1) journal._destroy() journal = createJournal(journalFiles[0]) assert len(journal) == 1 assert journal[0] == (b'cmd2', 2, 0) journal._destroy() removeFiles(journalFiles) removeFiles([e + '.meta' for e in journalFiles]) def test_applyJournalAfterRestart(): dumpFiles = [getNextDumpFile(), getNextDumpFile()] journalFiles = [getNextJournalFile(), getNextJournalFile()] removeFiles(dumpFiles) removeFiles(journalFiles) removeFiles([e + '.meta' for e in journalFiles]) random.seed(42) a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], TEST_TYPE.JOURNAL_1, dumpFile=dumpFiles[0], journalFile=journalFiles[0]) o2 = TestObj(a[1], [a[0]], TEST_TYPE.JOURNAL_1, dumpFile=dumpFiles[1], journalFile=journalFiles[1]) objs = [o1, o2] doTicks(objs, 10, stopFunc=lambda: o1._isReady() and o2._isReady()) assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() o1.addValue(150) o2.addValue(200) doTicks(objs, 10, stopFunc=lambda: o1.getCounter() == 350 and o2.getCounter() == 350) assert o1.getCounter() == 350 assert o2.getCounter() == 350 doTicks(objs, 2) o1._destroy() o2._destroy() removeFiles(dumpFiles) o1 = TestObj(a[0], [a[1]], TEST_TYPE.JOURNAL_1, dumpFile=dumpFiles[0], journalFile=journalFiles[0]) objs = [o1] doTicks(objs, 10, stopFunc=lambda: o1.getCounter() == 350) assert o1.getCounter() == 350 removeFiles(dumpFiles) removeFiles(journalFiles) removeFiles([e + '.meta' for e in journalFiles]) def test_autoTick1(): random.seed(42) a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], TEST_TYPE.AUTO_TICK_1) o2 = TestObj(a[1], [a[0]], TEST_TYPE.AUTO_TICK_1) assert not o1._isReady() assert not o2._isReady() time.sleep(4.5) assert o1._isReady() assert o2._isReady() assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() assert o1._isReady() assert o2._isReady() o1.addValue(150) o2.addValue(200) time.sleep(1.5) assert o1._isReady() assert o2._isReady() assert o1.getCounter() == 350 assert o2.getCounter() == 350 assert o2.addValueSync(10) == 360 assert o1.addValueSync(20) == 380 o1._destroy() o2._destroy() time.sleep(0.5) def test_largeCommands(): dumpFiles = [getNextDumpFile(), getNextDumpFile()] removeFiles(dumpFiles) random.seed(42) a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], TEST_TYPE.LARGE_COMMAND, dumpFile=dumpFiles[0], leaderFallbackTimeout=60.0) o2 = TestObj(a[1], [a[0]], TEST_TYPE.LARGE_COMMAND, dumpFile=dumpFiles[1], leaderFallbackTimeout=60.0) objs = [o1, o2] doTicks(objs, 10, stopFunc=lambda: o1._isReady() and o2._isReady()) assert o1._getLeader().address in a assert o1._getLeader() ==
o2._getLeader() # Generate ~20Mb data. testRandStr = getRandStr() bigStr = '' for i in xrange(0, 200): bigStr += getRandStr() o1.addKeyValue('big', bigStr) o1.addKeyValue('test', testRandStr) # Wait for replication. doTicks(objs, 60, stopFunc=lambda: o1.getValue('test') == testRandStr and \ o2.getValue('test') == testRandStr and \ o1.getValue('big') == bigStr and \ o2.getValue('big') == bigStr) assert o1.getValue('test') == testRandStr assert o2.getValue('big') == bigStr o1._forceLogCompaction() o2._forceLogCompaction() # Wait for disk dump doTicks(objs, 8.0) o1._destroy() o2._destroy() a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], TEST_TYPE.LARGE_COMMAND, dumpFile=dumpFiles[0], leaderFallbackTimeout=60.0) o2 = TestObj(a[1], [a[0]], TEST_TYPE.LARGE_COMMAND, dumpFile=dumpFiles[1], leaderFallbackTimeout=60.0) objs = [o1, o2] # Wait for disk load, election and replication doTicks(objs, 60, stopFunc=lambda: o1.getValue('test') == testRandStr and \ o2.getValue('test') == testRandStr and \ o1.getValue('big') == bigStr and \ o2.getValue('big') == bigStr and \ o1._isReady() and o2._isReady()) assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() assert o1.getValue('test') == testRandStr assert o2.getValue('big') == bigStr assert o1.getValue('test') == testRandStr assert o2.getValue('big') == bigStr o1._destroy() o2._destroy() removeFiles(dumpFiles) @pytest.mark.skipif(platform.python_implementation() != 'CPython', reason="does not have crypto on pypy") def test_readOnlyNodes(): random.seed(12) a = [getNextAddr(), getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1], a[2]], password='123') o2 = TestObj(a[1], [a[2], a[0]], password='123') o3 = TestObj(a[2], [a[0], a[1]], password='123') objs = [o1, o2, o3] b1 = TestObj(None, [a[0], a[1], a[2]], password='123') b2 = TestObj(None, [a[0], a[1], a[2]], password='123') roObjs = [b1, b2] doTicks(objs, 10.0, stopFunc=lambda: o1._isReady() and o2._isReady() and o3._isReady()) assert o1._isReady() assert o2._isReady() assert o3._isReady() o1.addValue(150) o2.addValue(200) doTicks(objs, 10.0, stopFunc=lambda: o3.getCounter() == 350) doTicks(objs + roObjs, 4.0, stopFunc=lambda: b1.getCounter() == 350 and b2.getCounter() == 350) assert b1.getCounter() == b2.getCounter() == 350 assert o1._getLeader() == b1._getLeader() == o2._getLeader() == b2._getLeader() assert b1._getLeader().address in a prevLeader = o1._getLeader() newObjs = [o for o in objs if o._SyncObj__selfNode != prevLeader] assert len(newObjs) == 2 doTicks(newObjs + roObjs, 10.0, stopFunc=lambda: newObjs[0]._getLeader() != prevLeader and \ newObjs[0]._getLeader() is not None and \ newObjs[0]._getLeader().address in a and \ newObjs[0]._getLeader() == newObjs[1]._getLeader()) assert newObjs[0]._getLeader() != prevLeader assert newObjs[0]._getLeader().address in a assert newObjs[0]._getLeader() == newObjs[1]._getLeader() newObjs[1].addValue(50) doTicks(newObjs + roObjs, 10.0, stopFunc=lambda: newObjs[0].getCounter() == 400 and b1.getCounter() == 400) o1._printStatus() o2._printStatus() o3._printStatus() b1._printStatus() assert newObjs[0].getCounter() == 400 assert b1.getCounter() == 400 doTicks(objs + roObjs, 10.0, stopFunc=lambda: sum([int(o.getCounter() == 400) for o in objs + roObjs]) == len(objs + roObjs)) for o in objs + roObjs: assert o.getCounter() == 400 currRes = {} def onAdd(res, err): currRes[0] = err b1.addValue(50, callback=onAdd) doTicks(objs + roObjs, 5.0, stopFunc=lambda: o1.getCounter() == 450 and \ b1.getCounter() == 450 and \ b2.getCounter() 
== 450 and currRes.get(0) == FAIL_REASON.SUCCESS) assert o1.getCounter() == 450 assert b1.getCounter() == 450 assert b2.getCounter() == 450 assert currRes.get(0) == FAIL_REASON.SUCCESS # check that all objects have 2 readonly nodes assert all(map(lambda o: o.getStatus()['readonly_nodes_count'] == 2, objs)) # disconnect readonly node b1._destroy() doTicks(objs, 2.0) assert all(map(lambda o: o.getStatus()['readonly_nodes_count'] == 1, objs)) o1._destroy() o2._destroy() o3._destroy() b1._destroy() b2._destroy() @pytest.mark.skipif(platform.python_implementation() != 'CPython', reason="does not have crypto on pypy") def test_syncobjAdminStatus(): assert HAS_CRYPTO random.seed(42) a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], password='123') o2 = TestObj(a[1], [a[0]], password='123') assert not o1._isReady() assert not o2._isReady() doTicks([o1, o2], 10.0, stopFunc=lambda: o1._isReady() and o2._isReady()) assert o1._isReady() assert o2._isReady() status1 = o1.getStatus() status2 = o2.getStatus() assert 'version' in status1 assert 'log_len' in status2 trueRes = { o1: '\n'.join('%s: %s' % (k, v) for k, v in sorted(status1.items())), o2: '\n'.join('%s: %s' % (k, v) for k, v in sorted(status2.items())), } currRes = { } args = { o1: ['-conn', a[0], '-pass', '123', '-status'], o2: ['-conn', a[1], '-pass', '123', '-status'], } doSyncObjAdminTicks([o1, o2], args, 10.0, currRes, stopFunc=lambda: currRes.get(o1) is not None and currRes.get(o2) is not None) assert len(currRes[o1]) == len(trueRes[o1]) assert len(currRes[o2]) == len(trueRes[o2]) o1._destroy() o2._destroy() def test_syncobjAdminAddRemove(): random.seed(42) a = [getNextAddr(), getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], dynamicMembershipChange=True) o2 = TestObj(a[1], [a[0]], dynamicMembershipChange=True) assert not o1._isReady() assert not o2._isReady() doTicks([o1, o2], 10.0, stopFunc=lambda: o1._isReady() and o2._isReady()) assert o1._isReady() assert o2._isReady() trueRes = 'SUCCESS ADD ' + a[2] currRes = {} args = { o1: ['-conn', a[0], '-add', a[2]], } doSyncObjAdminTicks([o1, o2], args, 10.0, currRes, stopFunc=lambda: currRes.get(o1) is not None) assert currRes[o1] == trueRes o3 = TestObj(a[2], [a[1], a[0]], dynamicMembershipChange=True) doTicks([o1, o2, o3], 10.0, stopFunc=lambda: o1._isReady() and o2._isReady() and o3._isReady()) assert o1._isReady() assert o2._isReady() assert o3._isReady() trueRes = 'SUCCESS REMOVE ' + a[2] args[o1] = None args[o2] = ['-conn', a[1], '-remove', a[2]] doSyncObjAdminTicks([o1, o2, o3], args, 10.0, currRes, stopFunc=lambda: currRes.get(o2) is not None) assert currRes[o2] == trueRes o3._destroy() doTicks([o1, o2], 10.0, stopFunc=lambda: o1._isReady() and o2._isReady()) assert o1._isReady() assert o2._isReady() o1._destroy() o2._destroy() def test_journalWithAddNodes(): dumpFiles = [getNextDumpFile(), getNextDumpFile(), getNextDumpFile()] journalFiles = [getNextJournalFile(), getNextJournalFile(), getNextJournalFile()] removeFiles(dumpFiles) removeFiles(journalFiles) removeFiles([e + '.meta' for e in journalFiles]) random.seed(42) a = [getNextAddr(), getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], TEST_TYPE.JOURNAL_1, dumpFile=dumpFiles[0], journalFile=journalFiles[0], dynamicMembershipChange=True) o2 = TestObj(a[1], [a[0]], TEST_TYPE.JOURNAL_1, dumpFile=dumpFiles[1], journalFile=journalFiles[1], dynamicMembershipChange=True) objs = [o1, o2] doTicks(objs, 10, stopFunc=lambda: o1._isReady() and o2._isReady()) assert o1._getLeader().address in a assert o1._getLeader() 
== o2._getLeader() o1.addValue(150) o2.addValue(200) doTicks(objs, 10, stopFunc=lambda: o1.getCounter() == 350 and o2.getCounter() == 350) assert o1.getCounter() == 350 assert o2.getCounter() == 350 doTicks(objs, 2) trueRes = 'SUCCESS ADD ' + a[2] currRes = {} args = { o1: ['-conn', a[0], '-add', a[2]], } doSyncObjAdminTicks([o1, o2], args, 10.0, currRes, stopFunc=lambda: currRes.get(o1) is not None) assert currRes[o1] == trueRes o3 = TestObj(a[2], [a[1], a[0]], TEST_TYPE.JOURNAL_1, dumpFile=dumpFiles[2], journalFile=journalFiles[2], dynamicMembershipChange=True) doTicks([o1, o2, o3], 10.0, stopFunc=lambda: o1._isReady() and o2._isReady() and o3._isReady()) assert o1._isReady() assert o2._isReady() assert o3._isReady() assert o3.getCounter() == 350 doTicks(objs, 2) o1._destroy() o2._destroy() o3._destroy() removeFiles(dumpFiles) o1 = TestObj(a[0], [a[1]], TEST_TYPE.JOURNAL_1, dumpFile=dumpFiles[0], journalFile=journalFiles[0], dynamicMembershipChange=True) o2 = TestObj(a[1], [a[0]], TEST_TYPE.JOURNAL_1, dumpFile=dumpFiles[1], journalFile=journalFiles[1], dynamicMembershipChange=True) o3 = TestObj(a[2], [a[1], a[0]], TEST_TYPE.JOURNAL_1, dumpFile=dumpFiles[2], journalFile=journalFiles[2], dynamicMembershipChange=True) objs = [o1, o2, o3] doTicks(objs, 10, stopFunc=lambda: o1._isReady() and o1.getCounter() == 350 and o3._isReady() and o3.getCounter() == 350) assert o1._isReady() assert o3._isReady() assert o1.getCounter() == 350 assert o3.getCounter() == 350 o2.addValue(200) doTicks(objs, 10, stopFunc=lambda: o1.getCounter() == 550 and o3.getCounter() == 550) assert o1.getCounter() == 550 assert o3.getCounter() == 550 o1._destroy() o2._destroy() o3._destroy() removeFiles(dumpFiles) removeFiles(journalFiles) removeFiles([e + '.meta' for e in journalFiles]) def test_syncobjAdminSetVersion(): random.seed(42) a = [getNextAddr(), getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], dynamicMembershipChange=True) o2 = TestObj(a[1], [a[0]], dynamicMembershipChange=True) assert not o1._isReady() assert not o2._isReady() doTicks([o1, o2], 10.0, stopFunc=lambda: o1._isReady() and o2._isReady()) assert o1._isReady() assert o2._isReady() assert o1.getCodeVersion() == 0 assert o2.getCodeVersion() == 0 o2.testMethod() doTicks([o1, o2], 10.0, stopFunc=lambda: o1.getValue('testKey') == 'valueVer1' and \ o2.getValue('testKey') == 'valueVer1') assert o1.getValue('testKey') == 'valueVer1' assert o2.getValue('testKey') == 'valueVer1' trueRes = 'SUCCESS SET_VERSION 1' currRes = {} args = { o1: ['-conn', a[0], '-set_version', '1'], } doSyncObjAdminTicks([o1, o2], args, 10.0, currRes, stopFunc=lambda: currRes.get(o1) is not None) assert currRes[o1] == trueRes doTicks([o1, o2], 10.0, stopFunc=lambda: o1.getCodeVersion() == 1 and o2.getCodeVersion() == 1) assert o1.getCodeVersion() == 1 assert o2.getCodeVersion() == 1 o2.testMethod() doTicks([o1, o2], 10.0, stopFunc=lambda: o1.getValue('testKey') == 'valueVer2' and \ o2.getValue('testKey') == 'valueVer2') assert o1.getValue('testKey') == 'valueVer2' assert o2.getValue('testKey') == 'valueVer2' o1._destroy() o2._destroy() @pytest.mark.skipif(os.name == 'nt', reason='temporary disabled for windows') def test_syncobjWaitBinded(): random.seed(42) a = [getNextAddr(), getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], testType=TEST_TYPE.WAIT_BIND) o2 = TestObj(a[1], [a[0]], testType=TEST_TYPE.WAIT_BIND) o1.waitBinded() o2.waitBinded() o3 = TestObj(a[1], [a[0]], testType=TEST_TYPE.WAIT_BIND) with pytest.raises(SyncObjException): o3.waitBinded() o1.destroy() 
o2.destroy() o3.destroy() @pytest.mark.skipif(os.name == 'nt', reason='temporary disabled for windows') def test_unpickle(): data = {'foo': 'bar', 'command': b'\xfa', 'entries': [b'\xfb', b'\xfc']} python2_cpickle = b'\x80\x02}q\x01(U\x03fooq\x02U\x03barq\x03U\x07commandq\x04U\x01\xfaU\x07entriesq\x05]q\x06(U\x01\xfbU\x01\xfceu.' python2_pickle = b'\x80\x02}q\x00(U\x03fooq\x01U\x03barq\x02U\x07commandq\x03U\x01\xfaq\x04U\x07entriesq\x05]q\x06(U\x01\xfbq\x07U\x01\xfcq\x08eu.' python3_pickle = b'\x80\x02}q\x00(X\x03\x00\x00\x00fooq\x01X\x03\x00\x00\x00barq\x02X\x07\x00\x00\x00commandq\x03c_codecs\nencode\nq\x04X\x02\x00\x00\x00\xc3\xbaq\x05X\x06\x00\x00\x00latin1q\x06\x86q\x07Rq\x08X\x07\x00\x00\x00entriesq\t]q\n(h\x04X\x02\x00\x00\x00\xc3\xbbq\x0bh\x06\x86q\x0cRq\rh\x04X\x02\x00\x00\x00\xc3\xbcq\x0eh\x06\x86q\x0fRq\x10eu.' python2_cpickle_data = pickle.loads(python2_cpickle) assert data == python2_cpickle_data, 'Failed to unpickle data pickled by python2 cPickle' python2_pickle_data = pickle.loads(python2_pickle) assert data == python2_pickle_data, 'Failed to unpickle data pickled by python2 pickle' python3_pickle_data = pickle.loads(python3_pickle) assert data == python3_pickle_data, 'Failed to unpickle data pickled by python3 pickle' class TestConsumer1(SyncObjConsumer): def __init__(self): super(TestConsumer1, self).__init__() self.__counter = 0 @replicated def add(self, value): self.__counter += value @replicated def set(self, value): self.__counter = value def get(self): return self.__counter class TestConsumer2(SyncObjConsumer): def __init__(self): super(TestConsumer2, self).__init__() self.__values = {} @replicated def set(self, key, value): self.__values[key] = value def get(self, key): return self.__values.get(key) def test_consumers(): random.seed(42) a = [getNextAddr(), getNextAddr(), getNextAddr()] c11 = TestConsumer1() c12 = TestConsumer1() c13 = TestConsumer2() c21 = TestConsumer1() c22 = TestConsumer1() c23 = TestConsumer2() c31 = TestConsumer1() c32 = TestConsumer1() c33 = TestConsumer2() o1 = TestObj(a[0], [a[1], a[2]], consumers=[c11, c12, c13]) o2 = TestObj(a[1], [a[0], a[2]], consumers=[c21, c22, c23]) o3 = TestObj(a[2], [a[0], a[1]], consumers=[c31, c32, c33]) objs = [o1, o2] assert not o1._isReady() assert not o2._isReady() doTicks(objs, 10.0, stopFunc=lambda: o1._isReady() and o2._isReady()) assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() assert o1._isReady() assert o2._isReady() c11.set(42) c11.add(10) c12.add(15) c13.set('testKey', 'testValue') doTicks(objs, 10.0, stopFunc=lambda: c21.get() == 52 and c22.get() == 15 and c23.get('testKey') == 'testValue') assert c21.get() == 52 assert c22.get() == 15 assert c23.get('testKey') == 'testValue' o1.forceLogCompaction() o2.forceLogCompaction() doTicks(objs, 0.5) objs = [o1, o2, o3] doTicks(objs, 10.0, stopFunc=lambda: c31.get() == 52 and c32.get() == 15 and c33.get('testKey') == 'testValue') assert c31.get() == 52 assert c32.get() == 15 assert c33.get('testKey') == 'testValue' o1.destroy() o2.destroy() o3.destroy() def test_batteriesCommon(): d1 = ReplDict() l1 = ReplLockManager(autoUnlockTime=30.0) d2 = ReplDict() l2 = ReplLockManager(autoUnlockTime=30.0) a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], TEST_TYPE.AUTO_TICK_1, consumers=[d1, l1]) o2 = TestObj(a[1], [a[0]], TEST_TYPE.AUTO_TICK_1, consumers=[d2, l2]) doAutoTicks(10.0, stopFunc=lambda: o1.isReady() and o2.isReady()) assert o1.isReady() and o2.isReady() d1.set('testKey', 'testValue', sync=True) doAutoTicks(3.0, 
stopFunc=lambda: d2.get('testKey') == 'testValue') assert d2['testKey'] == 'testValue' d2.pop('testKey', sync=True) doAutoTicks(3.0, stopFunc=lambda: d1.get('testKey') == None) assert d1.get('testKey') == None assert l1.tryAcquire('test.lock1', sync=True) == True assert l2.tryAcquire('test.lock1', sync=True) == False assert l2.isAcquired('test.lock1') == False l1id = l1._ReplLockManager__selfID l1._ReplLockManager__lockImpl.prolongate(l1id, 0, _doApply=True) l1.release('test.lock1', sync=True) assert l2.tryAcquire('test.lock1', sync=True) == True assert d1.setdefault('keyA', 'valueA', sync=True) == 'valueA' assert d2.setdefault('keyA', 'valueB', sync=True) == 'valueA' d2.pop('keyA', sync=True) assert d2.setdefault('keyA', 'valueB', sync=True) == 'valueB' o1.destroy() o2.destroy() l1.destroy() l2.destroy() def test_ReplCounter(): c = ReplCounter() c.set(42, _doApply=True) assert c.get() == 42 c.add(10, _doApply=True) assert c.get() == 52 c.sub(20, _doApply=True) assert c.get() == 32 c.inc(_doApply=True) assert c.get() == 33 def test_ReplList(): l = ReplList() l.reset([1, 2, 3], _doApply=True) assert l.rawData() == [1, 2, 3] l.set(1, 10, _doApply=True) assert l.rawData() == [1, 10, 3] l.append(42, _doApply=True) assert l.rawData() == [1, 10, 3, 42] l.extend([5, 6], _doApply=True) assert l.rawData() == [1, 10, 3, 42, 5, 6] l.insert(2, 66, _doApply=True) assert l.rawData() == [1, 10, 66, 3, 42, 5, 6] l.remove(66, _doApply=True) assert l.rawData() == [1, 10, 3, 42, 5, 6] l.pop(1, _doApply=True) assert l.rawData() == [1, 3, 42, 5, 6] l.sort(reverse=True, _doApply=True) assert l.rawData() == [42, 6, 5, 3, 1] assert l.index(6) == 1 assert l.count(42) == 1 assert l.get(2) == 5 assert l[4] == 1 assert len(l) == 5 l.__setitem__(0, 43, _doApply=True) assert l[0] == 43 def test_ReplDict(): d = ReplDict() d.reset({ 1: 1, 2: 22, }, _doApply=True) assert d.rawData() == { 1: 1, 2: 22, } d.__setitem__(1, 10, _doApply=True) assert d.rawData() == { 1: 10, 2: 22, } d.set(1, 20, _doApply=True) assert d.rawData() == { 1: 20, 2: 22, } assert d.setdefault(1, 50, _doApply=True) == 20 assert d.setdefault(3, 50, _doApply=True) == 50 d.update({ 5: 5, 6: 7, }, _doApply=True) assert d.rawData() == { 1: 20, 2: 22, 3: 50, 5: 5, 6: 7, } assert d.pop(3, _doApply=True) == 50 assert d.pop(6, _doApply=True) == 7 assert d.pop(6, _doApply=True) == None assert d.pop(6, 0, _doApply=True) == 0 assert d.rawData() == { 1: 20, 2: 22, 5: 5, } assert d[1] == 20 assert d.get(2) == 22 assert d.get(22) == None assert d.get(22, 10) == 10 assert len(d) == 3 assert 2 in d assert 22 not in d assert sorted(d.keys()) == [1, 2, 5] assert sorted(d.values()) == [5, 20, 22] assert d.items() == d.rawData().items() d.clear(_doApply=True) assert len(d) == 0 def test_ReplSet(): s = ReplSet() s.reset({1, 4}, _doApply=True) assert s.rawData() == {1, 4} s.add(10, _doApply=True) assert s.rawData() == {1, 4, 10} s.remove(1, _doApply=True) s.discard(10, _doApply=True) assert s.rawData() == {4} assert s.pop(_doApply=True) == 4 s.add(48, _doApply=True) s.update({9, 2, 3}, _doApply=True) assert s.rawData() == {9, 2, 3, 48} assert len(s) == 4 assert 9 in s assert 42 not in s s.clear(_doApply=True) assert len(s) == 0 assert 9 not in s def test_ReplQueue(): q = ReplQueue() q.put(42, _doApply=True) q.put(33, _doApply=True) q.put(14, _doApply=True) assert q.get(_doApply=True) == 42 assert q.qsize() == 2 assert len(q) == 2 assert q.empty() == False assert q.get(_doApply=True) == 33 assert q.get(-1, _doApply=True) == 14 assert q.get(_doApply=True) == None assert 
q.get(-1, _doApply=True) == -1 assert q.empty() q = ReplQueue(3) q.put(42, _doApply=True) q.put(33, _doApply=True) assert q.full() == False assert q.put(14, _doApply=True) == True assert q.full() == True assert q.put(19, _doApply=True) == False assert q.get(_doApply=True) == 42 def test_ReplPriorityQueue(): q = ReplPriorityQueue() q.put(42, _doApply=True) q.put(14, _doApply=True) q.put(33, _doApply=True) assert q.get(_doApply=True) == 14 assert q.qsize() == 2 assert len(q) == 2 assert q.empty() == False assert q.get(_doApply=True) == 33 assert q.get(-1, _doApply=True) == 42 assert q.get(_doApply=True) == None assert q.get(-1, _doApply=True) == -1 assert q.empty() q = ReplPriorityQueue(3) q.put(42, _doApply=True) q.put(33, _doApply=True) assert q.full() == False assert q.put(14, _doApply=True) == True assert q.full() == True assert q.put(19, _doApply=True) == False assert q.get(_doApply=True) == 14 # https://github.com/travis-ci/travis-ci/issues/8695 @pytest.mark.skipif(os.name == 'nt' or os.environ.get('TRAVIS') == 'true', reason='temporary disabled for windows') def test_ipv6(): random.seed(42) a = [getNextAddr(ipv6=True), getNextAddr(ipv6=True)] o1 = TestObj(a[0], [a[1]]) o2 = TestObj(a[1], [a[0]]) objs = [o1, o2] assert not o1._isReady() assert not o2._isReady() doTicks(objs, 10.0, stopFunc=lambda: o1._isReady() and o2._isReady()) assert o1._isReady() assert o2._isReady() assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() assert o1._isReady() assert o2._isReady() o1.addValue(150) o2.addValue(200) doTicks(objs, 10.0, stopFunc=lambda: o1.getCounter() == 350 and o2.getCounter() == 350) assert o1._isReady() assert o2._isReady() assert o1.getCounter() == 350 assert o2.getCounter() == 350 o1._destroy() o2._destroy() def test_localhost(): random.seed(42) a = [getNextAddr(isLocalhost=True), getNextAddr(isLocalhost=True)] o1 = TestObj(a[0], [a[1]]) o2 = TestObj(a[1], [a[0]]) objs = [o1, o2] assert not o1._isReady() assert not o2._isReady() doTicks(objs, 3.0, stopFunc=lambda: o1._isReady() and o2._isReady()) o1.waitBinded() o2.waitBinded() o1._printStatus() assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() assert o1._isReady() assert o2._isReady() o1.addValue(150) o2.addValue(200) doTicks(objs, 10.0, stopFunc=lambda: o1.getCounter() == 350 and o2.getCounter() == 350) assert o1._isReady() assert o2._isReady() assert o1.getCounter() == 350 assert o2.getCounter() == 350 o1._destroy() o2._destroy() def test_leaderFallback(): random.seed(42) a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]], leaderFallbackTimeout=30.0) o2 = TestObj(a[1], [a[0]], leaderFallbackTimeout=30.0) objs = [o1, o2] assert not o1._isReady() assert not o2._isReady() doTicks(objs, 5.0, stopFunc=lambda: o1._isReady() and o2._isReady()) o1._SyncObj__conf.leaderFallbackTimeout = 3.0 o2._SyncObj__conf.leaderFallbackTimeout = 3.0 doTicks([o for o in objs if o._isLeader()], 2.0) assert o1._isLeader() or o2._isLeader() doTicks([o for o in objs if o._isLeader()], 2.0) assert not o1._isLeader() and not o2._isLeader() class ZeroDeployConsumerAlpha(SyncObjConsumer): @replicated(ver=1) def someMethod(self): pass @replicated def methodTwo(self): pass class ZeroDeployConsumerBravo(SyncObjConsumer): @replicated def alphaMethod(self): pass @replicated(ver=3) def methodTwo(self): pass class ZeroDeployTestObj(SyncObj): def __init__(self, selfAddr, otherAddrs, consumers): cfg = SyncObjConf(autoTick=False) super(ZeroDeployTestObj, self).__init__(selfAddr, otherAddrs, cfg, 
consumers=consumers) @replicated def someMethod(self): pass @replicated def otherMethod(self): pass @replicated(ver=1) def thirdMethod(self): pass @replicated(ver=2) def lastMethod(self): pass @replicated(ver=3) def lastMethod(self): pass def test_zeroDeployVersions(): random.seed(42) a = [getNextAddr()] cAlpha = ZeroDeployConsumerAlpha() cBravo = ZeroDeployConsumerBravo() o1 = ZeroDeployTestObj(a[0], [], [cAlpha, cBravo]) assert hasattr(o1, 'otherMethod_v0') == True assert hasattr(o1, 'lastMethod_v2') == True assert hasattr(o1, 'lastMethod_v3') == True assert hasattr(o1, 'lastMethod_v4') == False assert hasattr(cAlpha, 'methodTwo_v0') == True assert hasattr(cBravo, 'methodTwo_v3') == True assert o1._methodToID['lastMethod_v2'] > o1._methodToID['otherMethod_v0'] assert o1._methodToID['lastMethod_v3'] > o1._methodToID['lastMethod_v2'] assert o1._methodToID['lastMethod_v3'] > o1._methodToID['someMethod_v0'] assert o1._methodToID['thirdMethod_v1'] > o1._methodToID['someMethod_v0'] assert o1._methodToID['lastMethod_v2'] > o1._methodToID[(id(cAlpha), 'methodTwo_v0')] assert o1._methodToID[id(cBravo), 'methodTwo_v3'] > o1._methodToID['lastMethod_v2'] assert 'someMethod' not in o1._methodToID assert 'thirdMethod' not in o1._methodToID assert 'lastMethod' not in o1._methodToID def test_dnsResolverBug(monkeypatch): monkeypatch.setattr(dns_resolver, "monotonicTime", lambda: 0.0) resolver = dns_resolver.DnsCachingResolver(600, 30) ip = resolver.resolve('localhost') assert ip == '127.0.0.1' class MockSocket(object): def __init__(self, socket, numSuccessSends): self.socket = socket self.numSuccessSends = numSuccessSends def send(self, data): self.numSuccessSends -= 1 if self.numSuccessSends <= 0: return -100500 return self.socket.send(data) def close(self): return self.socket.close() def getsockopt(self, *args, **kwargs): return self.socket.getsockopt(*args, **kwargs) def recv(self, *args, **kwargs): return self.socket.recv(*args, **kwargs) def setMockSocket(o, numSuccess = 0): for readonlyNode in o._SyncObj__readonlyNodes: for node, conn in o._SyncObj__transport._connections.items(): if node == readonlyNode: origSocket = conn._TcpConnection__socket conn._TcpConnection__socket = MockSocket(origSocket, numSuccess) #origSend = origSocket.send #origSocket.send = lambda x: mockSend(origSend, x) #print("Set mock send") def test_readOnlyDrop(): random.seed(42) a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1]]) o2 = TestObj(a[1], [a[0]]) o3 = TestObj(None, [a[0], a[1]]) objs = [o1, o2, o3] assert not o1._isReady() assert not o2._isReady() assert not o3._isReady() doTicks(objs, 10.0, stopFunc=lambda: o1._isReady() and o2._isReady() and o3._isReady()) o1.waitBinded() o2.waitBinded() o1._printStatus() assert o1._getLeader().address in a assert o1._getLeader() == o2._getLeader() assert o1._isReady() assert o2._isReady() assert o3._isReady() o1.addValue(150) o2.addValue(200) doTicks(objs, 10.0, stopFunc=lambda: o1.getCounter() == 350 and o2.getCounter() == 350 and o3.getCounter() == 350) assert o1._isReady() assert o2._isReady() assert o1.getCounter() == 350 assert o2.getCounter() == 350 assert o3.getCounter() == 350 setMockSocket(o1, 1) setMockSocket(o2, 1) global _g_numSuccessSends _g_numSuccessSends = 0 for i in range(150): o1.addValue(1) for i in range(200): o2.addValue(1) doTicks(objs, 10.0, stopFunc=lambda: o1.getCounter() == 700 and o2.getCounter() == 700) assert o1.getCounter() == 700 assert o2.getCounter() == 700 o1._destroy() o2._destroy() o3._destroy() def test_filterParners(): 
random.seed(42) a = [getNextAddr(), getNextAddr()] o1 = TestObj(a[0], [a[1], a[0]]) assert len(o1._SyncObj__otherNodes) == 1 PySyncObj-0.3.14/test_zerodowntime/000077500000000000000000000000001475533247400172625ustar00rootroot00000000000000PySyncObj-0.3.14/test_zerodowntime/README.md000066400000000000000000000014561475533247400205470ustar00rootroot00000000000000test.py is a script to test zero-downtime upgrades between two versions of the code. Use `python3 test.py -h` to see its options. The basic operation is that test.py spawns a cluster on the local machine. This cluster is simply a distributed counter. test.py then sends increment commands to the cluster processes in a random (but controllable) way. One after the other, it also takes a process down, upgrades its code, and restarts it again a bit later. The other processes continue working, i.e. the cluster should still be functional ("zero downtime"). At the end, the Raft logs and counter values from all processes are compared to check that everything was working correctly. proc.py is the script executed by the individual processes spawned by test.py. It takes commands via stdin and replies via stdout. PySyncObj-0.3.14/test_zerodowntime/proc.py000066400000000000000000000027111475533247400206000ustar00rootroot00000000000000import pysyncobj import pysyncobj.testrevision import sys import time class MyCounter(pysyncobj.SyncObj): def __init__(self, selfAddr, otherAddrs, **kwargs): super(MyCounter, self).__init__(selfAddr, otherAddrs, **kwargs) self._counter = 0 @pysyncobj.replicated def incCounter(self): self._counter += 1 def getCounter(self): return self._counter def main(argv = sys.argv[1:]):#, stdin = sys.stdin): selfAddr = argv[0] otherAddrs = argv[1:] conf = pysyncobj.SyncObjConf() conf.journalFile = './journal' conf.fullDumpFile = './dump' counter = MyCounter(selfAddr, otherAddrs, conf = conf) print('{} ready at {}'.format(selfAddr, pysyncobj.testrevision.rev), file = sys.stderr) while True: line = sys.stdin.readline().strip() if line == 'wait': time.sleep(2) print('waited', flush = True) elif line == 'increment': while True: try: counter.incCounter(sync = True) except pysyncobj.SyncObjException as e: print('{} increment yielded SyncObjException with error code {}, retrying'.format(selfAddr, e.errorCode), file = sys.stderr) else: break print('incremented', flush = True) elif line == 'print': print(counter.getCounter(), flush = True) elif line == 'printlog': print(repr(counter._SyncObj__raftLog[:]).replace('\n', ' '), flush = True) elif line == 'quit' or line == '': break else: print('Got unknown command: {}'.format(line), file = sys.stderr) if __name__ == '__main__': main() PySyncObj-0.3.14/test_zerodowntime/test.py000066400000000000000000000244051475533247400206200ustar00rootroot00000000000000import argparse import contextlib import os import random import shutil import subprocess import sys import tempfile import time # Change directory context manager from https://stackoverflow.com/a/24176022 @contextlib.contextmanager def cd(newdir): prevdir = os.getcwd() os.chdir(os.path.expanduser(newdir)) try: yield finally: os.chdir(prevdir) # Parse arguments parser = argparse.ArgumentParser(formatter_class = argparse.ArgumentDefaultsHelpFormatter) parser.add_argument('revA', help = 'path or git revision for the "old" version. When it is a path, it must be the directory containing the pysyncobj package. 
When it is a git revision, the parent directory of the directory containing this script must be the git repository, and this repository must contain the revision (i.e. run this script from within the repository).') parser.add_argument('revB', help = 'path or git revision for the "new" version') parser.add_argument('cycles', nargs = '?', type = int, default = 120, help = 'Number of cycles to run; must be at least ten times the number of processes') parser.add_argument('processes', nargs = '?', type = int, default = 10, help = 'Number of parallel processes; must be at least 3') parser.add_argument('seed', nargs = '?', type = int, default = None, help = 'Seed for PRNG. Using the same seed value produces the exact same order of operations *in this test script*, i.e. outside of PySyncObj. Everything inside the cluster, e.g. which node is elected leader and when, is essentially still completely random.') args = parser.parse_args() if args.processes < 3: print('Testing with less than 3 processes makes no sense', file = sys.stderr) sys.exit(1) if args.cycles < args.processes * 10: print('Needs at least ten times as many cycles as there are processes to get useful results', file = sys.stderr) sys.exit(1) workingDir = os.path.abspath(os.path.dirname(__file__)) # Seed seed = args.seed if seed is None: seed = random.randint(0, 2**32 - 1) print('Seed: {}'.format(seed)) random.seed(seed) # Generate command to be executed at each cycle commands = [] # list of tuples (proc index, command) # Commands: # 'increment' -- send an increment command to the process, wait until it returns 'incremented' # 'compare' -- compare the value across all processes, verify that the majority has the same, expected value; proc index is irrelevant in this case # 'upgrade' -- quit the process, upgrade the code, restart the process for i in range(args.cycles): cmd = random.choice(('increment', 'increment', 'increment', 'increment', 'compare')) # 80 % increment, 20 % compare proc = random.randrange(args.processes) commands.append((proc, cmd)) upgrades = list(range(args.processes)) random.shuffle(upgrades) # First upgrade at 20 % of the cycles, last at 80 %, equal cycle distance between # This, combined with the cycles >= 10 * processes requirement, also ensures that the upgrades don't overlap. # Each upgrade takes 3 cycles plus the startup time of the new process, which shouldn't be much worse than 1-2 cycles. # 60 % of the cycles must therefore be at least 5 times the number of processes, i.e. cycles >= 5/0.6 * processes = 8.33 * processes. for i in range(args.processes): upgradeCycle = int((0.2 + 0.6 * i / (args.processes - 1)) * args.cycles) commands[upgradeCycle] = (upgrades[i], 'upgrade') # Ensure that this process doesn't receive any increment operations while it's upgrading for j in range(upgradeCycle, upgradeCycle + 3): if commands[j][1] == 'increment': while commands[j][0] == upgrades[i]: commands[j] = (random.randrange(args.processes), 'increment') # Generate node addresses addrs = ['127.0.0.1:{}'.format(42000 + i) for i in range(args.processes)] status = 0 # Set up temporary directory with tempfile.TemporaryDirectory() as tmpdirname: with cd(tmpdirname): os.mkdir('revA') os.mkdir('revB') # Check out revisions into the temporary directory for revArg, revTarget in ((args.revA, 'revA'), (args.revB, 'revB')): if os.path.isdir(os.path.join(workingDir, revArg)): # Copy directory contents to ./revTarget; I like rsync... 
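# (A plain directory argument is rsynced verbatim in the branch below; a git revision is extracted with git archive piped into tar in the else branch.)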
if subprocess.call(['rsync', '-a', os.path.join(workingDir, revArg, ''), os.path.join(revTarget, '')]) != 0: print('rsync of {} failed'.format(revTarget), file = sys.stderr) sys.exit(1) else: with cd(os.path.join(workingDir, '..')): #TODO: Replace with GIT_DIR environment variable or something gitProc = subprocess.Popen(['git', 'archive', revArg], stdout = subprocess.PIPE) tarProc = subprocess.Popen(['tar', '-x', '-C', os.path.join(tmpdirname, revTarget), '--strip-components', '1', 'pysyncobj'], stdin = gitProc.stdout) gitProc.stdout.close() tarProc.communicate() if tarProc.returncode != 0: print('git or tar of {} failed'.format(revTarget), file = sys.stderr) sys.exit(1) with open(os.path.join(revTarget, 'testrevision.py'), 'w') as fp: fp.write('rev = {!r}'.format(revTarget)) # Create each process's directory and initialise it with revision A for i in range(args.processes): os.mkdir('proc{}'.format(i)) os.mkdir(os.path.join('proc{}'.format(i), 'pysyncobj')) if subprocess.call(['rsync', '-a', os.path.join('revA', ''), os.path.join('proc{}'.format(i), 'pysyncobj', '')]) != 0: print('rsync of revA to proc{} failed'.format(i), file = sys.stderr) sys.exit(1) if subprocess.call(['rsync', '-a', os.path.join(workingDir, 'proc.py'), os.path.join('proc{}'.format(i), '')]) != 0: print('rsync of proc.py to proc{} failed'.format(i), file = sys.stderr) sys.exit(1) procs = [] try: # Launch processes for i in range(args.processes): with cd('proc{}'.format(i)): procs.append(subprocess.Popen(['python3', 'proc.py', addrs[i]] + [addrs[j] for j in range(args.processes) if j != i], stdin = subprocess.PIPE, stdout = subprocess.PIPE, bufsize = 0)) # Randomly run commands on the cluster and upgrade the processes one-by-one, ensuring that everything's still fine after each step counter = 0 # The expected value of the counter restart = -1 # Variable for when to restart a process; set to 3 on the 'upgrade' command, counted down on each command, the upgraded process is restarted when it reaches zero upgradingProcId = None # The procId that is currently upgrading for procId, command in commands: if command == 'increment': assert procId != upgradingProcId, "previous upgrade hasn't finished" print('Sending increment to proc{}'.format(procId)) # Send command procs[procId].stdin.write(b'increment\n') procs[procId].stdin.flush() # Wait until process is done with incrementing procs[procId].stdout.readline() counter += 1 elif command == 'compare': print('Comparing') # Compare the *logs* of the processes # Comparing the values of the counter doesn't work because the commands might not have been applied yet. # So if the values don't match, that doesn't mean that replication is broken. # The log reflects what's actually replicated. for i in range(args.processes): if i == upgradingProcId: continue procs[i].stdin.write(b'printlog\n') procs[i].stdin.flush() logs = [procs[i].stdout.readline().strip() if i != upgradingProcId else None for i in range(args.processes)] # Ensure that a majority of the logs are equal; note that this doesn't verify that all increments were actually replicated.
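# A log counts as the majority only if at least args.processes // 2 + 1 processes report an identical log; the entry for a process that is currently upgrading is None and therefore never matches.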
ok = False for i in range((args.processes + 1) // 2): count = 1 for j in range(i + 1, args.processes): if logs[i] == logs[j]: count += 1 if count >= args.processes // 2 + 1: ok = True break if not ok: print("Didn't find at least {} matching logs".format(args.processes // 2 + 1), file = sys.stderr) for i in range(args.processes): print('proc{} log: {}'.format(i, logs[i].decode('utf-8')), file = sys.stderr) sys.exit(1) elif command == 'upgrade': assert upgradingProcId is None, "previous upgrade hasn't finished" print('Taking down proc{} for upgrade'.format(procId)) # Let the process finish gracefully procs[procId].stdin.write(b'quit\n') procs[procId].stdin.flush() procs[procId].wait() # Delete revA code shutil.rmtree(os.path.join('proc{}'.format(procId), 'pysyncobj')) os.mkdir(os.path.join('proc{}'.format(procId), 'pysyncobj')) # Copy revB if subprocess.call(['rsync', '-a', os.path.join('revB', ''), os.path.join('proc{}'.format(procId), 'pysyncobj', '')]) != 0: print('rsync of revB to proc{} failed'.format(procId), file = sys.stderr) sys.exit(1) upgradingProcId = procId restart = 3 restart -= 1 if restart == 0: print('Restarting proc{}'.format(upgradingProcId)) with cd('proc{}'.format(upgradingProcId)): procs[upgradingProcId] = subprocess.Popen(['python3', 'proc.py', addrs[upgradingProcId]] + [addrs[j] for j in range(args.processes) if j != upgradingProcId], stdin = subprocess.PIPE, stdout = subprocess.PIPE, bufsize = 0) upgradingProcId = None print('Final comparison...') # Give the processes some time to catch up time.sleep(5) # Check that all logs are the same, and that all counter values are equal to the expected value for i in range(args.processes): procs[i].stdin.write(b'printlog\n') procs[i].stdin.flush() logs = [procs[i].stdout.readline().strip() for i in range(args.processes)] for i in range(args.processes): procs[i].stdin.write(b'print\n') procs[i].stdin.flush() counters = [int(procs[i].stdout.readline().strip()) for i in range(args.processes)] if not all(x == logs[0] for x in logs): print('ERROR: not all logs are equal', file = sys.stderr) for i in range(args.processes): print('proc{} log: {}'.format(i, logs[i].decode('utf-8')), file = sys.stderr) status = 1 elif not all(x == counter for x in counters): print('ERROR: not all counters are equal to the expected value {}: {}'.format(counter, counters), file = sys.stderr) status = 1 else: print('OK', file = sys.stderr) print('Sending quit command', file = sys.stderr) for i in range(args.processes): procs[i].stdin.write(b'quit\n') for i in range(args.processes): procs[i].communicate() except: print('Killing processes', file = sys.stderr) for proc in procs: proc.kill() raise sys.exit(status)
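# Example invocation (hypothetical revision names; any two git revisions of this
# repository, or two checkout directories containing the pysyncobj package, work):
#   python3 test.py 0.3.13 0.3.14 120 10
# This runs 120 cycles against a 10-process cluster, upgrading each process from
# revision 0.3.13 to 0.3.14 one after the other between 20 % and 80 % of the run.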