././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1612316569.724291 portalocker-2.2.1/0000755000076500000240000000000000000000000014115 5ustar00rickstaff00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1611419668.0 portalocker-2.2.1/CHANGELOG.rst0000644000076500000240000000256300000000000016144 0ustar00rickstaff00000000000000For newer changes please look at the comments for the Git tags: https://github.com/WoLpH/portalocker/tags For more details the commit log for the master branch could be useful: https://github.com/WoLpH/portalocker/commits/master 1.5: * Moved tests to prevent collisions with other packages 1.4: * Added optional file open parameters 1.3: * Improved documentation * Added file handle to locking exceptions 1.2: * Added signed releases and tags to PyPI and Git 1.1: * Added support for Python 3.6+ * Using real time to calculate timeout 1.0: * Complete code refactor. - Splitting of code in logical classes - 100% test coverage and change in API behaviour - The default behavior of the `Lock` class has changed to append instead of write/truncate. 0.6: * Added msvcrt support for Windows 0.5: * Python 3 support 0.4: * Fixing a few bugs, added coveralls support, switched to py.test and added 100% test coverage. - Fixing exception thrown when fail_when_locked is true - Fixing exception "Lock object has no attribute '_release_lock'" when fail_when_locked is true due to the call to Lock._release_lock() which fails because _release_lock is not defined. 0.3: * Now actually returning the file descriptor from the `Lock` class 0.2: * Added `Lock` class to help prevent cache race conditions 0.1: * Initial release ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1520450718.0 portalocker-2.2.1/LICENSE0000644000076500000240000000456300000000000015132 0ustar00rickstaff00000000000000PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2 -------------------------------------------- 1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"), and the Individual or Organization ("Licensee") accessing and otherwise using this software ("Python") in source or binary form and its associated documentation. 2. Subject to the terms and conditions of this License Agreement, PSF hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use Python alone or in any derivative version, provided, however, that PSF's License Agreement and PSF's notice of copyright, i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010 Python Software Foundation; All Rights Reserved" are retained in Python alone or in any derivative version prepared by Licensee. 3. In the event Licensee prepares a derivative work that is based on or incorporates Python or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to Python. 4. PSF is making Python available to Licensee on an "AS IS" basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT INFRINGE ANY THIRD PARTY RIGHTS. 5. 
PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. 6. This License Agreement will automatically terminate upon a material breach of its terms and conditions. 7. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between PSF and Licensee. This License Agreement does not grant permission to use PSF trademarks or trade name in a trademark sense to endorse or promote products or services of Licensee, or any third party. 8. By copying, installing or otherwise using Python, Licensee agrees to be bound by the terms and conditions of this License Agreement. ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1573518153.0 portalocker-2.2.1/MANIFEST.in0000644000076500000240000000014200000000000015650 0ustar00rickstaff00000000000000include CHANGELOG.rst include README.rst include LICENSE recursive-include portalocker_tests *.py ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1612316569.7244751 portalocker-2.2.1/PKG-INFO0000644000076500000240000002066200000000000015220 0ustar00rickstaff00000000000000Metadata-Version: 2.1 Name: portalocker Version: 2.2.1 Summary: Wraps the portalocker recipe for easy usage Home-page: https://github.com/WoLpH/portalocker Author: Rick van Hattem Author-email: wolph@wol.ph License: PSF Description: ############################################ portalocker - Cross-platform locking library ############################################ .. image:: https://travis-ci.com/WoLpH/portalocker.svg?branch=master :alt: Linux Test Status :target: https://travis-ci.com/WoLpH/portalocker .. image:: https://ci.appveyor.com/api/projects/status/mgqry98hgpy4prhh?svg=true :alt: Windows Tests Status :target: https://ci.appveyor.com/project/WoLpH/portalocker .. image:: https://coveralls.io/repos/WoLpH/portalocker/badge.svg?branch=master :alt: Coverage Status :target: https://coveralls.io/r/WoLpH/portalocker?branch=master Overview -------- Portalocker is a library to provide an easy API to file locking. An important detail to note is that on Linux and Unix systems the locks are advisory by default. By specifying the `-o mand` option to the mount command it is possible to enable mandatory file locking on Linux. This is generally not recommended however. For more information about the subject: - https://en.wikipedia.org/wiki/File_locking - http://stackoverflow.com/questions/39292051/portalocker-does-not-seem-to-lock - https://stackoverflow.com/questions/12062466/mandatory-file-lock-on-linux The module is currently maintained by Rick van Hattem . The project resides at https://github.com/WoLpH/portalocker . Bugs and feature requests can be submitted there. Patches are also very welcome. Redis Locks ----------- This library now features a lock based on Redis which allows for locks across multiple threads, processes and even distributed locks across multiple computers. It is an extremely reliable Redis lock that is based on pubsub. As opposed to most Redis locking systems based on key/value pairs, this locking method is based on the pubsub system. The big advantage is that if the connection gets killed due to network issues, crashing processes or otherwise, it will still immediately unlock instead of waiting for a lock timeout. 
Usage is really easy: :: import portalocker lock = portalocker.RedisLock('some_lock_channel_name') with lock: print('do something here') The API is essentially identical to the other ``Lock`` classes so in addition to the ``with`` statement you can also use ``lock.acquire(...)``. Python 2 -------- Python 2 was supported in versions before Portalocker 2.0. If you are still using Python 2, you can run this to install: :: pip install "portalocker<2" Tips ---- On some networked filesystems it might be needed to force a `os.fsync()` before closing the file so it's actually written before another client reads the file. Effectively this comes down to: :: with portalocker.Lock('some_file', 'rb+', timeout=60) as fh: # do what you need to do ... # flush and sync to filesystem fh.flush() os.fsync(fh.fileno()) Links ----- * Documentation - http://portalocker.readthedocs.org/en/latest/ * Source - https://github.com/WoLpH/portalocker * Bug reports - https://github.com/WoLpH/portalocker/issues * Package homepage - https://pypi.python.org/pypi/portalocker * My blog - http://w.wol.ph/ Examples -------- To make sure your cache generation scripts don't race, use the `Lock` class: >>> import portalocker >>> with portalocker.Lock('somefile', timeout=1) as fh: ... print >>fh, 'writing some stuff to my cache...' To customize the opening and locking a manual approach is also possible: >>> import portalocker >>> file = open('somefile', 'r+') >>> portalocker.lock(file, portalocker.EXCLUSIVE) >>> file.seek(12) >>> file.write('foo') >>> file.close() Explicitly unlocking is not needed in most cases but omitting it has been known to cause issues: >>> import portalocker >>> with portalocker.Lock('somefile', timeout=1) as fh: ... print >>fh, 'writing some stuff to my cache...' To customize the opening and locking a manual approach is also possible: >>> import portalocker >>> file = open('somefile', 'r+') >>> portalocker.lock(file, portalocker.EXCLUSIVE) >>> file.seek(12) >>> file.write('foo') >>> file.close() Explicitly unlocking is not needed in most cases but omitting it has been known to cause issues: >>> import portalocker >>> with portalocker.Lock('somefile', timeout=1) as fh: ... print >>fh, 'writing some stuff to my cache...' To customize the opening and locking a manual approach is also possible: >>> import portalocker >>> file = open('somefile', 'r+') >>> portalocker.lock(file, portalocker.LOCK_EX) >>> file.seek(12) >>> file.write('foo') >>> file.close() Explicitly unlocking is not needed in most cases but omitting it has been known to cause issues: https://github.com/AzureAD/microsoft-authentication-extensions-for-python/issues/42#issuecomment-601108266 If needed, it can be done through: >>> portalocker.unlock(file) Do note that your data might still be in a buffer so it is possible that your data is not available until you `flush()` or `close()`. To create a cross platform bounded semaphore across multiple processes you can use the `BoundedSemaphore` class which functions somewhat similar to `threading.BoundedSemaphore`: >>> import portalocker >>> n = 2 >>> timeout = 0.1 >>> semaphore_a = portalocker.BoundedSemaphore(n, timeout=timeout) >>> semaphore_b = portalocker.BoundedSemaphore(n, timeout=timeout) >>> semaphore_c = portalocker.BoundedSemaphore(n, timeout=timeout) >>> semaphore_a.acquire() >>> semaphore_b.acquire() >>> semaphore_c.acquire() Traceback (most recent call last): ... portalocker.exceptions.AlreadyLocked More examples can be found in the `tests `_. 
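One pattern not shown above is a shared (read) lock. Multiple processes can hold a shared lock on the same file at once, while an exclusive lock excludes them. The snippet below is a minimal sketch using the module-level ``lock``/``unlock`` functions and assumes ``somefile`` already exists:

::

    import portalocker

    with open('somefile', 'r') as fh:
        # A shared lock allows other readers; LOCK_NB raises LockException
        # immediately instead of blocking if someone holds an exclusive lock.
        portalocker.lock(fh, portalocker.LOCK_SH | portalocker.LOCK_NB)
        data = fh.read()
        portalocker.unlock(fh)
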
Changelog --------- Every release has a ``git tag`` with a commit message for the tag explaining what was added and/or changed. The list of tags/releases including the commit messages can be found here: https://github.com/WoLpH/portalocker/releases License ------- See the `LICENSE `_ file. Keywords: locking,locks,with statement,windows,linux,unix Platform: any Classifier: Intended Audience :: Developers Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3.3 Classifier: Programming Language :: Python :: 3.4 Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Programming Language :: Python :: Implementation :: PyPy Provides-Extra: docs Provides-Extra: tests Provides-Extra: redis ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1611970076.0 portalocker-2.2.1/README.rst0000644000076500000240000001377100000000000015615 0ustar00rickstaff00000000000000############################################ portalocker - Cross-platform locking library ############################################ .. image:: https://travis-ci.com/WoLpH/portalocker.svg?branch=master :alt: Linux Test Status :target: https://travis-ci.com/WoLpH/portalocker .. image:: https://ci.appveyor.com/api/projects/status/mgqry98hgpy4prhh?svg=true :alt: Windows Tests Status :target: https://ci.appveyor.com/project/WoLpH/portalocker .. image:: https://coveralls.io/repos/WoLpH/portalocker/badge.svg?branch=master :alt: Coverage Status :target: https://coveralls.io/r/WoLpH/portalocker?branch=master Overview -------- Portalocker is a library to provide an easy API to file locking. An important detail to note is that on Linux and Unix systems the locks are advisory by default. By specifying the `-o mand` option to the mount command it is possible to enable mandatory file locking on Linux. This is generally not recommended however. For more information about the subject: - https://en.wikipedia.org/wiki/File_locking - http://stackoverflow.com/questions/39292051/portalocker-does-not-seem-to-lock - https://stackoverflow.com/questions/12062466/mandatory-file-lock-on-linux The module is currently maintained by Rick van Hattem . The project resides at https://github.com/WoLpH/portalocker . Bugs and feature requests can be submitted there. Patches are also very welcome. Redis Locks ----------- This library now features a lock based on Redis which allows for locks across multiple threads, processes and even distributed locks across multiple computers. It is an extremely reliable Redis lock that is based on pubsub. As opposed to most Redis locking systems based on key/value pairs, this locking method is based on the pubsub system. The big advantage is that if the connection gets killed due to network issues, crashing processes or otherwise, it will still immediately unlock instead of waiting for a lock timeout. Usage is really easy: :: import portalocker lock = portalocker.RedisLock('some_lock_channel_name') with lock: print('do something here') The API is essentially identical to the other ``Lock`` classes so in addition to the ``with`` statement you can also use ``lock.acquire(...)``. Python 2 -------- Python 2 was supported in versions before Portalocker 2.0. 
If you are still using Python 2, you can run this to install:

::

    pip install "portalocker<2"

Tips
----

On some networked filesystems you may need to force an `os.fsync()` before
closing the file so it's actually written before another client reads the
file. Effectively this comes down to:

::

    with portalocker.Lock('some_file', 'rb+', timeout=60) as fh:
        # do what you need to do
        ...

        # flush and sync to filesystem
        fh.flush()
        os.fsync(fh.fileno())

Links
-----

* Documentation - http://portalocker.readthedocs.org/en/latest/
* Source - https://github.com/WoLpH/portalocker
* Bug reports - https://github.com/WoLpH/portalocker/issues
* Package homepage - https://pypi.python.org/pypi/portalocker
* My blog - http://w.wol.ph/

Examples
--------

To make sure your cache generation scripts don't race, use the `Lock` class:

>>> import portalocker
>>> with portalocker.Lock('somefile', timeout=1) as fh:
...     print('writing some stuff to my cache...', file=fh)

To customize the opening and locking a manual approach is also possible:

>>> import portalocker
>>> file = open('somefile', 'r+')
>>> portalocker.lock(file, portalocker.LOCK_EX)
>>> file.seek(12)
>>> file.write('foo')
>>> file.close()

Explicitly unlocking is not needed in most cases but omitting it has been
known to cause issues:
https://github.com/AzureAD/microsoft-authentication-extensions-for-python/issues/42#issuecomment-601108266

If needed, it can be done through:

>>> portalocker.unlock(file)

Do note that your data might still be in a buffer so it is possible that your
data is not available until you `flush()` or `close()`.

To create a cross-platform bounded semaphore across multiple processes you can
use the `BoundedSemaphore` class which functions somewhat similarly to
`threading.BoundedSemaphore`:

>>> import portalocker
>>> n = 2
>>> timeout = 0.1

>>> semaphore_a = portalocker.BoundedSemaphore(n, timeout=timeout)
>>> semaphore_b = portalocker.BoundedSemaphore(n, timeout=timeout)
>>> semaphore_c = portalocker.BoundedSemaphore(n, timeout=timeout)

>>> semaphore_a.acquire()
>>> semaphore_b.acquire()
>>> semaphore_c.acquire()
Traceback (most recent call last):
  ...
portalocker.exceptions.AlreadyLocked

More examples can be found in the `tests `_.

Changelog
---------

Every release has a ``git tag`` with a commit message for the tag explaining
what was added and/or changed. The list of tags/releases including the commit
messages can be found here: https://github.com/WoLpH/portalocker/releases

License
-------

See the `LICENSE `_ file.
././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1612316569.7212613 portalocker-2.2.1/portalocker/0000755000076500000240000000000000000000000016442 5ustar00rickstaff00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1612314659.0 portalocker-2.2.1/portalocker/__about__.py0000644000076500000240000000034700000000000020726 0ustar00rickstaff00000000000000__package_name__ = 'portalocker' __author__ = 'Rick van Hattem' __email__ = 'wolph@wol.ph' __version__ = '2.2.1' __description__ = '''Wraps the portalocker recipe for easy usage''' __url__ = 'https://github.com/WoLpH/portalocker' ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1612314659.0 portalocker-2.2.1/portalocker/__init__.py0000644000076500000240000000413400000000000020555 0ustar00rickstaff00000000000000from . import __about__ from . import constants from . import exceptions from . import portalocker from . import utils try: # pragma: no cover from .redis import RedisLock except ImportError: # pragma: no cover RedisLock = None # type: ignore #: The package name on Pypi __package_name__ = __about__.__package_name__ #: Current author and maintainer, view the git history for the previous ones __author__ = __about__.__author__ #: Current author's email address __email__ = __about__.__email__ #: Version number __version__ = '2.2.1' #: Package description for Pypi __description__ = __about__.__description__ #: Package homepage __url__ = __about__.__url__ #: Exception thrown when the file is already locked by someone else AlreadyLocked = exceptions.AlreadyLocked #: Exception thrown if an error occurred during locking LockException = exceptions.LockException #: Lock a file. Note that this is an advisory lock on Linux/Unix systems lock = portalocker.lock #: Unlock a file unlock = portalocker.unlock #: Place an exclusive lock. #: Only one process may hold an exclusive lock for a given file at a given #: time. LOCK_EX: constants.LockFlags = constants.LockFlags.EXCLUSIVE #: Place a shared lock. #: More than one process may hold a shared lock for a given file at a given #: time. LOCK_SH: constants.LockFlags = constants.LockFlags.SHARED #: Acquire the lock in a non-blocking fashion. LOCK_NB: constants.LockFlags = constants.LockFlags.NON_BLOCKING #: Remove an existing lock held by this process. 
LOCK_UN: constants.LockFlags = constants.LockFlags.UNBLOCK #: Locking flags enum LockFlags = constants.LockFlags #: Locking utility class to automatically handle opening with timeouts and #: context wrappers Lock = utils.Lock RLock = utils.RLock BoundedSemaphore = utils.BoundedSemaphore TemporaryFileLock = utils.TemporaryFileLock open_atomic = utils.open_atomic __all__ = [ 'lock', 'unlock', 'LOCK_EX', 'LOCK_SH', 'LOCK_NB', 'LOCK_UN', 'LockFlags', 'LockException', 'Lock', 'RLock', 'AlreadyLocked', 'BoundedSemaphore', 'open_atomic', 'RedisLock', ] ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1611419668.0 portalocker-2.2.1/portalocker/constants.py0000644000076500000240000000206600000000000021034 0ustar00rickstaff00000000000000''' Locking constants Lock types: - `EXCLUSIVE` exclusive lock - `SHARED` shared lock Lock flags: - `NON_BLOCKING` non-blocking Manually unlock, only needed internally - `UNBLOCK` unlock ''' import enum import os # The actual tests will execute the code anyhow so the following code can # safely be ignored from the coverage tests if os.name == 'nt': # pragma: no cover import msvcrt LOCK_EX = 0x1 #: exclusive lock LOCK_SH = 0x2 #: shared lock LOCK_NB = 0x4 #: non-blocking LOCK_UN = msvcrt.LK_UNLCK #: unlock elif os.name == 'posix': # pragma: no cover import fcntl LOCK_EX = fcntl.LOCK_EX #: exclusive lock LOCK_SH = fcntl.LOCK_SH #: shared lock LOCK_NB = fcntl.LOCK_NB #: non-blocking LOCK_UN = fcntl.LOCK_UN #: unlock else: # pragma: no cover raise RuntimeError('PortaLocker only defined for nt and posix platforms') class LockFlags(enum.IntFlag): EXCLUSIVE = LOCK_EX #: exclusive lock SHARED = LOCK_SH #: shared lock NON_BLOCKING = LOCK_NB #: non-blocking UNBLOCK = LOCK_UN #: unlock ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1611419668.0 portalocker-2.2.1/portalocker/exceptions.py0000644000076500000240000000053400000000000021177 0ustar00rickstaff00000000000000class BaseLockException(Exception): # Error codes: LOCK_FAILED = 1 def __init__(self, *args, fh=None, **kwargs): self.fh = fh Exception.__init__(self, *args, **kwargs) class LockException(BaseLockException): pass class AlreadyLocked(BaseLockException): pass class FileToLarge(BaseLockException): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1611419668.0 portalocker-2.2.1/portalocker/portalocker.py0000644000076500000240000001306700000000000021350 0ustar00rickstaff00000000000000import os import sys import typing from . import constants from . import exceptions if os.name == 'nt': # pragma: no cover import win32con import win32file import pywintypes import winerror import msvcrt __overlapped = pywintypes.OVERLAPPED() if sys.version_info.major == 2: lock_length = -1 else: lock_length = int(2**31 - 1) def lock(file_: typing.IO, flags: constants.LockFlags): if flags & constants.LockFlags.SHARED: if sys.version_info.major == 2: if flags & constants.LockFlags.NON_BLOCKING: mode = win32con.LOCKFILE_FAIL_IMMEDIATELY else: mode = 0 else: if flags & constants.LockFlags.NON_BLOCKING: mode = msvcrt.LK_NBRLCK else: mode = msvcrt.LK_RLCK # is there any reason not to reuse the following structure? 
hfile = win32file._get_osfhandle(file_.fileno()) try: win32file.LockFileEx(hfile, mode, 0, -0x10000, __overlapped) except pywintypes.error as exc_value: # error: (33, 'LockFileEx', 'The process cannot access the file # because another process has locked a portion of the file.') if exc_value.winerror == winerror.ERROR_LOCK_VIOLATION: raise exceptions.LockException( exceptions.LockException.LOCK_FAILED, exc_value.strerror, fh=file_) else: # Q: Are there exceptions/codes we should be dealing with # here? raise else: if flags & constants.LockFlags.NON_BLOCKING: mode = msvcrt.LK_NBLCK else: mode = msvcrt.LK_LOCK # windows locks byte ranges, so make sure to lock from file start try: savepos = file_.tell() if savepos: # [ ] test exclusive lock fails on seek here # [ ] test if shared lock passes this point file_.seek(0) # [x] check if 0 param locks entire file (not documented in # Python) # [x] fails with "IOError: [Errno 13] Permission denied", # but -1 seems to do the trick try: msvcrt.locking(file_.fileno(), mode, lock_length) except IOError as exc_value: # [ ] be more specific here raise exceptions.LockException( exceptions.LockException.LOCK_FAILED, exc_value.strerror, fh=file_) finally: if savepos: file_.seek(savepos) except IOError as exc_value: raise exceptions.LockException( exceptions.LockException.LOCK_FAILED, exc_value.strerror, fh=file_) def unlock(file_: typing.IO): try: savepos = file_.tell() if savepos: file_.seek(0) try: msvcrt.locking(file_.fileno(), constants.LockFlags.UNBLOCK, lock_length) except IOError as exc: exception = exc if exc.strerror == 'Permission denied': hfile = win32file._get_osfhandle(file_.fileno()) try: win32file.UnlockFileEx( hfile, 0, -0x10000, __overlapped) except pywintypes.error as exc: exception = exc if exc.winerror == winerror.ERROR_NOT_LOCKED: # error: (158, 'UnlockFileEx', # 'The segment is already unlocked.') # To match the 'posix' implementation, silently # ignore this error pass else: # Q: Are there exceptions/codes we should be # dealing with here? raise else: raise exceptions.LockException( exceptions.LockException.LOCK_FAILED, exception.strerror, fh=file_) finally: if savepos: file_.seek(savepos) except IOError as exc: raise exceptions.LockException( exceptions.LockException.LOCK_FAILED, exc.strerror, fh=file_) elif os.name == 'posix': # pragma: no cover import fcntl def lock(file_: typing.IO, flags: constants.LockFlags): locking_exceptions = IOError, try: # pragma: no cover locking_exceptions += BlockingIOError, # type: ignore except NameError: # pragma: no cover pass try: fcntl.flock(file_.fileno(), flags) except locking_exceptions as exc_value: # The exception code varies on different systems so we'll catch # every IO error raise exceptions.LockException(exc_value, fh=file_) def unlock(file_: typing.IO, ): fcntl.flock(file_.fileno(), constants.LockFlags.UNBLOCK) else: # pragma: no cover raise RuntimeError('PortaLocker only defined for nt and posix platforms') ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1612314659.0 portalocker-2.2.1/portalocker/redis.py0000644000076500000240000001774500000000000020140 0ustar00rickstaff00000000000000import _thread import json import logging import random import time import typing from typing import Any from typing import Dict from redis import client from . import exceptions from . 
import utils logger = logging.getLogger(__name__) DEFAULT_UNAVAILABLE_TIMEOUT = 1 DEFAULT_THREAD_SLEEP_TIME = 0.1 class PubSubWorkerThread(client.PubSubWorkerThread): def run(self): try: super().run() except Exception: # pragma: no cover _thread.interrupt_main() raise class RedisLock(utils.LockBase): ''' An extremely reliable Redis lock based on pubsub with a keep-alive thread As opposed to most Redis locking systems based on key/value pairs, this locking method is based on the pubsub system. The big advantage is that if the connection gets killed due to network issues, crashing processes or otherwise, it will still immediately unlock instead of waiting for a lock timeout. To make sure both sides of the lock know about the connection state it is recommended to set the `health_check_interval` when creating the redis connection.. Args: channel: the redis channel to use as locking key. connection: an optional redis connection if you already have one or if you need to specify the redis connection timeout: timeout when trying to acquire a lock check_interval: check interval while waiting fail_when_locked: after the initial lock failed, return an error or lock the file. This does not wait for the timeout. thread_sleep_time: sleep time between fetching messages from redis to prevent a busy/wait loop. In the case of lock conflicts this increases the time it takes to resolve the conflict. This should be smaller than the `check_interval` to be useful. unavailable_timeout: If the conflicting lock is properly connected this should never exceed twice your redis latency. Note that this will increase the wait time possibly beyond your `timeout` and is always executed if a conflict arises. redis_kwargs: The redis connection arguments if no connection is given. The `DEFAULT_REDIS_KWARGS` are used as default, if you want to override these you need to explicitly specify a value (e.g. 
`health_check_interval=0`) ''' redis_kwargs: Dict[str, Any] thread: typing.Optional[PubSubWorkerThread] channel: str timeout: float connection: typing.Optional[client.Redis] pubsub: typing.Optional[client.PubSub] = None close_connection: bool DEFAULT_REDIS_KWARGS = dict( health_check_interval=10, ) def __init__( self, channel: str, connection: typing.Optional[client.Redis] = None, timeout: typing.Optional[float] = None, check_interval: typing.Optional[float] = None, fail_when_locked: typing.Optional[bool] = False, thread_sleep_time: float = DEFAULT_THREAD_SLEEP_TIME, unavailable_timeout: float = DEFAULT_UNAVAILABLE_TIMEOUT, redis_kwargs: typing.Optional[typing.Dict] = None, ): # We don't want to close connections given as an argument self.close_connection = not connection self.thread = None self.channel = channel self.connection = connection self.thread_sleep_time = thread_sleep_time self.unavailable_timeout = unavailable_timeout self.redis_kwargs = redis_kwargs or dict() for key, value in self.DEFAULT_REDIS_KWARGS.items(): self.redis_kwargs.setdefault(key, value) super(RedisLock, self).__init__(timeout=timeout, check_interval=check_interval, fail_when_locked=fail_when_locked) def get_connection(self) -> client.Redis: if not self.connection: self.connection = client.Redis(**self.redis_kwargs) return self.connection def channel_handler(self, message): if message.get('type') != 'message': # pragma: no cover return try: data = json.loads(message.get('data')) except TypeError: # pragma: no cover logger.debug('TypeError while parsing: %r', message) return self.connection.publish(data['response_channel'], str(time.time())) @property def client_name(self): return self.channel + '-lock' def acquire( self, timeout: float = None, check_interval: float = None, fail_when_locked: typing.Optional[bool] = None): timeout = utils.coalesce(timeout, self.timeout, 0.0) check_interval = utils.coalesce(check_interval, self.check_interval, 0.0) fail_when_locked = utils.coalesce(fail_when_locked, self.fail_when_locked) assert not self.pubsub, 'This lock is already active' connection = self.get_connection() timeout_generator = self._timeout_generator(timeout, check_interval) for _ in timeout_generator: # pragma: no branch subscribers = connection.pubsub_numsub(self.channel)[0][1] if subscribers: logger.debug('Found %d lock subscribers for %s', subscribers, self.channel) if self.check_or_kill_lock( connection, self.unavailable_timeout): # pragma: no branch continue else: # pragma: no cover subscribers = None # Note: this should not be changed to an elif because the if # above can still end up here if not subscribers: connection.client_setname(self.client_name) self.pubsub = connection.pubsub() self.pubsub.subscribe(**{self.channel: self.channel_handler}) self.thread = PubSubWorkerThread( self.pubsub, sleep_time=self.thread_sleep_time) self.thread.start() subscribers = connection.pubsub_numsub(self.channel)[0][1] if subscribers == 1: # pragma: no branch return self else: # pragma: no cover # Race condition, let's try again self.release() if fail_when_locked: # pragma: no cover raise exceptions.AlreadyLocked(exceptions) raise exceptions.AlreadyLocked(exceptions) def check_or_kill_lock(self, connection, timeout): # Random channel name to get messages back from the lock response_channel = f'{self.channel}-{random.random()}' pubsub = connection.pubsub() pubsub.subscribe(response_channel) connection.publish(self.channel, json.dumps(dict( response_channel=response_channel, message='ping', ))) check_interval = 
min(self.thread_sleep_time, timeout / 10) for _ in self._timeout_generator( timeout, check_interval): # pragma: no branch message = pubsub.get_message(timeout=check_interval) if message: # pragma: no branch pubsub.close() return True for client_ in connection.client_list('pubsub'): # pragma: no cover if client_.get('name') == self.client_name: logger.warning( 'Killing unavailable redis client: %r', client_) connection.client_kill_filter(client_.get('id')) def release(self): if self.thread: # pragma: no branch self.thread.stop() self.thread.join() self.thread = None time.sleep(0.01) if self.pubsub: # pragma: no branch self.pubsub.unsubscribe(self.channel) self.pubsub.close() self.pubsub = None def __del__(self): self.release() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1611969105.0 portalocker-2.2.1/portalocker/utils.py0000644000076500000240000003267700000000000020173 0ustar00rickstaff00000000000000import abc import atexit import contextlib import os import pathlib import random import tempfile import time import typing from . import constants from . import exceptions from . import portalocker DEFAULT_TIMEOUT = 5 DEFAULT_CHECK_INTERVAL = 0.25 DEFAULT_FAIL_WHEN_LOCKED = False LOCK_METHOD = constants.LockFlags.EXCLUSIVE | constants.LockFlags.NON_BLOCKING __all__ = [ 'Lock', 'open_atomic', ] Filename = typing.Union[str, pathlib.Path] def coalesce(*args, test_value=None): '''Simple coalescing function that returns the first value that is not equal to the `test_value`. Or `None` if no value is valid. Usually this means that the last given value is the default value. Note that the `test_value` is compared using an identity check (i.e. `value is not test_value`) so changing the `test_value` won't work for all values. >>> coalesce(None, 1) 1 >>> coalesce() >>> coalesce(0, False, True) 0 >>> coalesce(0, False, True, test_value=0) False # This won't work because of the `is not test_value` type testing: >>> coalesce([], dict(spam='eggs'), test_value=[]) [] ''' for arg in args: if arg is not test_value: return arg @contextlib.contextmanager def open_atomic(filename: Filename, binary: bool = True): '''Open a file for atomic writing. Instead of locking this method allows you to write the entire file and move it to the actual location. Note that this makes the assumption that a rename is atomic on your platform which is generally the case but not a guarantee. http://docs.python.org/library/os.html#os.rename >>> filename = 'test_file.txt' >>> if os.path.exists(filename): ... os.remove(filename) >>> with open_atomic(filename) as fh: ... written = fh.write(b'test') >>> assert os.path.exists(filename) >>> os.remove(filename) >>> import pathlib >>> path_filename = pathlib.Path('test_file.txt') >>> with open_atomic(path_filename) as fh: ... 
written = fh.write(b'test') >>> assert path_filename.exists() >>> path_filename.unlink() ''' # `pathlib.Path` cast in case `path` is a `str` path: pathlib.Path = pathlib.Path(filename) assert not path.exists(), '%r exists' % path # Create the parent directory if it doesn't exist path.parent.mkdir(parents=True, exist_ok=True) temp_fh = tempfile.NamedTemporaryFile( mode=binary and 'wb' or 'w', dir=str(path.parent), delete=False, ) yield temp_fh temp_fh.flush() os.fsync(temp_fh.fileno()) temp_fh.close() try: os.rename(temp_fh.name, path) finally: try: os.remove(temp_fh.name) except Exception: pass class LockBase(abc.ABC): # pragma: no cover #: timeout when trying to acquire a lock timeout: float #: check interval while waiting for `timeout` check_interval: float #: skip the timeout and immediately fail if the initial lock fails fail_when_locked: bool def __init__(self, timeout: typing.Optional[float] = None, check_interval: typing.Optional[float] = None, fail_when_locked: typing.Optional[bool] = None): self.timeout = coalesce(timeout, DEFAULT_TIMEOUT) self.check_interval = coalesce(check_interval, DEFAULT_CHECK_INTERVAL) self.fail_when_locked = coalesce(fail_when_locked, DEFAULT_FAIL_WHEN_LOCKED) @abc.abstractmethod def acquire( self, timeout: float = None, check_interval: float = None, fail_when_locked: bool = None): return NotImplemented def _timeout_generator(self, timeout, check_interval): timeout = coalesce(timeout, self.timeout, 0.0) check_interval = coalesce(check_interval, self.check_interval, 0.0) yield 0 i = 0 start_time = time.perf_counter() while start_time + timeout > time.perf_counter(): i += 1 yield i # Take low lock checks into account to stay within the interval since_start_time = time.perf_counter() - start_time time.sleep(max(0.001, (i * check_interval) - since_start_time)) @abc.abstractmethod def release(self): return NotImplemented def __enter__(self): return self.acquire() def __exit__(self, type_, value, tb): self.release() def __delete__(self, instance): instance.release() class Lock(LockBase): '''Lock manager with build-in timeout Args: filename: filename mode: the open mode, 'a' or 'ab' should be used for writing truncate: use truncate to emulate 'w' mode, None is disabled, 0 is truncate to 0 bytes timeout: timeout when trying to acquire a lock check_interval: check interval while waiting fail_when_locked: after the initial lock failed, return an error or lock the file. This does not wait for the timeout. **file_open_kwargs: The kwargs for the `open(...)` call fail_when_locked is useful when multiple threads/processes can race when creating a file. If set to true than the system will wait till the lock was acquired and then return an AlreadyLocked exception. Note that the file is opened first and locked later. So using 'w' as mode will result in truncate _BEFORE_ the lock is checked. 
''' def __init__( self, filename: Filename, mode: str = 'a', timeout: float = DEFAULT_TIMEOUT, check_interval: float = DEFAULT_CHECK_INTERVAL, fail_when_locked: bool = DEFAULT_FAIL_WHEN_LOCKED, flags: constants.LockFlags = LOCK_METHOD, **file_open_kwargs): if 'w' in mode: truncate = True mode = mode.replace('w', 'a') else: truncate = False self.fh: typing.Optional[typing.IO] = None self.filename: str = str(filename) self.mode: str = mode self.truncate: bool = truncate self.timeout: float = timeout self.check_interval: float = check_interval self.fail_when_locked: bool = fail_when_locked self.flags: constants.LockFlags = flags self.file_open_kwargs = file_open_kwargs def acquire( self, timeout: float = None, check_interval: float = None, fail_when_locked: bool = None) -> typing.IO: '''Acquire the locked filehandle''' fail_when_locked = coalesce(fail_when_locked, self.fail_when_locked) # If we already have a filehandle, return it fh = self.fh if fh: return fh # Get a new filehandler fh = self._get_fh() def try_close(): # pragma: no cover # Silently try to close the handle if possible, ignore all issues try: fh.close() except Exception: pass exception = None # Try till the timeout has passed for _ in self._timeout_generator(timeout, check_interval): exception = None try: # Try to lock fh = self._get_lock(fh) break except exceptions.LockException as exc: # Python will automatically remove the variable from memory # unless you save it in a different location exception = exc # We already tried to the get the lock # If fail_when_locked is True, stop trying if fail_when_locked: try_close() raise exceptions.AlreadyLocked(exception) # Wait a bit if exception: try_close() # We got a timeout... reraising raise exceptions.LockException(exception) # Prepare the filehandle (truncate if needed) fh = self._prepare_fh(fh) self.fh = fh return fh def release(self): '''Releases the currently locked file handle''' if self.fh: portalocker.unlock(self.fh) self.fh.close() self.fh = None def _get_fh(self) -> typing.IO: '''Get a new filehandle''' return open(self.filename, self.mode, **self.file_open_kwargs) def _get_lock(self, fh: typing.IO) -> typing.IO: ''' Try to lock the given filehandle returns LockException if it fails''' portalocker.lock(fh, self.flags) return fh def _prepare_fh(self, fh: typing.IO) -> typing.IO: ''' Prepare the filehandle for usage If truncate is a number, the file will be truncated to that amount of bytes ''' if self.truncate: fh.seek(0) fh.truncate(0) return fh class RLock(Lock): ''' A reentrant lock, functions in a similar way to threading.RLock in that it can be acquired multiple times. When the corresponding number of release() calls are made the lock will finally release the underlying file lock. 
''' def __init__( self, filename, mode='a', timeout=DEFAULT_TIMEOUT, check_interval=DEFAULT_CHECK_INTERVAL, fail_when_locked=False, flags=LOCK_METHOD): super(RLock, self).__init__(filename, mode, timeout, check_interval, fail_when_locked, flags) self._acquire_count = 0 def acquire( self, timeout: float = None, check_interval: float = None, fail_when_locked: bool = None) -> typing.IO: if self._acquire_count >= 1: fh = self.fh else: fh = super(RLock, self).acquire(timeout, check_interval, fail_when_locked) self._acquire_count += 1 assert fh return fh def release(self): if self._acquire_count == 0: raise exceptions.LockException( "Cannot release more times than acquired") if self._acquire_count == 1: super(RLock, self).release() self._acquire_count -= 1 class TemporaryFileLock(Lock): def __init__(self, filename='.lock', timeout=DEFAULT_TIMEOUT, check_interval=DEFAULT_CHECK_INTERVAL, fail_when_locked=True, flags=LOCK_METHOD): Lock.__init__(self, filename=filename, mode='w', timeout=timeout, check_interval=check_interval, fail_when_locked=fail_when_locked, flags=flags) atexit.register(self.release) def release(self): Lock.release(self) if os.path.isfile(self.filename): # pragma: no branch os.unlink(self.filename) class BoundedSemaphore(LockBase): ''' Bounded semaphore to prevent too many parallel processes from running It's also possible to specify a timeout when acquiring the lock to wait for a resource to become available. This is very similar to threading.BoundedSemaphore but works across multiple processes and across multiple operating systems. >>> semaphore = BoundedSemaphore(2, directory='') >>> str(semaphore.get_filenames()[0]) 'bounded_semaphore.00.lock' >>> str(sorted(semaphore.get_random_filenames())[1]) 'bounded_semaphore.01.lock' ''' lock: typing.Optional[Lock] def __init__( self, maximum: int, name: str = 'bounded_semaphore', filename_pattern: str = '{name}.{number:02d}.lock', directory: str = tempfile.gettempdir(), timeout=DEFAULT_TIMEOUT, check_interval=DEFAULT_CHECK_INTERVAL): self.maximum = maximum self.name = name self.filename_pattern = filename_pattern self.directory = directory self.lock: typing.Optional[Lock] = None self.timeout = timeout self.check_interval = check_interval def get_filenames(self) -> typing.Sequence[pathlib.Path]: return [self.get_filename(n) for n in range(self.maximum)] def get_random_filenames(self) -> typing.Sequence[pathlib.Path]: filenames = list(self.get_filenames()) random.shuffle(filenames) return filenames def get_filename(self, number) -> pathlib.Path: return pathlib.Path(self.directory) / self.filename_pattern.format( name=self.name, number=number, ) def acquire( self, timeout: float = None, check_interval: float = None, fail_when_locked: bool = None) -> typing.Optional[Lock]: assert not self.lock, 'Already locked' filenames = self.get_filenames() print('filenames', filenames) for _ in self._timeout_generator(timeout, check_interval): # pragma: print('trying lock', filenames) # no branch if self.try_lock(filenames): # pragma: no branch return self.lock # pragma: no cover raise exceptions.AlreadyLocked() def try_lock(self, filenames: typing.Sequence[Filename]) -> bool: filename: Filename for filename in filenames: print('trying lock for', filename) self.lock = Lock(filename, fail_when_locked=True) try: self.lock.acquire() print('locked', filename) return True except exceptions.AlreadyLocked: pass return False def release(self): # pragma: no cover self.lock.release() self.lock = None 
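The ``RLock`` class above is reentrant: the underlying file lock is only released once ``release()`` has been called as many times as ``acquire()``. A minimal sketch of that behaviour (the ``some.lock`` filename is purely illustrative; the default ``'a'`` mode creates the file if needed):

::

    import portalocker

    lock = portalocker.RLock('some.lock')
    fh = lock.acquire()  # takes the file lock and returns the file handle
    lock.acquire()       # reentrant: returns the same handle, no second lock
    lock.release()       # lock is still held
    lock.release()       # now the file lock is released and the handle closed
    assert fh.closed
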
././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1612316569.7224545 portalocker-2.2.1/portalocker.egg-info/0000755000076500000240000000000000000000000020134 5ustar00rickstaff00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1612316569.0 portalocker-2.2.1/portalocker.egg-info/PKG-INFO0000644000076500000240000002066200000000000021237 0ustar00rickstaff00000000000000Metadata-Version: 2.1 Name: portalocker Version: 2.2.1 Summary: Wraps the portalocker recipe for easy usage Home-page: https://github.com/WoLpH/portalocker Author: Rick van Hattem Author-email: wolph@wol.ph License: PSF Description: ############################################ portalocker - Cross-platform locking library ############################################ .. image:: https://travis-ci.com/WoLpH/portalocker.svg?branch=master :alt: Linux Test Status :target: https://travis-ci.com/WoLpH/portalocker .. image:: https://ci.appveyor.com/api/projects/status/mgqry98hgpy4prhh?svg=true :alt: Windows Tests Status :target: https://ci.appveyor.com/project/WoLpH/portalocker .. image:: https://coveralls.io/repos/WoLpH/portalocker/badge.svg?branch=master :alt: Coverage Status :target: https://coveralls.io/r/WoLpH/portalocker?branch=master Overview -------- Portalocker is a library to provide an easy API to file locking. An important detail to note is that on Linux and Unix systems the locks are advisory by default. By specifying the `-o mand` option to the mount command it is possible to enable mandatory file locking on Linux. This is generally not recommended however. For more information about the subject: - https://en.wikipedia.org/wiki/File_locking - http://stackoverflow.com/questions/39292051/portalocker-does-not-seem-to-lock - https://stackoverflow.com/questions/12062466/mandatory-file-lock-on-linux The module is currently maintained by Rick van Hattem . The project resides at https://github.com/WoLpH/portalocker . Bugs and feature requests can be submitted there. Patches are also very welcome. Redis Locks ----------- This library now features a lock based on Redis which allows for locks across multiple threads, processes and even distributed locks across multiple computers. It is an extremely reliable Redis lock that is based on pubsub. As opposed to most Redis locking systems based on key/value pairs, this locking method is based on the pubsub system. The big advantage is that if the connection gets killed due to network issues, crashing processes or otherwise, it will still immediately unlock instead of waiting for a lock timeout. Usage is really easy: :: import portalocker lock = portalocker.RedisLock('some_lock_channel_name') with lock: print('do something here') The API is essentially identical to the other ``Lock`` classes so in addition to the ``with`` statement you can also use ``lock.acquire(...)``. Python 2 -------- Python 2 was supported in versions before Portalocker 2.0. If you are still using Python 2, you can run this to install: :: pip install "portalocker<2" Tips ---- On some networked filesystems it might be needed to force a `os.fsync()` before closing the file so it's actually written before another client reads the file. Effectively this comes down to: :: with portalocker.Lock('some_file', 'rb+', timeout=60) as fh: # do what you need to do ... 
# flush and sync to filesystem fh.flush() os.fsync(fh.fileno()) Links ----- * Documentation - http://portalocker.readthedocs.org/en/latest/ * Source - https://github.com/WoLpH/portalocker * Bug reports - https://github.com/WoLpH/portalocker/issues * Package homepage - https://pypi.python.org/pypi/portalocker * My blog - http://w.wol.ph/ Examples -------- To make sure your cache generation scripts don't race, use the `Lock` class: >>> import portalocker >>> with portalocker.Lock('somefile', timeout=1) as fh: ... print >>fh, 'writing some stuff to my cache...' To customize the opening and locking a manual approach is also possible: >>> import portalocker >>> file = open('somefile', 'r+') >>> portalocker.lock(file, portalocker.EXCLUSIVE) >>> file.seek(12) >>> file.write('foo') >>> file.close() Explicitly unlocking is not needed in most cases but omitting it has been known to cause issues: >>> import portalocker >>> with portalocker.Lock('somefile', timeout=1) as fh: ... print >>fh, 'writing some stuff to my cache...' To customize the opening and locking a manual approach is also possible: >>> import portalocker >>> file = open('somefile', 'r+') >>> portalocker.lock(file, portalocker.EXCLUSIVE) >>> file.seek(12) >>> file.write('foo') >>> file.close() Explicitly unlocking is not needed in most cases but omitting it has been known to cause issues: >>> import portalocker >>> with portalocker.Lock('somefile', timeout=1) as fh: ... print >>fh, 'writing some stuff to my cache...' To customize the opening and locking a manual approach is also possible: >>> import portalocker >>> file = open('somefile', 'r+') >>> portalocker.lock(file, portalocker.LOCK_EX) >>> file.seek(12) >>> file.write('foo') >>> file.close() Explicitly unlocking is not needed in most cases but omitting it has been known to cause issues: https://github.com/AzureAD/microsoft-authentication-extensions-for-python/issues/42#issuecomment-601108266 If needed, it can be done through: >>> portalocker.unlock(file) Do note that your data might still be in a buffer so it is possible that your data is not available until you `flush()` or `close()`. To create a cross platform bounded semaphore across multiple processes you can use the `BoundedSemaphore` class which functions somewhat similar to `threading.BoundedSemaphore`: >>> import portalocker >>> n = 2 >>> timeout = 0.1 >>> semaphore_a = portalocker.BoundedSemaphore(n, timeout=timeout) >>> semaphore_b = portalocker.BoundedSemaphore(n, timeout=timeout) >>> semaphore_c = portalocker.BoundedSemaphore(n, timeout=timeout) >>> semaphore_a.acquire() >>> semaphore_b.acquire() >>> semaphore_c.acquire() Traceback (most recent call last): ... portalocker.exceptions.AlreadyLocked More examples can be found in the `tests `_. Changelog --------- Every release has a ``git tag`` with a commit message for the tag explaining what was added and/or changed. The list of tags/releases including the commit messages can be found here: https://github.com/WoLpH/portalocker/releases License ------- See the `LICENSE `_ file. 
Keywords: locking,locks,with statement,windows,linux,unix Platform: any Classifier: Intended Audience :: Developers Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3.3 Classifier: Programming Language :: Python :: 3.4 Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Programming Language :: Python :: Implementation :: PyPy Provides-Extra: docs Provides-Extra: tests Provides-Extra: redis ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1612316569.0 portalocker-2.2.1/portalocker.egg-info/SOURCES.txt0000644000076500000240000000117500000000000022024 0ustar00rickstaff00000000000000CHANGELOG.rst LICENSE MANIFEST.in README.rst setup.cfg setup.py portalocker/__about__.py portalocker/__init__.py portalocker/constants.py portalocker/exceptions.py portalocker/portalocker.py portalocker/redis.py portalocker/utils.py portalocker.egg-info/PKG-INFO portalocker.egg-info/SOURCES.txt portalocker.egg-info/dependency_links.txt portalocker.egg-info/requires.txt portalocker.egg-info/top_level.txt portalocker_tests/__init__.py portalocker_tests/conftest.py portalocker_tests/temporary_file_lock.py portalocker_tests/test_combined.py portalocker_tests/test_redis.py portalocker_tests/test_semaphore.py portalocker_tests/tests.py././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1612316569.0 portalocker-2.2.1/portalocker.egg-info/dependency_links.txt0000644000076500000240000000000100000000000024202 0ustar00rickstaff00000000000000 ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1612316569.0 portalocker-2.2.1/portalocker.egg-info/requires.txt0000644000076500000240000000027000000000000022533 0ustar00rickstaff00000000000000 [:platform_system == "Windows"] pywin32!=226 [docs] sphinx>=1.7.1 [redis] redis [tests] pytest>=5.4.1 pytest-cov>=2.8.1 sphinx>=3.0.3 pytest-flake8>=1.0.5 pytest-mypy>=0.8.0 redis ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1612316569.0 portalocker-2.2.1/portalocker.egg-info/top_level.txt0000644000076500000240000000001400000000000022661 0ustar00rickstaff00000000000000portalocker ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1612316569.7240775 portalocker-2.2.1/portalocker_tests/0000755000076500000240000000000000000000000017664 5ustar00rickstaff00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1612314645.0 portalocker-2.2.1/portalocker_tests/__init__.py0000644000076500000240000000000000000000000021763 0ustar00rickstaff00000000000000././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1611969105.0 portalocker-2.2.1/portalocker_tests/conftest.py0000644000076500000240000000052600000000000022066 0ustar00rickstaff00000000000000import py import logging import pytest logger = logging.getLogger(__name__) @pytest.fixture def tmpfile(tmpdir_factory): tmpdir = tmpdir_factory.mktemp('temp') filename = tmpdir.join('tmpfile') yield str(filename) try: filename.remove(ignore_errors=True) except (py.error.EBUSY, py.error.ENOENT): pass ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1563042862.0 
portalocker-2.2.1/portalocker_tests/temporary_file_lock.py0000644000076500000240000000040000000000000024261 0ustar00rickstaff00000000000000import os import portalocker def test_temporary_file_lock(tmpfile): with portalocker.TemporaryFileLock(tmpfile): pass assert not os.path.isfile(tmpfile) lock = portalocker.TemporaryFileLock(tmpfile) lock.acquire() del lock ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1563042862.0 portalocker-2.2.1/portalocker_tests/test_combined.py0000644000076500000240000000050600000000000023056 0ustar00rickstaff00000000000000import sys def test_combined(tmpdir): from distutils import dist import setup output_file = tmpdir.join('combined.py') combine = setup.Combine(dist.Distribution()) combine.output_file = str(output_file) combine.run() sys.path.append(output_file.dirname) import combined assert combined ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1611969105.0 portalocker-2.2.1/portalocker_tests/test_redis.py0000644000076500000240000000466600000000000022417 0ustar00rickstaff00000000000000import _thread import logging import random import time import pytest from redis import client from redis import exceptions import portalocker from portalocker import redis from portalocker import utils logger = logging.getLogger(__name__) try: client.Redis().ping() except (exceptions.ConnectionError, ConnectionRefusedError): pytest.skip('Unable to connect to redis', allow_module_level=True) @pytest.fixture(autouse=True) def set_redis_timeouts(monkeypatch): monkeypatch.setattr(utils, 'DEFAULT_TIMEOUT', 0.0001) monkeypatch.setattr(utils, 'DEFAULT_CHECK_INTERVAL', 0.0005) monkeypatch.setattr(redis, 'DEFAULT_UNAVAILABLE_TIMEOUT', 0.01) monkeypatch.setattr(redis, 'DEFAULT_THREAD_SLEEP_TIME', 0.001) monkeypatch.setattr(_thread, 'interrupt_main', lambda: None) def test_redis_lock(): channel = str(random.random()) lock_a = redis.RedisLock(channel) lock_a.acquire(fail_when_locked=True) time.sleep(0.01) lock_b = redis.RedisLock(channel) try: with pytest.raises(portalocker.AlreadyLocked): lock_b.acquire(fail_when_locked=True) finally: lock_a.release() lock_a.connection.close() @pytest.mark.parametrize('timeout', [None, 0, 0.001]) @pytest.mark.parametrize('check_interval', [None, 0, 0.0005]) def test_redis_lock_timeout(timeout, check_interval): connection = client.Redis() channel = str(random.random()) lock_a = redis.RedisLock(channel) lock_a.acquire(timeout=timeout, check_interval=check_interval) lock_b = redis.RedisLock(channel, connection=connection) with pytest.raises(portalocker.AlreadyLocked): try: lock_b.acquire(timeout=timeout, check_interval=check_interval) finally: lock_a.release() lock_a.connection.close() def test_redis_lock_context(): channel = str(random.random()) lock_a = redis.RedisLock(channel, fail_when_locked=True) with lock_a: time.sleep(0.01) lock_b = redis.RedisLock(channel, fail_when_locked=True) with pytest.raises(portalocker.AlreadyLocked): with lock_b: pass def test_redis_relock(): channel = str(random.random()) lock_a = redis.RedisLock(channel, fail_when_locked=True) with lock_a: time.sleep(0.01) with pytest.raises(AssertionError): lock_a.acquire() time.sleep(0.01) lock_a.release() if __name__ == '__main__': test_redis_lock() ././@PaxHeader0000000000000000000000000000002600000000000011453 xustar000000000000000022 mtime=1611419668.0 portalocker-2.2.1/portalocker_tests/test_semaphore.py0000644000076500000240000000152300000000000023261 
portalocker-2.2.1/portalocker_tests/test_semaphore.py:

import random

import pytest

import portalocker
from portalocker import utils


@pytest.mark.parametrize('timeout', [None, 0, 0.001])
@pytest.mark.parametrize('check_interval', [None, 0, 0.0005])
def test_bounded_semaphore(timeout, check_interval, monkeypatch):
    n = 2
    name = random.random()

    monkeypatch.setattr(utils, 'DEFAULT_TIMEOUT', 0.0001)
    monkeypatch.setattr(utils, 'DEFAULT_CHECK_INTERVAL', 0.0005)

    semaphore_a = portalocker.BoundedSemaphore(n, name=name, timeout=timeout)
    semaphore_b = portalocker.BoundedSemaphore(n, name=name, timeout=timeout)
    semaphore_c = portalocker.BoundedSemaphore(n, name=name, timeout=timeout)

    semaphore_a.acquire(timeout=timeout)
    semaphore_b.acquire()
    with pytest.raises(portalocker.AlreadyLocked):
        semaphore_c.acquire(check_interval=check_interval, timeout=timeout)
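test_bounded_semaphore shows the intended behaviour: up to n holders of the same named semaphore may acquire it concurrently, and the next attempt raises portalocker.AlreadyLocked. The sketch below illustrates that under the same assumptions; the name 'batch-workers' is purely a placeholder, and release() is assumed from the common lock interface the other tests rely on.

import portalocker

# Allow at most two concurrent holders of the semaphore named 'batch-workers'
# (a hypothetical name; cooperating processes must agree on it).
first = portalocker.BoundedSemaphore(2, name='batch-workers')
second = portalocker.BoundedSemaphore(2, name='batch-workers')
third = portalocker.BoundedSemaphore(2, name='batch-workers')

first.acquire()
second.acquire()

try:
    # A third holder exceeds the bound and fails once the timeout expires.
    third.acquire(timeout=0.1)
except portalocker.AlreadyLocked:
    print('no free slot available')
finally:
    first.release()
    second.release()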
portalocker-2.2.1/portalocker_tests/tests.py:

from __future__ import print_function
from __future__ import with_statement

import pytest
import portalocker
from portalocker import utils


def test_exceptions(tmpfile):
    # Open the file 2 times
    a = open(tmpfile, 'a')
    b = open(tmpfile, 'a')

    # Lock exclusive non-blocking
    lock_flags = portalocker.LOCK_EX | portalocker.LOCK_NB

    # First lock file a
    portalocker.lock(a, lock_flags)

    # Now see if we can lock file b
    with pytest.raises(portalocker.LockException):
        portalocker.lock(b, lock_flags)

    # Cleanup
    a.close()
    b.close()


def test_utils_base():
    class Test(utils.LockBase):
        pass


def test_with_timeout(tmpfile):
    # Open the file 2 times
    with pytest.raises(portalocker.AlreadyLocked):
        with portalocker.Lock(tmpfile, timeout=0.1) as fh:
            print('writing some stuff to my cache...', file=fh)
            with portalocker.Lock(tmpfile, timeout=0.1, mode='wb',
                                  fail_when_locked=True):
                pass
            print('writing more stuff to my cache...', file=fh)


def test_without_timeout(tmpfile):
    # Open the file 2 times
    with pytest.raises(portalocker.LockException):
        with portalocker.Lock(tmpfile, timeout=None) as fh:
            print('writing some stuff to my cache...', file=fh)
            with portalocker.Lock(tmpfile, timeout=None, mode='w'):
                pass
            print('writing more stuff to my cache...', file=fh)


def test_without_fail(tmpfile):
    # Open the file 2 times
    with pytest.raises(portalocker.LockException):
        with portalocker.Lock(tmpfile, timeout=0.1) as fh:
            print('writing some stuff to my cache...', file=fh)
            lock = portalocker.Lock(tmpfile, timeout=0.1)
            lock.acquire(check_interval=0.05, fail_when_locked=False)


def test_simple(tmpfile):
    with open(tmpfile, 'w') as fh:
        fh.write('spam and eggs')

    fh = open(tmpfile, 'r+')
    portalocker.lock(fh, portalocker.LOCK_EX)

    fh.seek(13)
    fh.write('foo')

    # Make sure we didn't overwrite the original text
    fh.seek(0)
    assert fh.read(13) == 'spam and eggs'

    portalocker.unlock(fh)
    fh.close()


def test_truncate(tmpfile):
    with open(tmpfile, 'w') as fh:
        fh.write('spam and eggs')

    with portalocker.Lock(tmpfile, mode='a+') as fh:
        # Make sure we didn't overwrite the original text
        fh.seek(0)
        assert fh.read(13) == 'spam and eggs'

    with portalocker.Lock(tmpfile, mode='w+') as fh:
        # Make sure we truncated the file
        assert fh.read() == ''


def test_class(tmpfile):
    lock = portalocker.Lock(tmpfile)
    lock2 = portalocker.Lock(tmpfile, fail_when_locked=False, timeout=0.01)

    with lock:
        lock.acquire()

        with pytest.raises(portalocker.LockException):
            with lock2:
                pass

    with lock2:
        pass


def test_acquire_release(tmpfile):
    lock = portalocker.Lock(tmpfile)
    lock2 = portalocker.Lock(tmpfile, fail_when_locked=False)

    lock.acquire()  # acquire lock when nobody is using it

    with pytest.raises(portalocker.LockException):
        # another party should not be able to acquire the lock
        lock2.acquire(timeout=0.01)

    # re-acquire a held lock is a no-op
    lock.acquire()

    lock.release()  # release the lock
    lock.release()  # second release does nothing


def test_rlock_acquire_release_count(tmpfile):
    lock = portalocker.RLock(tmpfile)
    # Twice acquire
    h = lock.acquire()
    assert not h.closed
    lock.acquire()
    assert not h.closed

    # Two release
    lock.release()
    assert not h.closed
    lock.release()
    assert h.closed


def test_rlock_acquire_release(tmpfile):
    lock = portalocker.RLock(tmpfile)
    lock2 = portalocker.RLock(tmpfile, fail_when_locked=False)

    lock.acquire()  # acquire lock when nobody is using it

    with pytest.raises(portalocker.LockException):
        # another party should not be able to acquire the lock
        lock2.acquire(timeout=0.01)

    # Now acquire again
    lock.acquire()

    lock.release()  # release the lock
    lock.release()  # second release does nothing


def test_release_unacquired(tmpfile):
    with pytest.raises(portalocker.LockException):
        portalocker.RLock(tmpfile).release()


def test_exclusive(tmpfile):
    with open(tmpfile, 'w') as fh:
        fh.write('spam and eggs')

    fh = open(tmpfile, 'r')
    portalocker.lock(fh, portalocker.LOCK_EX | portalocker.LOCK_NB)

    # Make sure we can't read the locked file
    with pytest.raises(portalocker.LockException):
        with open(tmpfile, 'r') as fh2:
            portalocker.lock(fh2, portalocker.LOCK_EX | portalocker.LOCK_NB)
            fh2.read()

    # Make sure we can't write the locked file
    with pytest.raises(portalocker.LockException):
        with open(tmpfile, 'w+') as fh2:
            portalocker.lock(fh2, portalocker.LOCK_EX | portalocker.LOCK_NB)
            fh2.write('surprise and fear')

    # Make sure we can explicitly unlock the file
    portalocker.unlock(fh)
    fh.close()


def test_shared(tmpfile):
    with open(tmpfile, 'w') as fh:
        fh.write('spam and eggs')

    f = open(tmpfile, 'r')
    portalocker.lock(f, portalocker.LOCK_SH | portalocker.LOCK_NB)

    # Make sure we can read the locked file
    with open(tmpfile, 'r') as fh2:
        portalocker.lock(fh2, portalocker.LOCK_SH | portalocker.LOCK_NB)
        assert fh2.read() == 'spam and eggs'

    # Make sure we can't write the locked file
    with pytest.raises(portalocker.LockException):
        with open(tmpfile, 'w+') as fh2:
            portalocker.lock(fh2, portalocker.LOCK_EX | portalocker.LOCK_NB)
            fh2.write('surprise and fear')

    # Make sure we can explicitly unlock the file
    portalocker.unlock(f)
    f.close()
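tests.py exercises the two public flavours of the file locking API: the low-level lock()/unlock() functions that operate on an already opened file handle, and the higher-level Lock class used as a context manager. The snippet below is a compact sketch of both patterns, not part of the test suite; the path 'cache.txt' is only an example.

import portalocker

# High-level API: Lock opens the file itself and drops the lock when the block exits.
# The default mode appends; pass mode='w' to truncate instead.
with portalocker.Lock('cache.txt', timeout=5) as fh:
    fh.write('writing some stuff to my cache...\n')

# Low-level API: lock an already opened handle, exclusive and non-blocking.
fh = open('cache.txt', 'a')
try:
    portalocker.lock(fh, portalocker.LOCK_EX | portalocker.LOCK_NB)
    fh.write('more stuff\n')
except portalocker.LockException:
    print('someone else is holding the lock')
else:
    portalocker.unlock(fh)
finally:
    fh.close()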
portalocker-2.2.1/setup.cfg:

[metadata]
description-file = README.rst

[build_sphinx]
source-dir = docs/
build-dir = docs/_build
all_files = 1

[upload_sphinx]
upload-dir = docs/_build/html

[bdist_wheel]
universal = 1

[egg_info]
tag_build =
tag_date = 0

portalocker-2.2.1/setup.py:

from __future__ import print_function

import os
import re
import sys
import typing
from distutils.version import LooseVersion

import setuptools
from setuptools import __version__ as setuptools_version
from setuptools.command.test import test as TestCommand

if LooseVersion(setuptools_version) < LooseVersion('38.3.0'):
    raise SystemExit(
        'Your `setuptools` version is old. '
        'Please upgrade setuptools by running `pip install -U setuptools` '
        'and try again.'
    )

# To prevent importing about and thereby breaking the coverage info we use this
# exec hack
about: typing.Dict[str, str] = {}
with open('portalocker/__about__.py') as fp:
    exec(fp.read(), about)


tests_require = [
    'pytest>=5.4.1',
    'pytest-cov>=2.8.1',
    'sphinx>=3.0.3',
    'pytest-flake8>=1.0.5',
    'pytest-mypy>=0.8.0',
    'redis',
]


class PyTest(TestCommand):
    user_options = [('pytest-args=', 'a', "Arguments to pass to pytest")]

    def initialize_options(self):
        TestCommand.initialize_options(self)
        self.pytest_args = ''

    def run_tests(self):
        import shlex

        # import here, cause outside the eggs aren't loaded
        import pytest

        errno = pytest.main(shlex.split(self.pytest_args))
        sys.exit(errno)


class Combine(setuptools.Command):
    description = 'Build single combined portalocker file'
    relative_import_re = re.compile(r'^from \. import (?P<name>.+)$',
                                    re.MULTILINE)

    user_options = [
        ('output-file=', 'o', 'Path to the combined output file'),
    ]

    def initialize_options(self):
        self.output_file = os.path.join(
            'dist', '%(package_name)s_%(version)s.py' % dict(
                package_name=about['__package_name__'],
                version=about['__version__'].replace('.', '-'),
            ))

    def finalize_options(self):
        pass

    def run(self):
        dirname = os.path.dirname(self.output_file)
        if dirname and not os.path.isdir(dirname):
            os.makedirs(dirname)

        output = open(self.output_file, 'w')
        print("'''", file=output)
        with open('README.rst') as fh:
            output.write(fh.read().rstrip())

        print('', file=output)
        print('', file=output)
        with open('LICENSE') as fh:
            output.write(fh.read().rstrip())

        print('', file=output)
        print("'''", file=output)

        names = set()
        lines = []
        for line in open('portalocker/__init__.py'):
            match = self.relative_import_re.match(line)
            if match:
                names.add(match.group('name'))
                with open('portalocker/%(name)s.py' % match.groupdict()) as fh:
                    line = fh.read()
                    line = self.relative_import_re.sub('', line)

            lines.append(line)

        import_attributes = re.compile(r'\b(%s)\.' % '|'.join(names))
        for line in lines[:]:
            line = import_attributes.sub('', line)
            output.write(line)

        print('Wrote combined file to %r' % self.output_file)


if __name__ == '__main__':
    setuptools.setup(
        name=about['__package_name__'],
        version=about['__version__'],
        description=about['__description__'],
        long_description=open('README.rst').read(),
        classifiers=[
            'Intended Audience :: Developers',
            'Programming Language :: Python',
            'Programming Language :: Python :: 2.7',
            'Programming Language :: Python :: 3.3',
            'Programming Language :: Python :: 3.4',
            'Programming Language :: Python :: 3.5',
            'Programming Language :: Python :: 3.6',
            'Programming Language :: Python :: Implementation :: CPython',
            'Programming Language :: Python :: Implementation :: PyPy',
        ],
        keywords='locking, locks, with statement, windows, linux, unix',
        author=about['__author__'],
        author_email=about['__email__'],
        url=about['__url__'],
        license='PSF',
        packages=setuptools.find_packages(exclude=[
            'examples', 'portalocker_tests']),
        # zip_safe=False,
        platforms=['any'],
        cmdclass={
            'combine': Combine,
            'test': PyTest,
        },
        install_requires=[
            'pywin32!=226; platform_system == "Windows"',
        ],
        tests_require=tests_require,
        extras_require=dict(
            docs=[
                'sphinx>=1.7.1',
            ],
            tests=tests_require,
            redis=[
                'redis',
            ]
        ),
    )
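The Combine command defined above stitches the README, LICENSE and the portalocker modules into one self-contained file. test_combined.py already drives it programmatically, and the same pattern works outside the test suite, as sketched below. The output path is only an example; left untouched, initialize_options() derives a default such as dist/portalocker_2-2-1.py from the values in __about__.py.

from distutils import dist

import setup  # the setup.py listed above, imported from the project root

combine = setup.Combine(dist.Distribution())
combine.output_file = 'dist/portalocker_combined.py'  # example path, overriding the default
combine.run()

On the command line the same step is exposed as the `combine` command through the cmdclass mapping passed to setuptools.setup().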