File: redis-py-3.5.3/.github/ISSUE_TEMPLATE.md

Thanks for wanting to report an issue you've found in redis-py. Please delete this text and fill in the template below. It is of course not always possible to reduce your code to a small test case, but it's highly appreciated to have as much data as possible. Thank you!

**Version**: What redis-py and what redis version is the issue happening on?

**Platform**: What platform / version? (For example Python 3.5.1 on Windows 7 / Ubuntu 15.10 / Azure)

**Description**: Description of your issue, stack traces from errors and code that reproduces the issue

File: redis-py-3.5.3/.github/PULL_REQUEST_TEMPLATE.md

### Pull Request check-list

_Please make sure to review and check all of these items:_

- [ ] Does `$ tox` pass with this change (including linting)?
- [ ] Do the travis tests pass with this change (enable it first in your forked repo and wait for the travis build to finish)?
- [ ] Is the new or changed code fully tested?
- [ ] Is a documentation update included (if this change modifies existing APIs, or introduces new ones)?

_NOTE: these things are not required to open a PR and can be done afterwards / while the PR is open._

### Description of change

_Please provide a description of the change here._

File: redis-py-3.5.3/.gitignore

*.pyc
redis.egg-info
build/
dist/
dump.rdb
/.tox
_build
vagrant/.vagrant
.python-version
.cache
.eggs
.idea
.coverage

File: redis-py-3.5.3/.travis.yml

language: python
cache: pip
matrix:
  include:
    - env: TOXENV=flake8
    - python: 2.7
      env: TOXENV=py27-plain
    - python: 2.7
      env: TOXENV=py27-hiredis
    - python: 3.5
      env: TOXENV=py35-plain
    - python: 3.5
      env: TOXENV=py35-hiredis
    - python: 3.6
      env: TOXENV=py36-plain
    - python: 3.6
      env: TOXENV=py36-hiredis
    - python: 3.7
      env: TOXENV=py37-plain
    - python: 3.7
      env: TOXENV=py37-hiredis
    - python: 3.8
      env: TOXENV=py38-plain
    - python: 3.8
      env: TOXENV=py38-hiredis
    - python: pypy
      env: TOXENV=pypy-plain
    - python: pypy
      env: TOXENV=pypy-hiredis
    - python: pypy3
      env: TOXENV=pypy3-plain
    - python: pypy3
      env: TOXENV=pypy3-hiredis
before_install:
  - wget https://github.com/antirez/redis/archive/6.0-rc1.tar.gz && mkdir redis_install && tar -xvzf 6.0-rc1.tar.gz -C redis_install && cd redis_install/redis-6.0-rc1 && make && src/redis-server --daemonize yes && cd ../..
  - redis-cli info
install:
  - pip install codecov tox
script:
  - tox
after_success:
  - "if [[ $TOXENV != 'flake8' ]]; then codecov; fi"

File: redis-py-3.5.3/CHANGES

* (in development)
    * Restore try/except clauses to __del__ methods. These will be removed in 4.0 when more explicit resource management is enforced. #1339
    * Update the master_address when Sentinels promote a new master. #847
    * Update SentinelConnectionPool to not forcefully disconnect other in-use connections which can negatively affect threaded applications.
      #1345
* 3.5.2 (May 14, 2020)
    * Tune the locking in ConnectionPool.get_connection so that the lock is not held while waiting for the socket to establish and validate the TCP connection.
* 3.5.1 (May 9, 2020)
    * Fix for HSET argument validation to allow any non-None key. Thanks @AleksMat, #1337, #1341
* 3.5.0 (April 29, 2020)
    * Removed exception trapping from __del__ methods. redis-py objects that hold various resources implement __del__ cleanup methods to release those resources when the object goes out of scope. This provides a fallback for when these objects aren't explicitly closed by user code. Prior to this change any errors encountered in closing these resources would be hidden from the user. Thanks @jdufresne. #1281
    * Expanded support for connection strings specifying a username connecting to pre-v6 servers. #1274
    * Optimized Lock's blocking_timeout and sleep. If the lock cannot be acquired and the sleep value would cause the loop to sleep beyond blocking_timeout, fail immediately. Thanks @clslgrnc. #1263
    * Added support for passing Python memoryviews to Redis command args that expect strings or bytes. The memoryview instance is sent directly to the socket such that there are zero copies made of the underlying data during command packing. Thanks @Cody-G. #1265, #1285
    * HSET command now can accept multiple pairs. HMSET has been marked as deprecated now. Thanks to @laixintao #1271
    * Don't manually DISCARD when encountering an ExecAbortError. Thanks @nickgaya, #1300/#1301
    * Reset the watched state of pipelines after calling exec. This saves a roundtrip to the server by not having to call UNWATCH within Pipeline.reset(). Thanks @nickgaya, #1299/#1302
    * Added the KEEPTTL option for the SET command. Thanks @laixintao #1304/#1280
    * Added the MEMORY STATS command. #1268
    * Lock.extend() now has a new option, `replace_ttl`. When False (the default), Lock.extend() adds the `additional_time` to the lock's existing TTL. When replace_ttl=True, the lock's existing TTL is replaced with the value of `additional_time`.
    * Add testing and support for PyPy.
* 3.4.1
    * Move the username argument in the Redis and Connection classes to the end of the argument list. This helps those poor souls that specify all their connection options as non-keyword arguments. #1276
    * Prior to ACL support, redis-py ignored the username component of Connection URLs. With ACL support, usernames are no longer ignored and are used to authenticate against an ACL rule. Some cloud vendors with managed Redis instances (like Heroku) provide connection URLs with a username component pre-ACL that is not intended to be used. Sending that username to Redis servers < 6.0.0 results in an error. Attempt to detect this condition and retry the AUTH command with only the password such that authentication continues to work for these users. #1274
    * Removed the __eq__ hooks to Redis and ConnectionPool that were added in 3.4.0. This ended up being a bad idea as two separate connection pools could be considered equal yet manage a completely separate set of connections.
* 3.4.0
    * Allow empty pipelines to be executed if there are WATCHed keys. This is a convenient way to test if any of the watched keys changed without actually running any other commands. Thanks @brianmaissy. #1233, #1234
    * Removed support for end of life Python 3.4.
    * Added support for all ACL commands in Redis 6. Thanks @IAmATeaPot418 for helping.
    * Pipeline instances now always evaluate to True.
      Prior to this change, pipeline instances relied on __len__ for boolean evaluation which meant that pipelines with no commands on the stack would be considered False. #994
    * Client instances and Connection pools now support a 'client_name' argument. If supplied, all connections created will call CLIENT SETNAME as soon as the connection is opened. Thanks to @Habbie for supplying the basis of this change. #802
    * Added the 'ssl_check_hostname' argument to specify whether SSL connections should require the server hostname to match the hostname specified in the SSL cert. By default 'ssl_check_hostname' is False for backwards compatibility. #1196
    * Slightly optimized command packing. Thanks @Deneby67. #1255
    * Added support for the TYPE argument to SCAN. Thanks @netocp. #1220
    * Better thread and fork safety in ConnectionPool and BlockingConnectionPool. Added better locking to synchronize critical sections rather than relying on CPython-specific implementation details relating to atomic operations. Adjusted how the pools identify and deal with a fork. Added a ChildDeadlockedError exception that is raised by child processes in the very unlikely chance that a deadlock is encountered. Thanks @gmbnomis, @mdellweg, @yht804421715. #1270, #1138, #1178, #906, #1262
    * Added __eq__ hooks to the Redis and ConnectionPool classes. Thanks @brainix. #1240
* 3.3.11
    * Further fix for the SSLError -> TimeoutError mapping to work on obscure releases of Python 2.7.
* 3.3.10
    * Fixed a potential error handling bug for the SSLError -> TimeoutError mapping introduced in 3.3.9. Thanks @zbristow. #1224
* 3.3.9
    * Mapped Python 2.7 SSLError to TimeoutError where appropriate. Timeouts should now consistently raise TimeoutErrors on Python 2.7 for both unsecured and secured connections. Thanks @zbristow. #1222
* 3.3.8
    * Fixed MONITOR parsing to properly parse IPv6 client addresses, unix socket connections and commands issued from Lua. Thanks @kukey. #1201
* 3.3.7
    * Fixed a regression introduced in 3.3.0 where socket.error exceptions (or subclasses) could potentially be raised instead of redis.exceptions.ConnectionError. #1202
* 3.3.6
    * Fixed a regression in 3.3.5 that caused PubSub.get_message() to raise a socket.timeout exception when passing a timeout value. #1200
* 3.3.5
    * Fix an issue where socket.timeout errors could be handled by the wrong exception handler in Python 2.7.
* 3.3.4
    * More specifically identify nonblocking read errors for both SSL and non-SSL connections. 3.3.1, 3.3.2 and 3.3.3 on Python 2.7 could potentially mask a ConnectionError. #1197
* 3.3.3
    * The SSL module in Python < 2.7.9 handles non-blocking sockets differently than 2.7.9+. This patch accommodates older versions. #1197
* 3.3.2
    * Further fixed a regression introduced in 3.3.0 involving SSL and non-blocking sockets. #1197
* 3.3.1
    * Fixed a regression introduced in 3.3.0 involving SSL and non-blocking sockets. #1197
* 3.3.0
    * Resolve a race condition with the PubSubWorkerThread. #1150
    * Cleanup socket read error messages. Thanks Vic Yu. #1159
    * Cleanup the Connection's selector correctly. Thanks Bruce Merry. #1153
    * Added a Monitor object to make working with MONITOR output easy. Thanks Roey Prat #1033
    * Internal cleanup: Removed the legacy Token class which was necessary with older versions of Python that are no longer supported. #1066
    * Response callbacks are now case insensitive. This allows users that call Redis.execute_command() directly to pass lower-case command names and still get reasonable responses.
      #1168
    * Added support for hiredis-py 1.0.0 encoding error support. This should make the PythonParser and the HiredisParser behave identically when encountering encoding errors. Thanks Brian Candler. #1161/#1162
    * All authentication errors now properly raise AuthenticationError. AuthenticationError is now a subclass of ConnectionError, which will cause the connection to be disconnected and cleaned up appropriately. #923
    * Add READONLY and READWRITE commands. Thanks @theodesp. #1114
    * Remove selectors in favor of nonblocking sockets. Selectors had issues in some environments including eventlet and gevent. This should resolve those issues with no other side effects.
    * Fixed an issue with XCLAIM and previously claimed but not removed messages. Thanks @thomdask. #1192/#1191
    * Allow for single connection client instances. These instances are not thread safe but offer other benefits including a subtle performance increase.
    * Added extensive health checks that keep the connections lively. Passing the "health_check_interval=N" option to the Redis client class or to a ConnectionPool ensures that a round trip PING/PONG is successful before any command if the underlying connection has been idle for more than N seconds. ConnectionErrors and TimeoutErrors are automatically retried once for health checks.
    * Changed the PubSubWorkerThread to use a threading.Event object rather than a boolean to control the thread's life cycle. Thanks Timothy Rule. #1194/#1195.
    * Fixed a bug in Pipeline error handling that would incorrectly retry ConnectionErrors.
* 3.2.1
    * Fix SentinelConnectionPool to work in multiprocess/forked environments.
* 3.2.0
    * Added support for `select.poll` to test whether data can be read on a socket. This should allow for significantly more connections to be used with pubsub. Fixes #486/#1115
    * Attempt to guarantee that the ConnectionPool hands out healthy connections. Healthy connections are those that have an established socket connection to the Redis server, are ready to accept a command and have no data available to read. Fixes #1127/#886
    * Use the socket.IPPROTO_TCP constant instead of socket.SOL_TCP. IPPROTO_TCP is available on more interpreters (Jython for instance). Thanks @Junnplus. #1130
    * Fixed a regression introduced in 3.0 that mishandles exceptions not derived from the base Exception class. KeyboardInterrupt and gevent.timeout are notable examples. Thanks Christian Fersch. #1128/#1129
    * Significant improvements to handling connections with forked processes. Parent and child processes no longer trample on each others' connections. Thanks to Jay Rolette for the patch and highlighting this issue. #504/#732/#784/#863
    * PythonParser no longer closes the associated connection's socket. The connection itself will close the socket. #1108/#1085
* 3.1.0
    * Connection URLs must have one of the following schemes: redis://, rediss://, unix://. Thanks @jdupl123. #961/#969
    * Fixed an issue with retry_on_timeout logic that caused some TimeoutErrors to be retried. Thanks Aaron Yang. #1022/#1023
    * Added support for SNI for SSL. Thanks @oridistor and Roey Prat. #1087
    * Fixed ConnectionPool repr for pools with no connections. Thanks Cody Scott. #1043/#995
    * Fixed GEOHASH to return a None value when specifying a place that doesn't exist on the server. Thanks @guybe7. #1126
    * Fixed XREADGROUP to return an empty dictionary for messages that have been deleted but still exist in the unacknowledged queue. Thanks @xeizmendi. #1116
    * Added an owned method to Lock objects.
      owned returns a boolean indicating whether the current lock instance still owns the lock. Thanks Dave Johansen. #1112
    * Allow lock.acquire() to accept an optional token argument. If provided, the token argument is used as the unique value used to claim the lock. Thanks Dave Johansen. #1112
    * Added a reacquire method to Lock objects. reacquire attempts to renew the lock such that the timeout is extended to the same value that the lock was initially acquired with. Thanks Ihor Kalnytskyi. #1014
    * Stream names found within XREAD and XREADGROUP responses now properly respect the decode_responses flag.
    * XPENDING_RANGE now requires the user to specify the min, max and count arguments. Newer versions of Redis prevent count from being infinite so it's left to the user to specify these values explicitly.
    * ZADD now returns None when xx=True and incr=True and an element is specified that doesn't exist in the sorted set. This matches what the server returns in this case. #1084
    * Added client_kill_filter that accepts various filters to identify and kill clients. Thanks Theofanis Despoudis. #1098
    * Fixed a race condition that occurred when unsubscribing and resubscribing to the same channel or pattern in rapid succession. Thanks Marcin Raczyński. #764
    * Added a LockNotOwnedError that is raised when trying to extend or release a lock that is no longer owned. This is a subclass of LockError so previous code should continue to work as expected. Thanks Joshua Harlow. #1095
    * Fixed a bug in GEORADIUS that forced decoding of places without respecting the decode_responses option. Thanks Bo Bayles. #1082
* 3.0.1
    * Fixed regression with UnixDomainSocketConnection caused by 3.0.0. Thanks Jyrki Muukkonen
    * Fixed an issue with the new asynchronous flag on flushdb and flushall. Thanks rogeryen
    * Updated Lock.locked() method to indicate whether *any* process has acquired the lock, not just the current one. This is in line with the behavior of threading.Lock. Thanks Alan Justino da Silva
* 3.0.0
    BACKWARDS INCOMPATIBLE CHANGES
    * When using a Lock as a context manager and the lock fails to be acquired a LockError is now raised. This prevents the code block inside the context manager from being executed if the lock could not be acquired.
    * Renamed LuaLock to Lock.
    * Removed the pipeline based Lock implementation in favor of the LuaLock implementation.
    * Only bytes, strings and numbers (ints, longs and floats) are acceptable for keys and values. Previously redis-py attempted to cast other types to str() and store the result. This caused much confusion and frustration when passing boolean values (cast to 'True' and 'False') or None values (cast to 'None'). It is now the user's responsibility to cast all key names and values to bytes, strings or numbers before passing the value to redis-py.
    * The StrictRedis class has been renamed to Redis. StrictRedis will continue to exist as an alias of Redis for the foreseeable future.
    * The legacy Redis client class has been removed. It caused much confusion to users.
    * ZINCRBY arguments 'value' and 'amount' have swapped order to match the Redis server. The new argument order is: keyname, amount, value.
    * MGET no longer raises an error if zero keys are passed in. Instead an empty list is returned.
    * MSET and MSETNX now require all keys/values to be specified in a single dictionary argument named mapping. This was changed to allow for additional options to these commands in the future.
    * ZADD now requires all element names/scores be specified in a single dictionary argument named mapping. This was required to allow the NX, XX, CH and INCR options to be specified.
    * ssl_cert_reqs now defaults to 'required'. This should make connecting to a remote Redis server over SSL more secure. Thanks u2mejc
    * Removed support for EOL Python 2.6 and 3.3. Thanks jdufresne
    OTHER CHANGES
    * Added missing DECRBY command. Thanks derek-dchu
    * CLUSTER INFO and CLUSTER NODES responses are now properly decoded to strings.
    * Added a 'locked()' method to Lock objects. This method returns True if the lock has been acquired and owned by the current process, otherwise False.
    * EXISTS now supports multiple keys. Its return value is now the number of keys in the list that exist.
    * Ensure all commands can accept key names as bytes. This fixes issues with BLPOP, BRPOP and SORT.
    * All errors resulting from bad user input are raised as DataError exceptions. DataError is a subclass of RedisError so this should be transparent to anyone previously catching these.
    * Added support for NX, XX, CH and INCR options to ZADD
    * Added support for the MIGRATE command
    * Added support for the MEMORY USAGE and MEMORY PURGE commands. Thanks Itamar Haber
    * Added support for the 'asynchronous' argument to FLUSHDB and FLUSHALL commands. Thanks Itamar Haber
    * Added support for the BITFIELD command. Thanks Charles Leifer and Itamar Haber
    * Improved performance on pipeline requests with large chunks of data. Thanks tzickel
    * Fixed test suite to not fail if another client is connected to the server the tests are running against.
    * Added support for SWAPDB. Thanks Itamar Haber
    * Added support for all STREAM commands. Thanks Roey Prat and Itamar Haber
    * SHUTDOWN now accepts the 'save' and 'nosave' arguments. Thanks dwilliams-kenzan
    * Added support for ZPOPMAX, ZPOPMIN, BZPOPMAX, BZPOPMIN. Thanks Itamar Haber
    * Added support for the 'type' argument in CLIENT LIST. Thanks Roey Prat
    * Added support for CLIENT PAUSE. Thanks Roey Prat
    * Added support for CLIENT ID and CLIENT UNBLOCK. Thanks Itamar Haber
    * GEODIST now returns a None value when referencing a place that does not exist. Thanks qingping209
    * Added a ping() method to pubsub objects. Thanks krishan-carbon
    * Fixed a bug with keys in the INFO dict that contained ':' symbols. Thanks mzalimeni
    * Fixed the select system call retry compatibility with Python 2.x. Thanks lddubeau
    * max_connections is now a valid querystring argument for creating connection pools from URLs. Thanks mmaslowskicc
    * Added the UNLINK command. Thanks yozel
    * Added socket_type option to Connection for configurability. Thanks garlicnation
    * Lock.do_acquire now atomically acquires the lock and sets the expire value via set(nx=True, px=timeout). Thanks 23doors
    * Added 'count' argument to SPOP. Thanks AlirezaSadeghi
    * Fixed an issue parsing client_list responses that contained an '='. Thanks swilly22
* 2.10.6
    * Various performance improvements. Thanks cjsimpson
    * Fixed a bug with SRANDMEMBER where the behavior for `number=0` did not match the spec. Thanks Alex Wang
    * Added HSTRLEN command. Thanks Alexander Putilin
    * Added the TOUCH command. Thanks Anis Jonischkeit
    * Remove unnecessary calls to the server when registering Lua scripts. Thanks Ben Greenberg
    * SET's EX and PX arguments now allow values of zero. Thanks huangqiyin
    * Added PUBSUB {CHANNELS, NUMPAT, NUMSUB} commands. Thanks Angus Pearson
    * PubSub connections that encounter `InterruptedError`s now retry automatically.
      Thanks Carlton Gibson and Seth M. Larson
    * LPUSH and RPUSH commands run on PyPy now correctly return the number of items of the list. Thanks Jeong YunWon
    * Added support to automatically retry socket EINTR errors. Thanks Thomas Steinacher
    * PubSubWorker threads started with `run_in_thread` are now daemonized so the thread shuts down when the running process goes away. Thanks Keith Ainsworth
    * Added support for GEO commands. Thanks Pau Freixes, Alex DeBrie and Abraham Toriz
    * Made client construction from URLs smarter. Thanks Tim Savage
    * Added support for CLUSTER * commands. Thanks Andy Huang
    * The RESTORE command now accepts an optional `replace` boolean. Thanks Yoshinari Takaoka
    * Attempt to connect to a new Sentinel if a TimeoutError occurs. Thanks Bo Lopker
    * Fixed a bug in the client's `__getitem__` where a KeyError would be raised if the value returned by the server is an empty string. Thanks Javier Candeira.
    * Socket timeouts when connecting to a server are now properly raised as TimeoutErrors.
* 2.10.5
    * Allow URL encoded parameters in Redis URLs. Characters like a "/" can now be URL encoded and redis-py will correctly decode them. Thanks Paul Keene.
    * Added support for the WAIT command. Thanks https://github.com/eshizhan
    * Better shutdown support for the PubSub Worker Thread. It now properly cleans up the connection, unsubscribes from any channels and patterns previously subscribed to and consumes any waiting messages on the socket.
    * Added the ability to sleep for a brief period in the event of a WatchError occurring. Thanks Joshua Harlow.
    * Fixed a bug with pipeline error reporting when dealing with characters in error messages that could not be encoded to the connection's character set. Thanks Hendrik Muhs.
    * Fixed a bug in Sentinel connections that would inadvertently connect to the master when the connection pool resets. Thanks https://github.com/df3n5
    * Better timeout support in Pubsub get_message. Thanks Andy Isaacson.
    * Fixed a bug with the HiredisParser that would cause the parser to get stuck in an endless loop if a specific number of bytes were delivered from the socket. This fix also increases performance of parsing large responses from the Redis server.
    * Added support for ZREVRANGEBYLEX.
    * ConnectionErrors are now raised if Redis refuses a connection due to the maxclients limit being exceeded. Thanks Roman Karpovich.
    * max_connections can now be set when instantiating client instances. Thanks Ohad Perry.
* 2.10.4 (skipped due to a PyPI snafu)
* 2.10.3
    * Fixed a bug with the bytearray support introduced in 2.10.2. Thanks Josh Owen.
* 2.10.2
    * Added support for Hiredis's new bytearray support. Thanks https://github.com/tzickel
    * POSSIBLE BACKWARDS INCOMPATIBLE CHANGE: Fixed a possible race condition when multiple threads share the same Lock instance with a timeout. Lock tokens are now stored in thread local storage by default. If you have code that acquires a lock in one thread and passes that lock instance to another thread to release it, you need to disable thread local storage. Refer to the doc strings on the Lock class about the thread_local argument information.
    * Fixed a regression in from_url where "charset" and "errors" weren't valid options. "encoding" and "encoding_errors" are still accepted and preferred.
    * The "charset" and "errors" options have been deprecated. Passing either to StrictRedis.__init__ or from_url will still work but will also emit a DeprecationWarning. Instead use the "encoding" and "encoding_errors" options.
    * Fixed a compatibility bug with Python 3 when the server closes a connection.
    * Added BITPOS command. Thanks https://github.com/jettify.
    * Fixed a bug when attempting to send large values to Redis in a Pipeline.
* 2.10.1
    * Fixed a bug where Sentinel connections to a server that's no longer a master and receives a READONLY error will disconnect and reconnect to the master.
* 2.10.0
    * Discontinued support for Python 2.5. Upgrade. You'll be happier.
    * The HiRedis parser will now properly raise ConnectionErrors.
    * Completely refactored PubSub support. Fixes all known PubSub bugs and adds a bunch of new features. Docs can be found in the README under the new "Publish / Subscribe" section.
    * Added the new HyperLogLog commands (PFADD, PFCOUNT, PFMERGE). Thanks Pepijn de Vos and Vincent Ohprecio.
    * Updated TTL and PTTL commands with Redis 2.8+ semantics. Thanks Markus Kaiserswerth.
    * *SCAN commands now return a long (int on Python3) cursor value rather than the string representation. This might be slightly backwards incompatible in code using *SCAN commands loops such as "while cursor != '0':".
    * Added extra *SCAN commands that return iterators instead of the normal [cursor, data] type. Use scan_iter, hscan_iter, sscan_iter, and zscan_iter for iterators. Thanks Mathieu Longtin.
    * Added support for SLOWLOG commands. Thanks Rick van Hattem.
    * Added lexicographical commands ZRANGEBYLEX, ZREMRANGEBYLEX, and ZLEXCOUNT for sorted sets.
    * Connection objects now support an optional argument, socket_read_size, indicating how much data to read during each socket.recv() call. After benchmarking, increased the default size to 64k, which dramatically improves performance when fetching large values, such as many results in a pipeline or a large (>1MB) string value.
    * Improved the pack_command and send_packed_command functions to increase performance when sending large (>1MB) values.
    * Sentinel Connections to master servers now detect when a READONLY error is encountered and disconnect themselves and all other active connections to the same master so that the new master can be discovered.
    * Fixed Sentinel state parsing on Python 3.
    * Added support for SENTINEL MONITOR, SENTINEL REMOVE, and SENTINEL SET commands. Thanks Greg Murphy.
    * INFO output that doesn't follow the "key:value" format will now be appended to a key named "__raw__" in the INFO dictionary. Thanks Pedro Larroy.
    * The "vagrant" directory contains a complete vagrant environment for redis-py developers. The environment runs a Redis master, a Redis slave, and 3 Sentinels. Future iterations of the test suite will incorporate more integration style tests, ensuring things like failover happen correctly.
    * It's now possible to create connection pool instances from a URL. StrictRedis.from_url() now uses this feature to create a connection pool instance and use that when creating a new client instance. Thanks https://github.com/chillipino
    * When creating client instances or connection pool instances from a URL, it's now possible to pass additional options to the connection pool with querystring arguments.
    * Fixed a bug where some encodings (like utf-16) were unusable on Python 3 as command names and literals would get encoded.
    * Added an SSLConnection class that allows for secure connections through stunnel or other means. Construct an SSL connection with the ssl=True option on client classes, using the rediss:// scheme in a URL, or by passing the SSLConnection class to a connection pool's connection_class argument. Thanks https://github.com/oranagra.
    * Added a socket_connect_timeout option to control how long to wait while establishing a TCP connection before timing out. This lets the client fail fast when attempting to connect to a downed server while keeping a more lenient timeout for all other socket operations.
    * Added TCP Keep-alive support by passing the socket_keepalive=True option. Finer grain control can be achieved using the socket_keepalive_options option which expects a dictionary with any of the keys (socket.TCP_KEEPIDLE, socket.TCP_KEEPCNT, socket.TCP_KEEPINTVL) and integers for values. Thanks Yossi Gottlieb.
    * Added a `retry_on_timeout` option that controls how socket.timeout errors are handled. By default it is set to False and will cause the client to raise a TimeoutError anytime a socket.timeout is encountered. If `retry_on_timeout` is set to True, the client will retry a command that timed out once like other `socket.error`s.
    * Completely refactored the Lock system. There is now a LuaLock class that's used when the Redis server is capable of running Lua scripts along with a fallback class for Redis servers < 2.6. The new locks fix several subtle race conditions that the old lock could face. In addition, a new method, "extend", is available on lock instances that allows a lock owner to extend the amount of time they have the lock for. Thanks to Eli Finkelshteyn and https://github.com/chillipino for contributions.
* 2.9.1
    * IPv6 support. Thanks https://github.com/amashinchi
* 2.9.0
    * Performance improvement for packing commands when using the PythonParser. Thanks Guillaume Viot.
    * Executing an empty pipeline transaction no longer sends MULTI/EXEC to the server. Thanks EliFinkelshteyn.
    * Errors when authenticating (incorrect password) and selecting a database now close the socket.
    * Full Sentinel support thanks to Vitja Makarov. Thanks!
    * Better repr support for client and connection pool instances. Thanks Mark Roberts.
    * Error messages that the server sends to the client are now included in the client error message. Thanks Sangjin Lim.
    * Added the SCAN, SSCAN, HSCAN, and ZSCAN commands. Thanks Jingchao Hu.
    * ResponseErrors generated by pipeline execution provide additional context including the position of the command in the pipeline and the actual command text that generated the error.
    * ConnectionPools now play nicer in threaded environments that fork. Thanks Christian Joergensen.
* 2.8.0
    * redis-py should play better with gevent when a gevent Timeout is raised. Thanks leifkb.
    * Added SENTINEL command. Thanks Anna Janackova.
    * Fixed a bug where pipelines could potentially corrupt a connection if the MULTI command generated a ResponseError. Thanks EliFinkelshteyn for the report.
    * Connections now call socket.shutdown() prior to socket.close() to ensure communication ends immediately per the note at https://docs.python.org/2/library/socket.html#socket.socket.close Thanks to David Martin for pointing this out.
    * Lock checks are now based on floats rather than ints. Thanks Vitja Makarov.
* 2.7.6
    * Added CONFIG RESETSTAT command. Thanks Yossi Gottlieb.
    * Fixed a bug introduced in 2.7.3 that caused issues with script objects and pipelines. Thanks Carpentier Pierre-Francois.
    * Converted redis-py's test suite to use the awesome py.test library.
    * Fixed a bug introduced in 2.7.5 that prevented a ConnectionError from being raised when the Redis server is LOADING data.
    * Added a BusyLoadingError exception that's raised when the Redis server is starting up and not accepting commands yet.
      BusyLoadingError subclasses ConnectionError, which is what this state previously raised. Thanks Yossi Gottlieb.
* 2.7.5
    * DEL, HDEL and ZREM commands now return the numbers of keys deleted instead of just True/False.
    * from_url now supports URIs with a port number. Thanks Aaron Westendorf.
* 2.7.4
    * Added missing INCRBY method. Thanks Krzysztof Dorosz.
    * SET now accepts the EX, PX, NX and XX options from Redis 2.6.12. These options will generate errors if used when connected to a Redis server < 2.6.12. Thanks George Yoshida.
* 2.7.3
    * Fixed a bug with BRPOPLPUSH and lists with empty strings.
    * All empty except: clauses have been replaced to only catch Exception subclasses. This prevents a KeyboardInterrupt from triggering exception handlers. Thanks Lucian Branescu Mihaila.
    * All exceptions that are the result of redis server errors now share a common Exception subclass, ServerError. Thanks Matt Robenolt.
    * Prevent DISCARD from being called if MULTI wasn't also called. Thanks Pete Aykroyd.
    * SREM now returns an integer indicating the number of items removed from the set. Thanks https://github.com/ronniekk.
    * Fixed a bug with BGSAVE and BGREWRITEAOF response callbacks with Python3. Thanks Nathan Wan.
    * Added CLIENT GETNAME and CLIENT SETNAME commands. Thanks https://github.com/bitterb.
    * It's now possible to use len() on a pipeline instance to determine the number of commands that will be executed. Thanks Jon Parise.
    * Fixed a bug in INFO's parse routine with floating point numbers. Thanks Ali Onur Uyar.
    * Fixed a bug with BITCOUNT to allow `start` and `end` to both be zero. Thanks Tim Bart.
    * The transaction() method now accepts a boolean keyword argument, value_from_callable. By default, or if False is passed, the transaction() method will return the value of the pipeline's execution. Otherwise, it will return whatever func() returns.
    * Python3 compatibility fix ensuring we're not already bytes(). Thanks Salimane Adjao Moustapha.
    * Added PSETEX. Thanks YAMAMOTO Takashi.
    * Added a BlockingConnectionPool to limit the number of connections that can be created. Thanks James Arthur.
    * SORT now accepts a `groups` option that if specified, will return tuples of n-length, where n is the number of keys specified in the GET argument. This allows for convenient row-based iteration. Thanks Ionuț Arțăriși.
* 2.7.2
    * Parse errors are now *always* raised on multi/exec pipelines, regardless of the `raise_on_error` flag. See https://groups.google.com/forum/?hl=en&fromgroups=#!topic/redis-db/VUiEFT8U8U0 for more info.
* 2.7.1
    * Packaged tests with source code
* 2.7.0
    * Added BITOP and BITCOUNT commands. Thanks Mark Tozzi.
    * Added the TIME command. Thanks Jason Knight.
    * Added support for LUA scripting. Thanks to Angus Peart, Drew Smathers, Issac Kelly, Louis-Philippe Perron, Sean Bleier, Jeffrey Kaditz, and Dvir Volk for various patches and contributions to this feature.
    * Changed the default error handling in pipelines. By default, the first error in a pipeline will now be raised. A new parameter to the pipeline's execute, `raise_on_error`, can be set to False to keep the old behavior of embedding the exception instances in the result.
    * Fixed a bug with pipelines where parse errors won't corrupt the socket.
    * Added the optional `number` argument to SRANDMEMBER for use with Redis 2.6+ servers.
    * Added PEXPIRE/PEXPIREAT/PTTL commands. Thanks Luper Rouch.
    * Added INCRBYFLOAT/HINCRBYFLOAT commands. Thanks Nikita Uvarov.
    * High precision floating point values won't lose their precision when being sent to the Redis server. Thanks Jason Oster and Oleg Pudeyev.
    * Added CLIENT LIST/CLIENT KILL commands
* 2.6.2
    * `from_url` is now available as a classmethod on client classes. Thanks Jon Parise for the patch.
    * Fixed several encoding errors resulting from the Python 3.x support.
* 2.6.1
    * Python 3.x support! Big thanks to Alex Grönholm.
    * Fixed a bug in the PythonParser's read_response that could hide an error from the client (#251).
* 2.6.0
    * Changed (p)subscribe and (p)unsubscribe to no longer return messages indicating the channel was subscribed/unsubscribed to. These messages are available in the listen() loop instead. This is to prevent the following scenario:
        * Client A is subscribed to "foo"
        * Client B publishes message to "foo"
        * Client A subscribes to channel "bar" at the same time.
      Prior to this change, the subscribe() call would return the published messages on "foo" rather than the subscription confirmation to "bar".
    * Added support for GETRANGE, thanks Jean-Philippe Caruana
    * A new setting "decode_responses" specifies whether return values from Redis commands get decoded automatically using the client's charset value. Thanks to Frankie Dintino for the patch.
* 2.4.13
    * redis.from_url() can take a URL representing a Redis connection string and return a client object. Thanks Kenneth Reitz for the patch.
* 2.4.12
    * ConnectionPool is now fork-safe. Thanks Josiah Carlson for the patch.
* 2.4.11
    * AuthenticationError will now be correctly raised if an invalid password is supplied.
    * If Hiredis is unavailable, the HiredisParser will raise a RedisError if selected manually.
    * Made the INFO command more tolerant of changes to Redis's formatting. Fix for #217.
* 2.4.10
    * Buffer reads from socket in the PythonParser. Fix for a Windows-specific bug (#205).
    * Added the OBJECT and DEBUG OBJECT commands.
    * Added __del__ methods for classes that hold on to resources that need to be cleaned up. This should prevent resource leakage when these objects leave scope due to misuse or unhandled exceptions. Thanks David Wolever for the suggestion.
    * Added the ECHO command for completeness.
    * Fixed a bug where attempting to subscribe to a PubSub channel of a Redis server that's down would blow out the stack. Fixes #179 and #195. Thanks Ovidiu Predescu for the test case.
    * StrictRedis's TTL command now returns a -1 when querying a key with no expiration. The Redis class continues to return None.
    * ZADD and SADD now return integer values indicating the number of items added. Thanks Homer Strong.
    * Renamed the base client class to StrictRedis, replacing ZADD and LREM in favor of their official argument order. The Redis class is now a subclass of StrictRedis, implementing the legacy redis-py implementations of ZADD and LREM. Docs have been updated to suggest the use of StrictRedis.
    * SETEX in StrictRedis is now compliant with the official Redis SETEX command. The name, value, time implementation moved to "Redis" for backwards compatibility.
* 2.4.9
    * Removed socket retry logic in Connection. It is the responsibility of the caller to determine if the command is safe and can be retried. Thanks David Wolever.
    * Added some extra guards around various types of exceptions being raised when sending or parsing data. Thanks David Wolever and Denis Bilenko.
* 2.4.8
    * Imported with_statement from __future__ for Python 2.5 compatibility.
* 2.4.7
    * Fixed a bug where some connections were not getting released back to the connection pool after pipeline execution.
    * Pipelines can now be used as context managers. This is the preferred way of use to ensure that connections get cleaned up properly. Thanks David Wolever.
    * Added a convenience method called transaction() on the base Redis class. This method eliminates much of the boilerplate used when using pipelines to watch Redis keys. See the documentation for details on usage.
* 2.4.6
    * Variadic arguments for SADD, SREM, ZREM, HDEL, LPUSH, and RPUSH. Thanks Raphaël Vinot.
    * (CRITICAL) Fixed an error in the Hiredis parser that occasionally caused the socket connection to become corrupted and unusable. This became noticeable once connection pools started to be used.
    * ZRANGE, ZREVRANGE, ZRANGEBYSCORE, and ZREVRANGEBYSCORE now take an additional optional argument, score_cast_func, which is a callable used to cast the score value in the return type. The default is float.
    * Removed the PUBLISH method from the PubSub class. Connections that are [P]SUBSCRIBEd cannot issue PUBLISH commands, so it doesn't make sense to have it here.
    * Pipelines now contain WATCH and UNWATCH. Calling WATCH or UNWATCH from the base client class will result in a deprecation warning. After WATCHing one or more keys, the pipeline will be placed in immediate execution mode until UNWATCH or MULTI are called. Refer to the new pipeline docs in the README for more information. Thanks to David Wolever and Randall Leeds for greatly helping with this.
* 2.4.5
    * The PythonParser now works better when reading zero length strings.
* 2.4.4
    * Fixed a typo introduced in 2.4.3
* 2.4.3
    * Fixed a bug in the UnixDomainSocketConnection caused when trying to form an error message after a socket error.
* 2.4.2
    * Fixed a bug in pipeline that caused an exception while trying to reconnect after a connection timeout.
* 2.4.1
    * Fixed a bug in the PythonParser if disconnect is called before connect.
* 2.4.0
    * WARNING: 2.4 contains several backwards incompatible changes.
    * Completely refactored Connection objects. Moved much of the Redis protocol packing for requests here, and eliminated the nasty dependencies it had on the client to do AUTH and SELECT commands on connect.
    * Connection objects now have a parser attribute. Parsers are responsible for reading data Redis sends. Two parsers ship with redis-py: a PythonParser and the HiRedis parser. redis-py will automatically use the HiRedis parser if you have the Python hiredis module installed, otherwise it will fall back to the PythonParser. You can force one or the other, or even an external one, by passing the `parser_class` argument to ConnectionPool.
    * Added a UnixDomainSocketConnection for users wanting to talk to the Redis instance running on a local machine only. You can use this connection by passing it to the `connection_class` argument of the ConnectionPool.
    * Connections no longer derive from threading.local. See threading.local note below.
    * ConnectionPool has been completely refactored. The ConnectionPool now maintains a list of connections. The redis-py client only hangs on to a ConnectionPool instance, calling get_connection() anytime it needs to send a command. When get_connection() is called, the command name and any keys involved in the command are passed as arguments. Subclasses of ConnectionPool could use this information to identify the shard the keys belong to and return a connection to it.
      ConnectionPool also implements disconnect() to force all connections in the pool to disconnect from the Redis server.
    * redis-py no longer supports the SELECT command. You can still connect to a specific database by specifying it when instantiating a client instance or by creating a connection pool. If you need to talk to multiple databases within your application, you should use a separate client instance for each database you want to talk to.
    * Completely refactored Publish/Subscribe support. The subscribe and listen commands are no longer available on the redis-py Client class. Instead, the `pubsub` method returns an instance of the PubSub class which contains all publish/subscribe support. Note, you can still PUBLISH from the redis-py client class if you desire.
    * Removed support for all previously deprecated commands or options.
    * redis-py no longer uses threading.local in any way. Since the Client class no longer holds on to a connection, it's no longer needed. You can now pass client instances between threads, and commands run on those threads will retrieve an available connection from the pool, use it and release it. It should now be trivial to use redis-py with eventlet or greenlet.
    * ZADD now accepts pairs of value=score keyword arguments. This should help resolve the long standing #72. The older value and score arguments have been deprecated in favor of the keyword argument style.
    * Client instances now get their own copy of RESPONSE_CALLBACKS. The new set_response_callback method adds a user defined callback to the instance.
    * Support Jython, fixing #97. Thanks to Adam Vandenberg for the patch.
    * Using __getitem__ now properly raises a KeyError when the key is not found. Thanks Ionuț Arțăriși for the patch.
    * Newer Redis versions return a LOADING message for some commands while the database is loading from disk during server start. This could cause problems with SELECT. We now force a socket disconnection prior to raising a ResponseError so subsequent connections have to reconnect and re-select the appropriate database. Thanks to Benjamin Anderson for finding this and fixing.
* 2.2.4
    * WARNING: Potential backwards incompatible change - Changed order of parameters of ZREVRANGEBYSCORE to match those of the actual Redis command. This is only backwards-incompatible if you were passing max and min via keyword args. If passing by normal args, nothing in user code should have to change. Thanks Stéphane Angel for the fix.
    * Fixed INFO to properly parse the Redis data correctly for both 2.2.x and 2.3+. Thanks Stéphane Angel for the fix.
    * Lock objects now store their timeout value as a float. This allows floats to be used as timeout values. No changes to existing code required.
    * WATCH now supports multiple keys. Thanks Rich Schumacher.
    * Broke out some code that was Python 2.4 incompatible. redis-py should now be usable on 2.4, but this hasn't actually been tested. Thanks Dan Colish for the patch.
    * Optimized some code using izip and islice. Should have a pretty good speed up on larger data sets. Thanks Dan Colish.
    * Better error handling when submitting an empty mapping to HMSET. Thanks Dan Colish.
    * Subscription status is now reset after every (re)connection.
* 2.2.3
    * Added support for Hiredis. To use, simply "pip install hiredis" or "easy_install hiredis". Thanks to Pieter Noordhuis for the hiredis-py bindings and the patch to redis-py.
    * The connection class is chosen based on whether hiredis is installed or not.
      To force the use of the PythonConnection, simply create your own ConnectionPool instance with the connection_class argument assigned to the PythonConnection class.
    * Added missing command ZREVRANGEBYSCORE. Thanks Jay Baird for the patch.
    * The INFO command should be parsed correctly on 2.2.x server versions and is backwards compatible with older versions. Thanks Brett Hoerner.
* 2.2.2
    * Fixed a bug in ZREVRANK where retrieving the rank of a value not in the zset would raise an error.
    * Fixed a bug in Connection.send where the errno import was getting overwritten by a local variable.
    * Fixed a bug in SLAVEOF when promoting an existing slave to a master.
    * Reverted change of download URL back to redis-VERSION.tar.gz. 2.2.1's change of this actually broke PyPI for pip installs. Sorry!
* 2.2.1
    * Changed archive name to redis-py-VERSION.tar.gz to not conflict with the Redis server archive.
* 2.2.0
    * Implemented SLAVEOF
    * Implemented CONFIG as config_get and config_set
    * Implemented GETBIT/SETBIT
    * Implemented BRPOPLPUSH
    * Implemented STRLEN
    * Implemented PERSIST
    * Implemented SETRANGE

File: redis-py-3.5.3/INSTALL

Please use python setup.py install and report errors to Andy McCurdy (sedrik@gmail.com)

File: redis-py-3.5.3/LICENSE

Copyright (c) 2012 Andy McCurdy

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

File: redis-py-3.5.3/MANIFEST.in

include CHANGES
include INSTALL
include LICENSE
include README.rst
exclude __pycache__
recursive-include tests *
recursive-exclude tests *.pyc

File: redis-py-3.5.3/README.rst

redis-py
========

The Python interface to the Redis key-value store.

.. image:: https://secure.travis-ci.org/andymccurdy/redis-py.svg?branch=master
        :target: https://travis-ci.org/andymccurdy/redis-py
.. image:: https://readthedocs.org/projects/redis-py/badge/?version=stable&style=flat
        :target: https://redis-py.readthedocs.io/en/stable/
.. image:: https://badge.fury.io/py/redis.svg
        :target: https://pypi.org/project/redis/
.. image:: https://codecov.io/gh/andymccurdy/redis-py/branch/master/graph/badge.svg
        :target: https://codecov.io/gh/andymccurdy/redis-py

Python 2 Compatibility Note
---------------------------

redis-py 3.5.x will be the last version of redis-py that supports Python 2.
The 3.5.x line will continue to get bug fixes and security patches that support Python 2 until August 1, 2020. redis-py 4.0 will be the next major version and will require Python 3.5+. Installation ------------ redis-py requires a running Redis server. See `Redis's quickstart `_ for installation instructions. redis-py can be installed using `pip` similar to other Python packages. Do not use `sudo` with `pip`. It is usually good to work in a `virtualenv `_ or `venv `_ to avoid conflicts with other package managers and Python projects. For a quick introduction see `Python Virtual Environments in Five Minutes `_. To install redis-py, simply: .. code-block:: bash $ pip install redis or from source: .. code-block:: bash $ python setup.py install Getting Started --------------- .. code-block:: pycon >>> import redis >>> r = redis.Redis(host='localhost', port=6379, db=0) >>> r.set('foo', 'bar') True >>> r.get('foo') b'bar' By default, all responses are returned as `bytes` in Python 3 and `str` in Python 2. The user is responsible for decoding to Python 3 strings or Python 2 unicode objects. If **all** string responses from a client should be decoded, the user can specify `decode_responses=True` to `Redis.__init__`. In this case, any Redis command that returns a string type will be decoded with the `encoding` specified. Upgrading from redis-py 2.X to 3.0 ---------------------------------- redis-py 3.0 introduces many new features but required a number of backwards incompatible changes to be made in the process. This section attempts to provide an upgrade path for users migrating from 2.X to 3.0. Python Version Support ^^^^^^^^^^^^^^^^^^^^^^ redis-py 3.0 supports Python 2.7 and Python 3.5+. Client Classes: Redis and StrictRedis ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ redis-py 3.0 drops support for the legacy "Redis" client class. "StrictRedis" has been renamed to "Redis" and an alias named "StrictRedis" is provided so that users previously using "StrictRedis" can continue to run unchanged. The 2.X "Redis" class provided alternative implementations of a few commands. This confused users (rightfully so) and caused a number of support issues. To make things easier going forward, it was decided to drop support for these alternate implementations and instead focus on a single client class. 2.X users that are already using StrictRedis don't have to change the class name. StrictRedis will continue to work for the foreseeable future. 2.X users that are using the Redis class will have to make changes if they use any of the following commands: * SETEX: The argument order has changed. The new order is (name, time, value). * LREM: The argument order has changed. The new order is (name, num, value). * TTL and PTTL: The return value is now always an int and matches the official Redis command (>0 indicates the timeout, -1 indicates that the key exists but that it has no expire time set, -2 indicates that the key does not exist) SSL Connections ^^^^^^^^^^^^^^^ redis-py 3.0 changes the default value of the `ssl_cert_reqs` option from `None` to `'required'`. See `Issue 1016 `_. This change enforces hostname validation when accepting a cert from a remote SSL terminator. If the terminator doesn't properly set the hostname on the cert this will cause redis-py 3.0 to raise a ConnectionError. This check can be disabled by setting `ssl_cert_reqs` to `None`. Note that doing so removes the security check. Do so at your own risk. 
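As a minimal sketch of both modes (the host name below is a placeholder, not a real server), the default validating connection and the explicitly unvalidated one look like this:

.. code-block:: pycon

    >>> import redis
    >>> # default behavior: the server certificate is validated ('required')
    >>> r = redis.Redis(host='redis.example.com', port=6379, ssl=True)
    >>> # explicitly disable certificate validation, accepting the risk
    >>> r = redis.Redis(host='redis.example.com', port=6379, ssl=True,
    ...                 ssl_cert_reqs=None)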
It has been reported that SSL certs received from AWS ElastiCache do not have proper hostnames and turning off hostname verification is currently required. MSET, MSETNX and ZADD ^^^^^^^^^^^^^^^^^^^^^ These commands all accept a mapping of key/value pairs. In redis-py 2.X this mapping could be specified as ``*args`` or as ``**kwargs``. Both of these styles caused issues when Redis introduced optional flags to ZADD. Relying on ``*args`` caused issues with the optional argument order, especially in Python 2.7. Relying on ``**kwargs`` caused potential collision issues of user keys with the argument names in the method signature. To resolve this, redis-py 3.0 has changed these three commands to all accept a single positional argument named mapping that is expected to be a dict. For MSET and MSETNX, the dict is a mapping of key-names -> values. For ZADD, the dict is a mapping of element-names -> score. MSET, MSETNX and ZADD now look like: .. code-block:: python def mset(self, mapping): def msetnx(self, mapping): def zadd(self, name, mapping, nx=False, xx=False, ch=False, incr=False): All 2.X users that use these commands must modify their code to supply keys and values as a dict to these commands. ZINCRBY ^^^^^^^ redis-py 2.X accidentally modified the argument order of ZINCRBY, swapping the order of value and amount. ZINCRBY now looks like: .. code-block:: python def zincrby(self, name, amount, value): All 2.X users that rely on ZINCRBY must swap the order of amount and value for the command to continue to work as intended. Encoding of User Input ^^^^^^^^^^^^^^^^^^^^^^ redis-py 3.0 only accepts user data as bytes, strings or numbers (ints, longs and floats). Attempting to specify a key or a value as any other type will raise a DataError exception. redis-py 2.X attempted to coerce any type of input into a string. While occasionally convenient, this caused all sorts of hidden errors when users passed boolean values (which were coerced to 'True' or 'False'), a None value (which was coerced to 'None') or other values, such as user defined types. All 2.X users should make sure that the keys and values they pass into redis-py are either bytes, strings or numbers. Locks ^^^^^ redis-py 3.0 drops support for the pipeline-based Lock and now only supports the Lua-based lock. In doing so, LuaLock has been renamed to Lock. This also means that redis-py Lock objects require Redis server 2.6 or greater. 2.X users that were explicitly referring to "LuaLock" will have to now refer to "Lock" instead. Locks as Context Managers ^^^^^^^^^^^^^^^^^^^^^^^^^ redis-py 3.0 now raises a LockError when using a lock as a context manager and the lock cannot be acquired within the specified timeout. This is more of a bug fix than a backwards incompatible change. However, given an error is now raised where none was before, this might alarm some users. 2.X users should make sure they're wrapping their lock code in a try/catch like this: .. code-block:: python try: with r.lock('my-lock-key', blocking_timeout=5) as lock: # code you want executed only after the lock has been acquired except LockError: # the lock wasn't acquired API Reference ------------- The `official Redis command documentation `_ does a great job of explaining each command in detail. redis-py attempts to adhere to the official command syntax. There are a few exceptions: * **SELECT**: Not implemented. See the explanation in the Thread Safety section below. * **DEL**: 'del' is a reserved keyword in the Python syntax. Therefore redis-py uses 'delete' instead. 
* **MULTI/EXEC**: These are implemented as part of the Pipeline class. The pipeline is wrapped with the MULTI and EXEC statements by default when it is executed, which can be disabled by specifying transaction=False. See more about Pipelines below. * **SUBSCRIBE/LISTEN**: Similar to pipelines, PubSub is implemented as a separate class as it places the underlying connection in a state where it can't execute non-pubsub commands. Calling the pubsub method from the Redis client will return a PubSub instance where you can subscribe to channels and listen for messages. You can only call PUBLISH from the Redis client (see `this comment on issue #151 `_ for details). * **SCAN/SSCAN/HSCAN/ZSCAN**: The \*SCAN commands are implemented as they exist in the Redis documentation. In addition, each command has an equivalent iterator method. These are purely for convenience so the user doesn't have to keep track of the cursor while iterating. Use the scan_iter/sscan_iter/hscan_iter/zscan_iter methods for this behavior. More Detail ----------- Connection Pools ^^^^^^^^^^^^^^^^ Behind the scenes, redis-py uses a connection pool to manage connections to a Redis server. By default, each Redis instance you create will in turn create its own connection pool. You can override this behavior and use an existing connection pool by passing an already created connection pool instance to the connection_pool argument of the Redis class. You may choose to do this in order to implement client side sharding or have fine-grain control of how connections are managed. .. code-block:: pycon >>> pool = redis.ConnectionPool(host='localhost', port=6379, db=0) >>> r = redis.Redis(connection_pool=pool) Connections ^^^^^^^^^^^ ConnectionPools manage a set of Connection instances. redis-py ships with two types of Connections. The default, Connection, is a normal TCP socket based connection. The UnixDomainSocketConnection allows for clients running on the same device as the server to connect via a unix domain socket. To use a UnixDomainSocketConnection connection, simply pass the unix_socket_path argument, which is a string to the unix domain socket file. Additionally, make sure the unixsocket parameter is defined in your redis.conf file. It's commented out by default. .. code-block:: pycon >>> r = redis.Redis(unix_socket_path='/tmp/redis.sock') You can create your own Connection subclasses as well. This may be useful if you want to control the socket behavior within an async framework. To instantiate a client class using your own connection, you need to create a connection pool, passing your class to the connection_class argument. Other keyword parameters you pass to the pool will be passed to the class specified during initialization. .. code-block:: pycon >>> pool = redis.ConnectionPool(connection_class=YourConnectionClass, your_arg='...', ...) Connections maintain an open socket to the Redis server. Sometimes these sockets are interrupted or disconnected for a variety of reasons. For example, network appliances, load balancers and other services that sit between clients and servers are often configured to kill connections that remain idle for a given threshold. When a connection becomes disconnected, the next command issued on that connection will fail and redis-py will raise a ConnectionError to the caller. This allows each application that uses redis-py to handle errors in a way that's fitting for that specific application. 
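A rough sketch of that per-command error handling might look like the following; the key name and the fallback behavior are illustrative choices, not something redis-py prescribes:

.. code-block:: pycon

    >>> import redis
    >>> r = redis.Redis(host='localhost', port=6379)
    >>> try:
    ...     value = r.get('foo')
    ... except redis.exceptions.ConnectionError:
    ...     # the underlying socket was disconnected; decide whether to retry,
    ...     # fall back to a default, or surface the error to the caller
    ...     value = None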
However, constant error handling can be verbose and cumbersome, especially when socket disconnections happen frequently in many production environments. To combat this, redis-py can issue regular health checks to assess the liveness of a connection just before issuing a command. Users can pass ``health_check_interval=N`` to the Redis or ConnectionPool classes or as a query argument within a Redis URL. The value of ``health_check_interval`` must be an integer. A value of ``0``, the default, disables health checks. Any positive integer will enable health checks. Health checks are performed just before a command is executed if the underlying connection has been idle for more than ``health_check_interval`` seconds. For example, ``health_check_interval=30`` will ensure that a health check is run on any connection that has been idle for 30 or more seconds just before a command is executed on that connection. If your application is running in an environment that disconnects idle connections after 30 seconds you should set the ``health_check_interval`` option to a value less than 30. This option also works on any PubSub connection that is created from a client with ``health_check_interval`` enabled. PubSub users need to ensure that ``get_message()`` or ``listen()`` is called more frequently than every ``health_check_interval`` seconds. It is assumed that most workloads already do this. If your PubSub use case doesn't call ``get_message()`` or ``listen()`` frequently, you should call ``pubsub.check_health()`` explicitly on a regular basis. Parsers ^^^^^^^ Parser classes provide a way to control how responses from the Redis server are parsed. redis-py ships with two parser classes, the PythonParser and the HiredisParser. By default, redis-py will attempt to use the HiredisParser if you have the hiredis module installed and will fall back to the PythonParser otherwise. Hiredis is a C library maintained by the core Redis team. Pieter Noordhuis was kind enough to create Python bindings. Using Hiredis can provide up to a 10x speed improvement in parsing responses from the Redis server. The performance increase is most noticeable when retrieving many pieces of data, such as from LRANGE or SMEMBERS operations. Hiredis is available on PyPI, and can be installed via pip just like redis-py. .. code-block:: bash $ pip install hiredis Response Callbacks ^^^^^^^^^^^^^^^^^^ The client class uses a set of callbacks to cast Redis responses to the appropriate Python type. There are a number of these callbacks defined on the Redis client class in a dictionary called RESPONSE_CALLBACKS. Custom callbacks can be added on a per-instance basis using the set_response_callback method. This method accepts two arguments: a command name and the callback. Callbacks added in this manner are only valid on the instance the callback is added to. If you want to define or override a callback globally, you should make a subclass of the Redis client and add your callback to its RESPONSE_CALLBACKS class dictionary. Response callbacks take at least one parameter: the response from the Redis server. Keyword arguments may also be accepted in order to further control how to interpret the response. These keyword arguments are specified during the command's call to execute_command. The ZRANGE implementation demonstrates the use of response callback keyword arguments with its "withscores" argument. Thread Safety ^^^^^^^^^^^^^ Redis client instances can safely be shared between threads.
Internally, connection instances are only retrieved from the connection pool during command execution, and returned to the pool directly after. Command execution never modifies state on the client instance. However, there is one caveat: the Redis SELECT command. The SELECT command allows you to switch the database currently in use by the connection. That database remains selected until another is selected or until the connection is closed. This creates an issue in that connections could be returned to the pool that are connected to a different database. As a result, redis-py does not implement the SELECT command on client instances. If you use multiple Redis databases within the same application, you should create a separate client instance (and possibly a separate connection pool) for each database. It is not safe to pass PubSub or Pipeline objects between threads. Pipelines ^^^^^^^^^ Pipelines are a subclass of the base Redis class that provides support for buffering multiple commands to the server in a single request. They can be used to dramatically increase the performance of groups of commands by reducing the number of back-and-forth TCP packets between the client and server. Pipelines are quite simple to use: .. code-block:: pycon >>> r = redis.Redis(...) >>> r.set('bing', 'baz') >>> # Use the pipeline() method to create a pipeline instance >>> pipe = r.pipeline() >>> # The following commands are buffered >>> pipe.set('foo', 'bar') >>> pipe.get('bing') >>> # the EXECUTE call sends all buffered commands to the server, returning >>> # a list of responses, one for each command. >>> pipe.execute() [True, b'baz'] For ease of use, all commands being buffered into the pipeline return the pipeline object itself. Therefore calls can be chained like: .. code-block:: pycon >>> pipe.set('foo', 'bar').sadd('faz', 'baz').incr('auto_number').execute() [True, True, 6] In addition, pipelines can also ensure the buffered commands are executed atomically as a group. This happens by default. If you want to disable the atomic nature of a pipeline but still want to buffer commands, you can turn off transactions. .. code-block:: pycon >>> pipe = r.pipeline(transaction=False) A common issue occurs when requiring atomic transactions but needing to retrieve values from Redis beforehand for use within the transaction. For instance, let's assume that the INCR command didn't exist and we need to build an atomic version of INCR in Python. The completely naive implementation could GET the value, increment it in Python, and SET the new value back. However, this is not atomic because multiple clients could be doing this at the same time, each getting the same value from GET. Enter the WATCH command. WATCH provides the ability to monitor one or more keys prior to starting a transaction. If any of those keys change prior to the execution of that transaction, the entire transaction will be canceled and a WatchError will be raised. To implement our own client-side INCR command, we could do something like this: .. code-block:: pycon >>> with r.pipeline() as pipe: ... while True: ... try: ... # put a WATCH on the key that holds our sequence value ... pipe.watch('OUR-SEQUENCE-KEY') ... # after WATCHing, the pipeline is put into immediate execution ... # mode until we tell it to start buffering commands again. ... # this allows us to get the current value of our sequence ... current_value = pipe.get('OUR-SEQUENCE-KEY') ... next_value = int(current_value) + 1 ... # now we can put the pipeline back into buffered mode with MULTI ...
pipe.multi() ... pipe.set('OUR-SEQUENCE-KEY', next_value) ... # and finally, execute the pipeline (the set command) ... pipe.execute() ... # if a WatchError wasn't raised during execution, everything ... # we just did happened atomically. ... break ... except WatchError: ... # another client must have changed 'OUR-SEQUENCE-KEY' between ... # the time we started WATCHing it and the pipeline's execution. ... # our best bet is to just retry. ... continue Note that, because the Pipeline must bind to a single connection for the duration of a WATCH, care must be taken to ensure that the connection is returned to the connection pool by calling the reset() method. If the Pipeline is used as a context manager (as in the example above) reset() will be called automatically. Of course you can do this the manual way by explicitly calling reset(): .. code-block:: pycon >>> pipe = r.pipeline() >>> while True: ... try: ... pipe.watch('OUR-SEQUENCE-KEY') ... ... ... pipe.execute() ... break ... except WatchError: ... continue ... finally: ... pipe.reset() A convenience method named "transaction" exists for handling all the boilerplate of catching and retrying watch errors. It takes a callable that should expect a single parameter, a pipeline object, and any number of keys to be WATCHed. Our client-side INCR command above can be written like this, which is much easier to read: .. code-block:: pycon >>> def client_side_incr(pipe): ... current_value = pipe.get('OUR-SEQUENCE-KEY') ... next_value = int(current_value) + 1 ... pipe.multi() ... pipe.set('OUR-SEQUENCE-KEY', next_value) >>> >>> r.transaction(client_side_incr, 'OUR-SEQUENCE-KEY') [True] Be sure to call `pipe.multi()` in the callable passed to `Redis.transaction` prior to any write commands. Publish / Subscribe ^^^^^^^^^^^^^^^^^^^ redis-py includes a `PubSub` object that subscribes to channels and listens for new messages. Creating a `PubSub` object is easy. .. code-block:: pycon >>> r = redis.Redis(...) >>> p = r.pubsub() Once a `PubSub` instance is created, channels and patterns can be subscribed to. .. code-block:: pycon >>> p.subscribe('my-first-channel', 'my-second-channel', ...) >>> p.psubscribe('my-*', ...) The `PubSub` instance is now subscribed to those channels/patterns. The subscription confirmations can be seen by reading messages from the `PubSub` instance. .. code-block:: pycon >>> p.get_message() {'pattern': None, 'type': 'subscribe', 'channel': b'my-second-channel', 'data': 1} >>> p.get_message() {'pattern': None, 'type': 'subscribe', 'channel': b'my-first-channel', 'data': 2} >>> p.get_message() {'pattern': None, 'type': 'psubscribe', 'channel': b'my-*', 'data': 3} Every message read from a `PubSub` instance will be a dictionary with the following keys. * **type**: One of the following: 'subscribe', 'unsubscribe', 'psubscribe', 'punsubscribe', 'message', 'pmessage' * **channel**: The channel [un]subscribed to or the channel a message was published to * **pattern**: The pattern that matched a published message's channel. Will be `None` in all cases except for 'pmessage' types. * **data**: The message data. With [un]subscribe messages, this value will be the number of channels and patterns the connection is currently subscribed to. With [p]message messages, this value will be the actual published message. Let's send a message now. .. code-block:: pycon # the publish method returns the number of matching channel and pattern # subscriptions.
'my-first-channel' matches both the 'my-first-channel' # subscription and the 'my-*' pattern subscription, so this message will # be delivered to 2 channels/patterns >>> r.publish('my-first-channel', 'some data') 2 >>> p.get_message() {'channel': b'my-first-channel', 'data': b'some data', 'pattern': None, 'type': 'message'} >>> p.get_message() {'channel': b'my-first-channel', 'data': b'some data', 'pattern': b'my-*', 'type': 'pmessage'} Unsubscribing works just like subscribing. If no arguments are passed to [p]unsubscribe, all channels or patterns will be unsubscribed from. .. code-block:: pycon >>> p.unsubscribe() >>> p.punsubscribe('my-*') >>> p.get_message() {'channel': b'my-second-channel', 'data': 2, 'pattern': None, 'type': 'unsubscribe'} >>> p.get_message() {'channel': b'my-first-channel', 'data': 1, 'pattern': None, 'type': 'unsubscribe'} >>> p.get_message() {'channel': b'my-*', 'data': 0, 'pattern': None, 'type': 'punsubscribe'} redis-py also allows you to register callback functions to handle published messages. Message handlers take a single argument, the message, which is a dictionary just like the examples above. To subscribe to a channel or pattern with a message handler, pass the channel or pattern name as a keyword argument with its value being the callback function. When a message is read on a channel or pattern with a message handler, the message dictionary is created and passed to the message handler. In this case, a `None` value is returned from get_message() since the message was already handled. .. code-block:: pycon >>> def my_handler(message): ... print('MY HANDLER: ', message['data']) >>> p.subscribe(**{'my-channel': my_handler}) # read the subscribe confirmation message >>> p.get_message() {'pattern': None, 'type': 'subscribe', 'channel': b'my-channel', 'data': 1} >>> r.publish('my-channel', 'awesome data') 1 # for the message handler to work, we need to tell the instance to read data. # this can be done in several ways (read more below). we'll just use # the familiar get_message() function for now >>> message = p.get_message() MY HANDLER: awesome data # note here that the my_handler callback printed the string above. # `message` is None because the message was handled by our handler. >>> print(message) None If your application is not interested in the (sometimes noisy) subscribe/unsubscribe confirmation messages, you can ignore them by passing `ignore_subscribe_messages=True` to `r.pubsub()`. This will cause all subscribe/unsubscribe messages to be read, but they won't bubble up to your application. .. code-block:: pycon >>> p = r.pubsub(ignore_subscribe_messages=True) >>> p.subscribe('my-channel') >>> p.get_message() # hides the subscribe message and returns None >>> r.publish('my-channel', 'my data') 1 >>> p.get_message() {'channel': b'my-channel', 'data': b'my data', 'pattern': None, 'type': 'message'} There are three different strategies for reading messages. The examples above have been using `pubsub.get_message()`. Behind the scenes, `get_message()` uses the system's 'select' module to quickly poll the connection's socket. If there's data available to be read, `get_message()` will read it, format the message and return it or pass it to a message handler. If there's no data to be read, `get_message()` will immediately return None. This makes it trivial to integrate into an existing event loop inside your application. ..
code-block:: pycon >>> while True: >>> message = p.get_message() >>> if message: >>> # do something with the message >>> time.sleep(0.001) # be nice to the system :) Older versions of redis-py only read messages with `pubsub.listen()`. listen() is a generator that blocks until a message is available. If your application doesn't need to do anything else but receive and act on messages from redis, listen() is an easy way to get up and running. .. code-block:: pycon >>> for message in p.listen(): ... # do something with the message The third option runs an event loop in a separate thread. `pubsub.run_in_thread()` creates a new thread and starts the event loop. The thread object is returned to the caller of `run_in_thread()`. The caller can use the `thread.stop()` method to shut down the event loop and thread. Behind the scenes, this is simply a wrapper around `get_message()` that runs in a separate thread, essentially creating a tiny non-blocking event loop for you. `run_in_thread()` takes an optional `sleep_time` argument. If specified, the event loop will call `time.sleep()` with the value in each iteration of the loop. Note: Since we're running in a separate thread, there's no way to handle messages that aren't automatically handled with registered message handlers. Therefore, redis-py prevents you from calling `run_in_thread()` if you're subscribed to patterns or channels that don't have message handlers attached. .. code-block:: pycon >>> p.subscribe(**{'my-channel': my_handler}) >>> thread = p.run_in_thread(sleep_time=0.001) # the event loop is now running in the background processing messages # when it's time to shut it down... >>> thread.stop() A PubSub object adheres to the same encoding semantics as the client instance it was created from. Any channel or pattern that's unicode will be encoded using the `charset` specified on the client before being sent to Redis. If the client's `decode_responses` flag is set to False (the default), the 'channel', 'pattern' and 'data' values in message dictionaries will be byte strings (str on Python 2, bytes on Python 3). If the client's `decode_responses` is True, then the 'channel', 'pattern' and 'data' values will be automatically decoded to unicode strings using the client's `charset`. PubSub objects remember what channels and patterns they are subscribed to. In the event of a disconnection such as a network error or timeout, the PubSub object will re-subscribe to all prior channels and patterns when reconnecting. Messages that were published while the client was disconnected cannot be delivered. When you're finished with a PubSub object, call its `.close()` method to shut down the connection. .. code-block:: pycon >>> p = r.pubsub() >>> ... >>> p.close() The PUBSUB set of subcommands CHANNELS, NUMSUB and NUMPAT are also supported: .. code-block:: pycon >>> r.pubsub_channels() [b'foo', b'bar'] >>> r.pubsub_numsub('foo', 'bar') [(b'foo', 9001), (b'bar', 42)] >>> r.pubsub_numsub('baz') [(b'baz', 0)] >>> r.pubsub_numpat() 1204 Monitor ^^^^^^^ redis-py includes a `Monitor` object that streams every command processed by the Redis server. Use `listen()` on the `Monitor` object to block until a command is received. .. code-block:: pycon >>> r = redis.Redis(...) >>> with r.monitor() as m: >>> for command in m.listen(): >>> print(command) Lua Scripting ^^^^^^^^^^^^^ redis-py supports the EVAL, EVALSHA, and SCRIPT commands. However, there are a number of edge cases that make these commands tedious to use in real world scenarios.
Therefore, redis-py exposes a Script object that makes scripting much easier to use. To create a Script instance, use the `register_script` function on a client instance passing the Lua code as the first argument. `register_script` returns a Script instance that you can use throughout your code. The following trivial Lua script accepts two parameters: the name of a key and a multiplier value. The script fetches the value stored in the key, multiplies it with the multiplier value and returns the result. .. code-block:: pycon >>> r = redis.Redis() >>> lua = """ ... local value = redis.call('GET', KEYS[1]) ... value = tonumber(value) ... return value * ARGV[1]""" >>> multiply = r.register_script(lua) `multiply` is now a Script instance that is invoked by calling it like a function. Script instances accept the following optional arguments: * **keys**: A list of key names that the script will access. This becomes the KEYS list in Lua. * **args**: A list of argument values. This becomes the ARGV list in Lua. * **client**: A redis-py Client or Pipeline instance that will invoke the script. If client isn't specified, the client that initially created the Script instance (the one that `register_script` was invoked from) will be used. Continuing the example from above: .. code-block:: pycon >>> r.set('foo', 2) >>> multiply(keys=['foo'], args=[5]) 10 The value of key 'foo' is set to 2. When multiply is invoked, the 'foo' key is passed to the script along with the multiplier value of 5. Lua executes the script and returns the result, 10. Script instances can be executed using a different client instance, even one that points to a completely different Redis server. .. code-block:: pycon >>> r2 = redis.Redis('redis2.example.com') >>> r2.set('foo', 3) >>> multiply(keys=['foo'], args=[5], client=r2) 15 The Script object ensures that the Lua script is loaded into Redis's script cache. In the event of a NOSCRIPT error, it will load the script and retry executing it. Script objects can also be used in pipelines. The pipeline instance should be passed as the client argument when calling the script. Care is taken to ensure that the script is registered in Redis's script cache just prior to pipeline execution. .. code-block:: pycon >>> pipe = r.pipeline() >>> pipe.set('foo', 5) >>> multiply(keys=['foo'], args=[5], client=pipe) >>> pipe.execute() [True, 25] Sentinel support ^^^^^^^^^^^^^^^^ redis-py can be used together with `Redis Sentinel `_ to discover Redis nodes. You need to have at least one Sentinel daemon running in order to use redis-py's Sentinel support. Connecting redis-py to the Sentinel instance(s) is easy. You can use a Sentinel connection to discover the master and slaves network addresses: .. code-block:: pycon >>> from redis.sentinel import Sentinel >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1) >>> sentinel.discover_master('mymaster') ('127.0.0.1', 6379) >>> sentinel.discover_slaves('mymaster') [('127.0.0.1', 6380)] You can also create Redis client connections from a Sentinel instance. You can connect to either the master (for write operations) or a slave (for read-only operations). .. code-block:: pycon >>> master = sentinel.master_for('mymaster', socket_timeout=0.1) >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1) >>> master.set('foo', 'bar') >>> slave.get('foo') b'bar' The master and slave objects are normal Redis instances with their connection pool bound to the Sentinel instance. 
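Because they are ordinary Redis clients, the features described earlier (pipelines, pub/sub, response callbacks) work on them unchanged. A brief sketch continuing the session above (the 'hits' key is illustrative and assumed not to exist yet):

.. code-block:: pycon

    >>> pipe = master.pipeline()
    >>> pipe.incr('hits')
    >>> pipe.expire('hits', 60)
    >>> pipe.execute()
    [1, True]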
When a Sentinel-backed client attempts to establish a connection, it first queries the Sentinel servers to determine an appropriate host to connect to. If no server is found, a MasterNotFoundError or SlaveNotFoundError is raised. Both exceptions are subclasses of ConnectionError. When trying to connect to a slave, the Sentinel connection pool will iterate over the list of slaves until it finds one that can be connected to. If no slaves can be connected to, a connection will be established with the master. See `Guidelines for Redis clients with support for Redis Sentinel `_ to learn more about Redis Sentinel. Scan Iterators ^^^^^^^^^^^^^^ The \*SCAN commands introduced in Redis 2.8 can be cumbersome to use. While these commands are fully supported, redis-py also exposes the following methods that return Python iterators for convenience: `scan_iter`, `hscan_iter`, `sscan_iter` and `zscan_iter`. .. code-block:: pycon >>> for key, value in (('A', '1'), ('B', '2'), ('C', '3')): ... r.set(key, value) >>> for key in r.scan_iter(): ... print(key, r.get(key)) A 1 B 2 C 3 Author ^^^^^^ redis-py is developed and maintained by Andy McCurdy (sedrik@gmail.com). It can be found here: https://github.com/andymccurdy/redis-py Special thanks to: * Ludovico Magnocavallo, author of the original Python Redis client, from which some of the socket code is still used. * Alexander Solovyov for ideas on the generic response callback system. * Paul Hubbard for initial packaging support. redis-py-3.5.3/RELEASE000066400000000000000000000004111366526254200142700ustar00rootroot00000000000000Release Process =============== 1. Make sure all tests pass. 2. Make sure CHANGES is up to date. 3. Update redis.__init__.__version__ and commit 4. git tag 5. git push --tags 6. rm dist/* && python setup.py sdist bdist_wheel && twine upload dist/* redis-py-3.5.3/benchmarks/000077500000000000000000000000001366526254200154065ustar00rootroot00000000000000redis-py-3.5.3/benchmarks/__init__.py000066400000000000000000000000001366526254200175050ustar00rootroot00000000000000redis-py-3.5.3/benchmarks/base.py000066400000000000000000000026321366526254200166750ustar00rootroot00000000000000import functools import itertools import redis import sys import timeit from redis._compat import izip class Benchmark(object): ARGUMENTS = () def __init__(self): self._client = None def get_client(self, **kwargs): # eventually make this more robust and take optional args from # argparse if self._client is None or kwargs: defaults = { 'db': 9 } defaults.update(kwargs) pool = redis.ConnectionPool(**defaults) self._client = redis.Redis(connection_pool=pool) return self._client def setup(self, **kwargs): pass def run(self, **kwargs): pass def run_benchmark(self): group_names = [group['name'] for group in self.ARGUMENTS] group_values = [group['values'] for group in self.ARGUMENTS] for value_set in itertools.product(*group_values): pairs = list(izip(group_names, value_set)) arg_string = ', '.join(['%s=%s' % (p[0], p[1]) for p in pairs]) sys.stdout.write('Benchmark: %s... 
' % arg_string) sys.stdout.flush() kwargs = dict(pairs) setup = functools.partial(self.setup, **kwargs) run = functools.partial(self.run, **kwargs) t = timeit.timeit(stmt=run, setup=setup, number=1000) sys.stdout.write('%f\n' % t) sys.stdout.flush() redis-py-3.5.3/benchmarks/basic_operations.py000066400000000000000000000123351366526254200213100ustar00rootroot00000000000000from __future__ import print_function import redis import time import sys from functools import wraps from argparse import ArgumentParser if sys.version_info[0] == 3: long = int def parse_args(): parser = ArgumentParser() parser.add_argument('-n', type=int, help='Total number of requests (default 100000)', default=100000) parser.add_argument('-P', type=int, help=('Pipeline requests.' ' Default 1 (no pipeline).'), default=1) parser.add_argument('-s', type=int, help='Data size of SET/GET value in bytes (default 2)', default=2) args = parser.parse_args() return args def run(): args = parse_args() r = redis.Redis() r.flushall() set_str(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) set_int(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) get_str(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) get_int(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) incr(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) lpush(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) lrange_300(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) lpop(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) hmset(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) def timer(func): @wraps(func) def wrapper(*args, **kwargs): start = time.clock() ret = func(*args, **kwargs) duration = time.clock() - start if 'num' in kwargs: count = kwargs['num'] else: count = args[1] print('{} - {} Requests'.format(func.__name__, count)) print('Duration = {}'.format(duration)) print('Rate = {}'.format(count/duration)) print('') return ret return wrapper @timer def set_str(conn, num, pipeline_size, data_size): if pipeline_size > 1: conn = conn.pipeline() format_str = '{:0<%d}' % data_size set_data = format_str.format('a') for i in range(num): conn.set('set_str:%d' % i, set_data) if pipeline_size > 1 and i % pipeline_size == 0: conn.execute() if pipeline_size > 1: conn.execute() @timer def set_int(conn, num, pipeline_size, data_size): if pipeline_size > 1: conn = conn.pipeline() format_str = '{:0<%d}' % data_size set_data = int(format_str.format('1')) for i in range(num): conn.set('set_int:%d' % i, set_data) if pipeline_size > 1 and i % pipeline_size == 0: conn.execute() if pipeline_size > 1: conn.execute() @timer def get_str(conn, num, pipeline_size, data_size): if pipeline_size > 1: conn = conn.pipeline() for i in range(num): conn.get('set_str:%d' % i) if pipeline_size > 1 and i % pipeline_size == 0: conn.execute() if pipeline_size > 1: conn.execute() @timer def get_int(conn, num, pipeline_size, data_size): if pipeline_size > 1: conn = conn.pipeline() for i in range(num): conn.get('set_int:%d' % i) if pipeline_size > 1 and i % pipeline_size == 0: conn.execute() if pipeline_size > 1: conn.execute() @timer def incr(conn, num, pipeline_size, *args, **kwargs): if pipeline_size > 1: conn = conn.pipeline() for i in range(num): conn.incr('incr_key') if pipeline_size > 1 and i % pipeline_size == 0: conn.execute() if pipeline_size > 1: conn.execute() @timer def lpush(conn, num, pipeline_size, data_size): if pipeline_size > 1: conn = conn.pipeline() format_str = '{:0<%d}' % data_size set_data = 
int(format_str.format('1')) for i in range(num): conn.lpush('lpush_key', set_data) if pipeline_size > 1 and i % pipeline_size == 0: conn.execute() if pipeline_size > 1: conn.execute() @timer def lrange_300(conn, num, pipeline_size, data_size): if pipeline_size > 1: conn = conn.pipeline() for i in range(num): conn.lrange('lpush_key', i, i+300) if pipeline_size > 1 and i % pipeline_size == 0: conn.execute() if pipeline_size > 1: conn.execute() @timer def lpop(conn, num, pipeline_size, data_size): if pipeline_size > 1: conn = conn.pipeline() for i in range(num): conn.lpop('lpush_key') if pipeline_size > 1 and i % pipeline_size == 0: conn.execute() if pipeline_size > 1: conn.execute() @timer def hmset(conn, num, pipeline_size, data_size): if pipeline_size > 1: conn = conn.pipeline() set_data = {'str_value': 'string', 'int_value': 123456, 'long_value': long(123456), 'float_value': 123456.0} for i in range(num): conn.hmset('hmset_key', set_data) if pipeline_size > 1 and i % pipeline_size == 0: conn.execute() if pipeline_size > 1: conn.execute() if __name__ == '__main__': run() redis-py-3.5.3/benchmarks/command_packer_benchmark.py000066400000000000000000000064731366526254200227470ustar00rootroot00000000000000import socket from redis.connection import (Connection, SYM_STAR, SYM_DOLLAR, SYM_EMPTY, SYM_CRLF) from redis._compat import imap from base import Benchmark class StringJoiningConnection(Connection): def send_packed_command(self, command, check_health=True): "Send an already packed command to the Redis server" if not self._sock: self.connect() try: self._sock.sendall(command) except socket.error as e: self.disconnect() if len(e.args) == 1: _errno, errmsg = 'UNKNOWN', e.args[0] else: _errno, errmsg = e.args raise ConnectionError("Error %s while writing to socket. %s." % (_errno, errmsg)) except Exception: self.disconnect() raise def pack_command(self, *args): "Pack a series of arguments into a value Redis command" args_output = SYM_EMPTY.join([ SYM_EMPTY.join( (SYM_DOLLAR, str(len(k)).encode(), SYM_CRLF, k, SYM_CRLF)) for k in imap(self.encoder.encode, args)]) output = SYM_EMPTY.join( (SYM_STAR, str(len(args)).encode(), SYM_CRLF, args_output)) return output class ListJoiningConnection(Connection): def send_packed_command(self, command, check_health=True): if not self._sock: self.connect() try: if isinstance(command, str): command = [command] for item in command: self._sock.sendall(item) except socket.error as e: self.disconnect() if len(e.args) == 1: _errno, errmsg = 'UNKNOWN', e.args[0] else: _errno, errmsg = e.args raise ConnectionError("Error %s while writing to socket. %s." 
% (_errno, errmsg)) except Exception: self.disconnect() raise def pack_command(self, *args): output = [] buff = SYM_EMPTY.join( (SYM_STAR, str(len(args)).encode(), SYM_CRLF)) for k in imap(self.encoder.encode, args): if len(buff) > 6000 or len(k) > 6000: buff = SYM_EMPTY.join( (buff, SYM_DOLLAR, str(len(k)).encode(), SYM_CRLF)) output.append(buff) output.append(k) buff = SYM_CRLF else: buff = SYM_EMPTY.join((buff, SYM_DOLLAR, str(len(k)).encode(), SYM_CRLF, k, SYM_CRLF)) output.append(buff) return output class CommandPackerBenchmark(Benchmark): ARGUMENTS = ( { 'name': 'connection_class', 'values': [StringJoiningConnection, ListJoiningConnection] }, { 'name': 'value_size', 'values': [10, 100, 1000, 10000, 100000, 1000000, 10000000, 100000000] }, ) def setup(self, connection_class, value_size): self.get_client(connection_class=connection_class) def run(self, connection_class, value_size): r = self.get_client() x = 'a' * value_size r.set('benchmark', x) if __name__ == '__main__': CommandPackerBenchmark().run_benchmark() redis-py-3.5.3/benchmarks/socket_read_size.py000066400000000000000000000016201366526254200212740ustar00rootroot00000000000000from redis.connection import PythonParser, HiredisParser from base import Benchmark class SocketReadBenchmark(Benchmark): ARGUMENTS = ( { 'name': 'parser', 'values': [PythonParser, HiredisParser] }, { 'name': 'value_size', 'values': [10, 100, 1000, 10000, 100000, 1000000, 10000000, 100000000] }, { 'name': 'read_size', 'values': [4096, 8192, 16384, 32768, 65536, 131072] } ) def setup(self, value_size, read_size, parser): r = self.get_client(parser_class=parser, socket_read_size=read_size) r.set('benchmark', 'a' * value_size) def run(self, value_size, read_size, parser): r = self.get_client() r.get('benchmark') if __name__ == '__main__': SocketReadBenchmark().run_benchmark() redis-py-3.5.3/build_tools/000077500000000000000000000000001366526254200156105ustar00rootroot00000000000000redis-py-3.5.3/build_tools/.bash_profile000066400000000000000000000000361366526254200202450ustar00rootroot00000000000000PATH=$PATH:/var/lib/redis/bin redis-py-3.5.3/build_tools/bootstrap.sh000077500000000000000000000001121366526254200201560ustar00rootroot00000000000000#!/usr/bin/env bash # need make to build redis sudo apt-get install make redis-py-3.5.3/build_tools/build_redis.sh000077500000000000000000000011761366526254200204410ustar00rootroot00000000000000#!/usr/bin/env bash source /home/vagrant/redis-py/build_tools/redis_vars.sh pushd /home/vagrant uninstall_all_sentinel_instances uninstall_all_redis_instances # create a clean directory for redis rm -rf $REDIS_DIR mkdir -p $REDIS_BIN_DIR mkdir -p $REDIS_CONF_DIR mkdir -p $REDIS_SAVE_DIR # download, unpack and build redis mkdir -p $REDIS_DOWNLOAD_DIR cd $REDIS_DOWNLOAD_DIR rm -f $REDIS_PACKAGE rm -rf $REDIS_BUILD_DIR wget http://download.redis.io/releases/$REDIS_PACKAGE tar zxvf $REDIS_PACKAGE cd $REDIS_BUILD_DIR make cp src/redis-server $REDIS_DIR/bin cp src/redis-cli $REDIS_DIR/bin cp src/redis-sentinel $REDIS_DIR/bin popd redis-py-3.5.3/build_tools/install_redis.sh000077500000000000000000000026621366526254200210110ustar00rootroot00000000000000#!/usr/bin/env bash source /home/vagrant/redis-py/build_tools/redis_vars.sh for filename in `ls $VAGRANT_REDIS_CONF_DIR`; do # cuts the order prefix off of the filename, e.g. 
001-master -> master PROCESS_NAME=redis-`echo $filename | cut -f 2- -d -` echo "======================================" echo "INSTALLING REDIS SERVER: $PROCESS_NAME" echo "======================================" # make sure the instance is uninstalled (it should be already) uninstall_instance $PROCESS_NAME # base config mkdir -p $REDIS_CONF_DIR cp $REDIS_BUILD_DIR/redis.conf $REDIS_CONF_DIR/$PROCESS_NAME.conf # override config values from file cat $VAGRANT_REDIS_CONF_DIR/$filename >> $REDIS_CONF_DIR/$PROCESS_NAME.conf # replace placeholder variables in init.d script cp $VAGRANT_DIR/redis_init_script /etc/init.d/$PROCESS_NAME sed -i "s/{{ PROCESS_NAME }}/$PROCESS_NAME/g" /etc/init.d/$PROCESS_NAME # need to read the config file to find out what port this instance will run on port=`grep port $VAGRANT_REDIS_CONF_DIR/$filename | cut -f 2 -d " "` sed -i "s/{{ PORT }}/$port/g" /etc/init.d/$PROCESS_NAME chmod 755 /etc/init.d/$PROCESS_NAME # and tell update-rc.d about it update-rc.d $PROCESS_NAME defaults 98 # save the $PROCESS_NAME into installed instances file echo $PROCESS_NAME >> $REDIS_INSTALLED_INSTANCES_FILE # start redis /etc/init.d/$PROCESS_NAME start done redis-py-3.5.3/build_tools/install_sentinel.sh000077500000000000000000000027201366526254200215170ustar00rootroot00000000000000#!/usr/bin/env bash source /home/vagrant/redis-py/build_tools/redis_vars.sh for filename in `ls $VAGRANT_SENTINEL_CONF_DIR`; do # cuts the order prefix off of the filename, e.g. 001-master -> master PROCESS_NAME=sentinel-`echo $filename | cut -f 2- -d -` echo "=========================================" echo "INSTALLING SENTINEL SERVER: $PROCESS_NAME" echo "=========================================" # make sure the instance is uninstalled (it should be already) uninstall_instance $PROCESS_NAME # base config mkdir -p $REDIS_CONF_DIR cp $REDIS_BUILD_DIR/sentinel.conf $REDIS_CONF_DIR/$PROCESS_NAME.conf # override config values from file cat $VAGRANT_SENTINEL_CONF_DIR/$filename >> $REDIS_CONF_DIR/$PROCESS_NAME.conf # replace placeholder variables in init.d script cp $VAGRANT_DIR/sentinel_init_script /etc/init.d/$PROCESS_NAME sed -i "s/{{ PROCESS_NAME }}/$PROCESS_NAME/g" /etc/init.d/$PROCESS_NAME # need to read the config file to find out what port this instance will run on port=`grep port $VAGRANT_SENTINEL_CONF_DIR/$filename | cut -f 2 -d " "` sed -i "s/{{ PORT }}/$port/g" /etc/init.d/$PROCESS_NAME chmod 755 /etc/init.d/$PROCESS_NAME # and tell update-rc.d about it update-rc.d $PROCESS_NAME defaults 99 # save the $PROCESS_NAME into installed instances file echo $PROCESS_NAME >> $SENTINEL_INSTALLED_INSTANCES_FILE # start redis /etc/init.d/$PROCESS_NAME start done redis-py-3.5.3/build_tools/redis-configs/000077500000000000000000000000001366526254200203445ustar00rootroot00000000000000redis-py-3.5.3/build_tools/redis-configs/001-master000066400000000000000000000002471366526254200220630ustar00rootroot00000000000000pidfile /var/run/redis-master.pid bind * port 6379 daemonize yes unixsocket /tmp/redis_master.sock unixsocketperm 777 dbfilename master.rdb dir /var/lib/redis/backups redis-py-3.5.3/build_tools/redis-configs/002-slave000066400000000000000000000002741366526254200217030ustar00rootroot00000000000000pidfile /var/run/redis-slave.pid bind * port 6380 daemonize yes unixsocket /tmp/redis-slave.sock unixsocketperm 777 dbfilename slave.rdb dir /var/lib/redis/backups slaveof 127.0.0.1 6379 redis-py-3.5.3/build_tools/redis_init_script000077500000000000000000000024241366526254200212550ustar00rootroot00000000000000#!/bin/sh ### 
BEGIN INIT INFO # Provides: redis-server # Required-Start: $syslog # Required-Stop: $syslog # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Start redis-server at boot time # Description: Control redis-server. ### END INIT INFO REDISPORT={{ PORT }} PIDFILE=/var/run/{{ PROCESS_NAME }}.pid CONF=/var/lib/redis/conf/{{ PROCESS_NAME }}.conf EXEC=/var/lib/redis/bin/redis-server CLIEXEC=/var/lib/redis/bin/redis-cli case "$1" in start) if [ -f $PIDFILE ] then echo "$PIDFILE exists, process is already running or crashed" else echo "Starting Redis server..." $EXEC $CONF fi ;; stop) if [ ! -f $PIDFILE ] then echo "$PIDFILE does not exist, process is not running" else PID=$(cat $PIDFILE) echo "Stopping ..." $CLIEXEC -p $REDISPORT shutdown while [ -x /proc/${PID} ] do echo "Waiting for Redis to shutdown ..." sleep 1 done echo "Redis stopped" fi ;; *) echo "Please use start or stop as first argument" ;; esac redis-py-3.5.3/build_tools/redis_vars.sh000077500000000000000000000027171366526254200203170ustar00rootroot00000000000000#!/usr/bin/env bash VAGRANT_DIR=/home/vagrant/redis-py/build_tools VAGRANT_REDIS_CONF_DIR=$VAGRANT_DIR/redis-configs VAGRANT_SENTINEL_CONF_DIR=$VAGRANT_DIR/sentinel-configs REDIS_VERSION=3.2.0 REDIS_DOWNLOAD_DIR=/home/vagrant/redis-downloads REDIS_PACKAGE=redis-$REDIS_VERSION.tar.gz REDIS_BUILD_DIR=$REDIS_DOWNLOAD_DIR/redis-$REDIS_VERSION REDIS_DIR=/var/lib/redis REDIS_BIN_DIR=$REDIS_DIR/bin REDIS_CONF_DIR=$REDIS_DIR/conf REDIS_SAVE_DIR=$REDIS_DIR/backups REDIS_INSTALLED_INSTANCES_FILE=$REDIS_DIR/redis-instances SENTINEL_INSTALLED_INSTANCES_FILE=$REDIS_DIR/sentinel-instances function uninstall_instance() { # Expects $1 to be the init.d filename, e.g. redis-nodename or # sentinel-nodename if [ -a /etc/init.d/$1 ]; then echo "======================================" echo "UNINSTALLING REDIS SERVER: $1" echo "======================================" /etc/init.d/$1 stop update-rc.d -f $1 remove rm -f /etc/init.d/$1 fi; rm -f $REDIS_CONF_DIR/$1.conf } function uninstall_all_redis_instances() { if [ -a $REDIS_INSTALLED_INSTANCES_FILE ]; then cat $REDIS_INSTALLED_INSTANCES_FILE | while read line; do uninstall_instance $line; done; fi } function uninstall_all_sentinel_instances() { if [ -a $SENTINEL_INSTALLED_INSTANCES_FILE ]; then cat $SENTINEL_INSTALLED_INSTANCES_FILE | while read line; do uninstall_instance $line; done; fi } redis-py-3.5.3/build_tools/sentinel-configs/000077500000000000000000000000001366526254200210575ustar00rootroot00000000000000redis-py-3.5.3/build_tools/sentinel-configs/001-1000066400000000000000000000002131366526254200214340ustar00rootroot00000000000000pidfile /var/run/sentinel-1.pid port 26379 daemonize yes # short timeout for sentinel tests sentinel down-after-milliseconds mymaster 500 redis-py-3.5.3/build_tools/sentinel-configs/002-2000066400000000000000000000002131366526254200214360ustar00rootroot00000000000000pidfile /var/run/sentinel-2.pid port 26380 daemonize yes # short timeout for sentinel tests sentinel down-after-milliseconds mymaster 500 redis-py-3.5.3/build_tools/sentinel-configs/003-3000066400000000000000000000002131366526254200214400ustar00rootroot00000000000000pidfile /var/run/sentinel-3.pid port 26381 daemonize yes # short timeout for sentinel tests sentinel down-after-milliseconds mymaster 500 redis-py-3.5.3/build_tools/sentinel_init_script000077500000000000000000000024531366526254200217720ustar00rootroot00000000000000#!/bin/sh ### BEGIN INIT INFO # Provides: redis-sentintel # Required-Start: $syslog # Required-Stop: $syslog # 
Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Start redis-sentinel at boot time # Description: Control redis-sentinel. ### END INIT INFO SENTINELPORT={{ PORT }} PIDFILE=/var/run/{{ PROCESS_NAME }}.pid CONF=/var/lib/redis/conf/{{ PROCESS_NAME }}.conf EXEC=/var/lib/redis/bin/redis-sentinel CLIEXEC=/var/lib/redis/bin/redis-cli case "$1" in start) if [ -f $PIDFILE ] then echo "$PIDFILE exists, process is already running or crashed" else echo "Starting Redis Sentinel..." $EXEC $CONF fi ;; stop) if [ ! -f $PIDFILE ] then echo "$PIDFILE does not exist, process is not running" else PID=$(cat $PIDFILE) echo "Stopping ..." $CLIEXEC -p $SENTINELPORT shutdown while [ -x /proc/${PID} ] do echo "Waiting for Sentinel to shutdown ..." sleep 1 done echo "Sentinel stopped" fi ;; *) echo "Please use start or stop as first argument" ;; esac redis-py-3.5.3/docs/000077500000000000000000000000001366526254200142215ustar00rootroot00000000000000redis-py-3.5.3/docs/Makefile000066400000000000000000000127041366526254200156650ustar00rootroot00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." 
htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/redis-py.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/redis-py.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/redis-py" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/redis-py" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." 
redis-py-3.5.3/docs/_static/000077500000000000000000000000001366526254200156475ustar00rootroot00000000000000redis-py-3.5.3/docs/_static/.keep000066400000000000000000000000001366526254200165620ustar00rootroot00000000000000redis-py-3.5.3/docs/_templates/000077500000000000000000000000001366526254200163565ustar00rootroot00000000000000redis-py-3.5.3/docs/_templates/.keep000066400000000000000000000000001366526254200172710ustar00rootroot00000000000000redis-py-3.5.3/docs/conf.py000066400000000000000000000175251366526254200155320ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # redis-py documentation build configuration file, created by # sphinx-quickstart on Fri Feb 8 00:47:08 2013. # # This file is execfile()d with the current directory set to its containing # dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import os import sys # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.insert(0, os.path.abspath('.')) sys.path.append(os.path.abspath(os.path.pardir)) # -- General configuration ---------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.viewcode'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'redis-py' copyright = u'2016, Andy McCurdy' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = '2.10.5' # The full version, including alpha/beta/rc tags. release = '2.10.5' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # The reST default role (used for this markup: `text`) to use for all # documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. 
#modindex_common_prefix = [] # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'redis-pydoc' # -- Options for LaTeX output ------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass # [howto/manual]). latex_documents = [ ('index', 'redis-py.tex', u'redis-py Documentation', u'Andy McCurdy', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. 
#latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output ------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'redis-py', u'redis-py Documentation', [u'Andy McCurdy'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ----------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'redis-py', u'redis-py Documentation', u'Andy McCurdy', 'redis-py', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' epub_title = u'redis-py' epub_author = u'Andy McCurdy' epub_publisher = u'Andy McCurdy' epub_copyright = u'2011, Andy McCurdy' redis-py-3.5.3/docs/index.rst000066400000000000000000000007301366526254200160620ustar00rootroot00000000000000.. redis-py documentation master file, created by sphinx-quickstart on Thu Jul 28 13:55:57 2011. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. Welcome to redis-py's documentation! ==================================== Indices and tables ------------------ * :ref:`genindex` * :ref:`modindex` * :ref:`search` Contents: --------- .. toctree:: :maxdepth: 2 .. automodule:: redis :members: redis-py-3.5.3/docs/make.bat000066400000000000000000000117541366526254200156360ustar00rootroot00000000000000@ECHO OFF REM Command file for Sphinx documentation if "%SPHINXBUILD%" == "" ( set SPHINXBUILD=sphinx-build ) set BUILDDIR=_build set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . set I18NSPHINXOPTS=%SPHINXOPTS% . if NOT "%PAPER%" == "" ( set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% ) if "%1" == "" goto help if "%1" == "help" ( :help echo.Please use `make ^` where ^ is one of echo. html to make standalone HTML files echo. dirhtml to make HTML files named index.html in directories echo. singlehtml to make a single large HTML file echo. pickle to make pickle files echo. json to make JSON files echo. htmlhelp to make HTML files and a HTML help project echo. qthelp to make HTML files and a qthelp project echo. devhelp to make HTML files and a Devhelp project echo. epub to make an epub echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter echo. text to make text files echo. man to make manual pages echo. texinfo to make Texinfo files echo. gettext to make PO message catalogs echo. changes to make an overview over all changed/added/deprecated items echo. linkcheck to check all external links for integrity echo. doctest to run all doctests embedded in the documentation if enabled goto end ) if "%1" == "clean" ( for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i del /q /s %BUILDDIR%\* goto end ) if "%1" == "html" ( %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html if errorlevel 1 exit /b 1 echo. 
echo.Build finished. The HTML pages are in %BUILDDIR%/html. goto end ) if "%1" == "dirhtml" ( %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. goto end ) if "%1" == "singlehtml" ( %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. goto end ) if "%1" == "pickle" ( %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the pickle files. goto end ) if "%1" == "json" ( %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the JSON files. goto end ) if "%1" == "htmlhelp" ( %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run HTML Help Workshop with the ^ .hhp project file in %BUILDDIR%/htmlhelp. goto end ) if "%1" == "qthelp" ( %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run "qcollectiongenerator" with the ^ .qhcp project file in %BUILDDIR%/qthelp, like this: echo.^> qcollectiongenerator %BUILDDIR%\qthelp\redis-py.qhcp echo.To view the help file: echo.^> assistant -collectionFile %BUILDDIR%\qthelp\redis-py.ghc goto end ) if "%1" == "devhelp" ( %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp if errorlevel 1 exit /b 1 echo. echo.Build finished. goto end ) if "%1" == "epub" ( %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub if errorlevel 1 exit /b 1 echo. echo.Build finished. The epub file is in %BUILDDIR%/epub. goto end ) if "%1" == "latex" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex if errorlevel 1 exit /b 1 echo. echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. goto end ) if "%1" == "text" ( %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text if errorlevel 1 exit /b 1 echo. echo.Build finished. The text files are in %BUILDDIR%/text. goto end ) if "%1" == "man" ( %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man if errorlevel 1 exit /b 1 echo. echo.Build finished. The manual pages are in %BUILDDIR%/man. goto end ) if "%1" == "texinfo" ( %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo if errorlevel 1 exit /b 1 echo. echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. goto end ) if "%1" == "gettext" ( %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale if errorlevel 1 exit /b 1 echo. echo.Build finished. The message catalogs are in %BUILDDIR%/locale. goto end ) if "%1" == "changes" ( %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes if errorlevel 1 exit /b 1 echo. echo.The overview file is in %BUILDDIR%/changes. goto end ) if "%1" == "linkcheck" ( %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck if errorlevel 1 exit /b 1 echo. echo.Link check complete; look for any errors in the above output ^ or in %BUILDDIR%/linkcheck/output.txt. goto end ) if "%1" == "doctest" ( %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest if errorlevel 1 exit /b 1 echo. echo.Testing of doctests in the sources finished, look at the ^ results in %BUILDDIR%/doctest/output.txt. 
goto end ) :end redis-py-3.5.3/redis/000077500000000000000000000000001366526254200143775ustar00rootroot00000000000000redis-py-3.5.3/redis/__init__.py000066400000000000000000000022711366526254200165120ustar00rootroot00000000000000from redis.client import Redis, StrictRedis from redis.connection import ( BlockingConnectionPool, ConnectionPool, Connection, SSLConnection, UnixDomainSocketConnection ) from redis.utils import from_url from redis.exceptions import ( AuthenticationError, AuthenticationWrongNumberOfArgsError, BusyLoadingError, ChildDeadlockedError, ConnectionError, DataError, InvalidResponse, PubSubError, ReadOnlyError, RedisError, ResponseError, TimeoutError, WatchError ) def int_or_str(value): try: return int(value) except ValueError: return value __version__ = '3.5.3' VERSION = tuple(map(int_or_str, __version__.split('.'))) __all__ = [ 'AuthenticationError', 'AuthenticationWrongNumberOfArgsError', 'BlockingConnectionPool', 'BusyLoadingError', 'ChildDeadlockedError', 'Connection', 'ConnectionError', 'ConnectionPool', 'DataError', 'from_url', 'InvalidResponse', 'PubSubError', 'ReadOnlyError', 'Redis', 'RedisError', 'ResponseError', 'SSLConnection', 'StrictRedis', 'TimeoutError', 'UnixDomainSocketConnection', 'WatchError', ] redis-py-3.5.3/redis/_compat.py000066400000000000000000000131021366526254200163700ustar00rootroot00000000000000"""Internal module for Python 2 backwards compatibility.""" # flake8: noqa import errno import socket import sys def sendall(sock, *args, **kwargs): return sock.sendall(*args, **kwargs) def shutdown(sock, *args, **kwargs): return sock.shutdown(*args, **kwargs) def ssl_wrap_socket(context, sock, *args, **kwargs): return context.wrap_socket(sock, *args, **kwargs) # For Python older than 3.5, retry EINTR. if sys.version_info[0] < 3 or (sys.version_info[0] == 3 and sys.version_info[1] < 5): # Adapted from https://bugs.python.org/review/23863/patch/14532/54418 import time # Wrapper for handling interruptable system calls. def _retryable_call(s, func, *args, **kwargs): # Some modules (SSL) use the _fileobject wrapper directly and # implement a smaller portion of the socket interface, thus we # need to let them continue to do so. timeout, deadline = None, 0.0 attempted = False try: timeout = s.gettimeout() except AttributeError: pass if timeout: deadline = time.time() + timeout try: while True: if attempted and timeout: now = time.time() if now >= deadline: raise socket.error(errno.EWOULDBLOCK, "timed out") else: # Overwrite the timeout on the socket object # to take into account elapsed time. s.settimeout(deadline - now) try: attempted = True return func(*args, **kwargs) except socket.error as e: if e.args[0] == errno.EINTR: continue raise finally: # Set the existing timeout back for future # calls. if timeout: s.settimeout(timeout) def recv(sock, *args, **kwargs): return _retryable_call(sock, sock.recv, *args, **kwargs) def recv_into(sock, *args, **kwargs): return _retryable_call(sock, sock.recv_into, *args, **kwargs) else: # Python 3.5 and above automatically retry EINTR def recv(sock, *args, **kwargs): return sock.recv(*args, **kwargs) def recv_into(sock, *args, **kwargs): return sock.recv_into(*args, **kwargs) if sys.version_info[0] < 3: # In Python 3, the ssl module raises socket.timeout whereas it raises # SSLError in Python 2. For compatibility between versions, ensure # socket.timeout is raised for both. 
import functools try: from ssl import SSLError as _SSLError except ImportError: class _SSLError(Exception): """A replacement in case ssl.SSLError is not available.""" pass _EXPECTED_SSL_TIMEOUT_MESSAGES = ( "The handshake operation timed out", "The read operation timed out", "The write operation timed out", ) def _handle_ssl_timeout(func): @functools.wraps(func) def wrapper(*args, **kwargs): try: return func(*args, **kwargs) except _SSLError as e: message = len(e.args) == 1 and unicode(e.args[0]) or '' if any(x in message for x in _EXPECTED_SSL_TIMEOUT_MESSAGES): # Raise socket.timeout for compatibility with Python 3. raise socket.timeout(*e.args) raise return wrapper recv = _handle_ssl_timeout(recv) recv_into = _handle_ssl_timeout(recv_into) sendall = _handle_ssl_timeout(sendall) shutdown = _handle_ssl_timeout(shutdown) ssl_wrap_socket = _handle_ssl_timeout(ssl_wrap_socket) if sys.version_info[0] < 3: from urllib import unquote from urlparse import parse_qs, urlparse from itertools import imap, izip from string import letters as ascii_letters from Queue import Queue # special unicode handling for python2 to avoid UnicodeDecodeError def safe_unicode(obj, *args): """ return the unicode representation of obj """ try: return unicode(obj, *args) except UnicodeDecodeError: # obj is byte string ascii_text = str(obj).encode('string_escape') return unicode(ascii_text) def iteritems(x): return x.iteritems() def iterkeys(x): return x.iterkeys() def itervalues(x): return x.itervalues() def nativestr(x): return x if isinstance(x, str) else x.encode('utf-8', 'replace') def next(x): return x.next() unichr = unichr xrange = xrange basestring = basestring unicode = unicode long = long BlockingIOError = socket.error else: from urllib.parse import parse_qs, unquote, urlparse from string import ascii_letters from queue import Queue def iteritems(x): return iter(x.items()) def iterkeys(x): return iter(x.keys()) def itervalues(x): return iter(x.values()) def nativestr(x): return x if isinstance(x, str) else x.decode('utf-8', 'replace') def safe_unicode(value): if isinstance(value, bytes): value = value.decode('utf-8', 'replace') return str(value) next = next unichr = chr imap = map izip = zip xrange = range basestring = str unicode = str long = int BlockingIOError = BlockingIOError try: # Python 3 from queue import LifoQueue, Empty, Full except ImportError: # Python 2 from Queue import LifoQueue, Empty, Full redis-py-3.5.3/redis/client.py000077500000000000000000004675731366526254200162600ustar00rootroot00000000000000from __future__ import unicode_literals from itertools import chain import datetime import warnings import time import threading import time as mod_time import re import hashlib from redis._compat import (basestring, imap, iteritems, iterkeys, itervalues, izip, long, nativestr, safe_unicode) from redis.connection import (ConnectionPool, UnixDomainSocketConnection, SSLConnection) from redis.lock import Lock from redis.exceptions import ( ConnectionError, DataError, ExecAbortError, NoScriptError, PubSubError, RedisError, ResponseError, TimeoutError, WatchError, ) SYM_EMPTY = b'' EMPTY_RESPONSE = 'EMPTY_RESPONSE' def list_or_args(keys, args): # returns a single new list combining keys and args try: iter(keys) # a string or bytes instance can be iterated, but indicates # keys wasn't passed as a list if isinstance(keys, (basestring, bytes)): keys = [keys] else: keys = list(keys) except TypeError: keys = [keys] if args: keys.extend(args) return keys def timestamp_to_datetime(response): "Converts a unix 
timestamp to a Python datetime object" if not response: return None try: response = int(response) except ValueError: return None return datetime.datetime.fromtimestamp(response) def string_keys_to_dict(key_string, callback): return dict.fromkeys(key_string.split(), callback) def dict_merge(*dicts): merged = {} for d in dicts: merged.update(d) return merged class CaseInsensitiveDict(dict): "Case insensitive dict implementation. Assumes string keys only." def __init__(self, data): for k, v in iteritems(data): self[k.upper()] = v def __contains__(self, k): return super(CaseInsensitiveDict, self).__contains__(k.upper()) def __delitem__(self, k): super(CaseInsensitiveDict, self).__delitem__(k.upper()) def __getitem__(self, k): return super(CaseInsensitiveDict, self).__getitem__(k.upper()) def get(self, k, default=None): return super(CaseInsensitiveDict, self).get(k.upper(), default) def __setitem__(self, k, v): super(CaseInsensitiveDict, self).__setitem__(k.upper(), v) def update(self, data): data = CaseInsensitiveDict(data) super(CaseInsensitiveDict, self).update(data) def parse_debug_object(response): "Parse the results of Redis's DEBUG OBJECT command into a Python dict" # The 'type' of the object is the first item in the response, but isn't # prefixed with a name response = nativestr(response) response = 'type:' + response response = dict(kv.split(':') for kv in response.split()) # parse some expected int values from the string response # note: this cmd isn't spec'd so these may not appear in all redis versions int_fields = ('refcount', 'serializedlength', 'lru', 'lru_seconds_idle') for field in int_fields: if field in response: response[field] = int(response[field]) return response def parse_object(response, infotype): "Parse the results of an OBJECT command" if infotype in ('idletime', 'refcount'): return int_or_none(response) return response def parse_info(response): "Parse the result of Redis's INFO command into a Python dict" info = {} response = nativestr(response) def get_value(value): if ',' not in value or '=' not in value: try: if '.' in value: return float(value) else: return int(value) except ValueError: return value else: sub_dict = {} for item in value.split(','): k, v = item.rsplit('=', 1) sub_dict[k] = get_value(v) return sub_dict for line in response.splitlines(): if line and not line.startswith('#'): if line.find(':') != -1: # Split the info fields into keys and values. # Note that the value may contain ':',
but the 'host:' # pseudo-command is the only case where the key contains ':' key, value = line.split(':', 1) if key == 'cmdstat_host': key, value = line.rsplit(':', 1) info[key] = get_value(value) else: # if the line isn't splittable, append it to the "__raw__" key info.setdefault('__raw__', []).append(line) return info def parse_memory_stats(response, **kwargs): "Parse the results of MEMORY STATS" stats = pairs_to_dict(response, decode_keys=True, decode_string_values=True) for key, value in iteritems(stats): if key.startswith('db.'): stats[key] = pairs_to_dict(value, decode_keys=True, decode_string_values=True) return stats SENTINEL_STATE_TYPES = { 'can-failover-its-master': int, 'config-epoch': int, 'down-after-milliseconds': int, 'failover-timeout': int, 'info-refresh': int, 'last-hello-message': int, 'last-ok-ping-reply': int, 'last-ping-reply': int, 'last-ping-sent': int, 'master-link-down-time': int, 'master-port': int, 'num-other-sentinels': int, 'num-slaves': int, 'o-down-time': int, 'pending-commands': int, 'parallel-syncs': int, 'port': int, 'quorum': int, 'role-reported-time': int, 's-down-time': int, 'slave-priority': int, 'slave-repl-offset': int, 'voted-leader-epoch': int } def parse_sentinel_state(item): result = pairs_to_dict_typed(item, SENTINEL_STATE_TYPES) flags = set(result['flags'].split(',')) for name, flag in (('is_master', 'master'), ('is_slave', 'slave'), ('is_sdown', 's_down'), ('is_odown', 'o_down'), ('is_sentinel', 'sentinel'), ('is_disconnected', 'disconnected'), ('is_master_down', 'master_down')): result[name] = flag in flags return result def parse_sentinel_master(response): return parse_sentinel_state(imap(nativestr, response)) def parse_sentinel_masters(response): result = {} for item in response: state = parse_sentinel_state(imap(nativestr, item)) result[state['name']] = state return result def parse_sentinel_slaves_and_sentinels(response): return [parse_sentinel_state(imap(nativestr, item)) for item in response] def parse_sentinel_get_master(response): return response and (response[0], int(response[1])) or None def nativestr_if_bytes(value): return nativestr(value) if isinstance(value, bytes) else value def pairs_to_dict(response, decode_keys=False, decode_string_values=False): "Create a dict given a list of key/value pairs" if response is None: return {} if decode_keys or decode_string_values: # the iter form is faster, but I don't know how to make that work # with a nativestr() map keys = response[::2] if decode_keys: keys = imap(nativestr, keys) values = response[1::2] if decode_string_values: values = imap(nativestr_if_bytes, values) return dict(izip(keys, values)) else: it = iter(response) return dict(izip(it, it)) def pairs_to_dict_typed(response, type_info): it = iter(response) result = {} for key, value in izip(it, it): if key in type_info: try: value = type_info[key](value) except Exception: # if for some reason the value can't be coerced, just use # the string value pass result[key] = value return result def zset_score_pairs(response, **options): """ If ``withscores`` is specified in the options, return the response as a list of (value, score) pairs """ if not response or not options.get('withscores'): return response score_cast_func = options.get('score_cast_func', float) it = iter(response) return list(izip(it, imap(score_cast_func, it))) def sort_return_tuples(response, **options): """ If ``groups`` is specified, return the response as a list of n-element tuples with n being the value found in options['groups'] """ if not response or not 
options.get('groups'): return response n = options['groups'] return list(izip(*[response[i::n] for i in range(n)])) def int_or_none(response): if response is None: return None return int(response) def nativestr_or_none(response): if response is None: return None return nativestr(response) def parse_stream_list(response): if response is None: return None data = [] for r in response: if r is not None: data.append((r[0], pairs_to_dict(r[1]))) else: data.append((None, None)) return data def pairs_to_dict_with_nativestr_keys(response): return pairs_to_dict(response, decode_keys=True) def parse_list_of_dicts(response): return list(imap(pairs_to_dict_with_nativestr_keys, response)) def parse_xclaim(response, **options): if options.get('parse_justid', False): return response return parse_stream_list(response) def parse_xinfo_stream(response): data = pairs_to_dict(response, decode_keys=True) first = data['first-entry'] if first is not None: data['first-entry'] = (first[0], pairs_to_dict(first[1])) last = data['last-entry'] if last is not None: data['last-entry'] = (last[0], pairs_to_dict(last[1])) return data def parse_xread(response): if response is None: return [] return [[r[0], parse_stream_list(r[1])] for r in response] def parse_xpending(response, **options): if options.get('parse_detail', False): return parse_xpending_range(response) consumers = [{'name': n, 'pending': long(p)} for n, p in response[3] or []] return { 'pending': response[0], 'min': response[1], 'max': response[2], 'consumers': consumers } def parse_xpending_range(response): k = ('message_id', 'consumer', 'time_since_delivered', 'times_delivered') return [dict(izip(k, r)) for r in response] def float_or_none(response): if response is None: return None return float(response) def bool_ok(response): return nativestr(response) == 'OK' def parse_zadd(response, **options): if response is None: return None if options.get('as_score'): return float(response) return int(response) def parse_client_list(response, **options): clients = [] for c in nativestr(response).splitlines(): # Values might contain '=' clients.append(dict(pair.split('=', 1) for pair in c.split(' '))) return clients def parse_config_get(response, **options): response = [nativestr(i) if i is not None else None for i in response] return response and pairs_to_dict(response) or {} def parse_scan(response, **options): cursor, r = response return long(cursor), r def parse_hscan(response, **options): cursor, r = response return long(cursor), r and pairs_to_dict(r) or {} def parse_zscan(response, **options): score_cast_func = options.get('score_cast_func', float) cursor, r = response it = iter(r) return long(cursor), list(izip(it, imap(score_cast_func, it))) def parse_slowlog_get(response, **options): space = ' ' if options.get('decode_responses', False) else b' ' return [{ 'id': item[0], 'start_time': int(item[1]), 'duration': int(item[2]), 'command': space.join(item[3]) } for item in response] def parse_cluster_info(response, **options): response = nativestr(response) return dict(line.split(':') for line in response.splitlines() if line) def _parse_node_line(line): line_items = line.split(' ') node_id, addr, flags, master_id, ping, pong, epoch, \ connected = line.split(' ')[:8] slots = [sl.split('-') for sl in line_items[8:]] node_dict = { 'node_id': node_id, 'flags': flags, 'master_id': master_id, 'last_ping_sent': ping, 'last_pong_rcvd': pong, 'epoch': epoch, 'slots': slots, 'connected': True if connected == 'connected' else False } return addr, node_dict def 
parse_cluster_nodes(response, **options): response = nativestr(response) raw_lines = response if isinstance(response, basestring): raw_lines = response.splitlines() return dict(_parse_node_line(line) for line in raw_lines) def parse_georadius_generic(response, **options): if options['store'] or options['store_dist']: # `store` and `store_dist` can't be combined # with other command arguments. return response if type(response) != list: response_list = [response] else: response_list = response if not options['withdist'] and not options['withcoord']\ and not options['withhash']: # just a bunch of places return response_list cast = { 'withdist': float, 'withcoord': lambda ll: (float(ll[0]), float(ll[1])), 'withhash': int } # zip all output results with each casting function to get # the proper native Python value. f = [lambda x: x] f += [cast[o] for o in ['withdist', 'withhash', 'withcoord'] if options[o]] return [ list(map(lambda fv: fv[0](fv[1]), zip(f, r))) for r in response_list ] def parse_pubsub_numsub(response, **options): return list(zip(response[0::2], response[1::2])) def parse_client_kill(response, **options): if isinstance(response, (long, int)): return int(response) return nativestr(response) == 'OK' def parse_acl_getuser(response, **options): if response is None: return None data = pairs_to_dict(response, decode_keys=True) # convert everything but user-defined data in 'keys' to native strings data['flags'] = list(map(nativestr, data['flags'])) data['passwords'] = list(map(nativestr, data['passwords'])) data['commands'] = nativestr(data['commands']) # split 'commands' into separate 'categories' and 'commands' lists commands, categories = [], [] for command in data['commands'].split(' '): if '@' in command: categories.append(command) else: commands.append(command) data['commands'] = commands data['categories'] = categories data['enabled'] = 'on' in data['flags'] return data class Redis(object): """ Implementation of the Redis protocol. This abstract class provides a Python interface to all Redis commands and an implementation of the Redis protocol.
Connection and Pipeline derive from this, implementing how the commands are sent and received to the Redis server """ RESPONSE_CALLBACKS = dict_merge( string_keys_to_dict( 'AUTH EXPIRE EXPIREAT HEXISTS HMSET MOVE MSETNX PERSIST ' 'PSETEX RENAMENX SISMEMBER SMOVE SETEX SETNX', bool ), string_keys_to_dict( 'BITCOUNT BITPOS DECRBY DEL EXISTS GEOADD GETBIT HDEL HLEN ' 'HSTRLEN INCRBY LINSERT LLEN LPUSHX PFADD PFCOUNT RPUSHX SADD ' 'SCARD SDIFFSTORE SETBIT SETRANGE SINTERSTORE SREM STRLEN ' 'SUNIONSTORE UNLINK XACK XDEL XLEN XTRIM ZCARD ZLEXCOUNT ZREM ' 'ZREMRANGEBYLEX ZREMRANGEBYRANK ZREMRANGEBYSCORE', int ), string_keys_to_dict( 'INCRBYFLOAT HINCRBYFLOAT', float ), string_keys_to_dict( # these return OK, or int if redis-server is >=1.3.4 'LPUSH RPUSH', lambda r: isinstance(r, (long, int)) and r or nativestr(r) == 'OK' ), string_keys_to_dict('SORT', sort_return_tuples), string_keys_to_dict('ZSCORE ZINCRBY GEODIST', float_or_none), string_keys_to_dict( 'FLUSHALL FLUSHDB LSET LTRIM MSET PFMERGE READONLY READWRITE ' 'RENAME SAVE SELECT SHUTDOWN SLAVEOF SWAPDB WATCH UNWATCH ', bool_ok ), string_keys_to_dict('BLPOP BRPOP', lambda r: r and tuple(r) or None), string_keys_to_dict( 'SDIFF SINTER SMEMBERS SUNION', lambda r: r and set(r) or set() ), string_keys_to_dict( 'ZPOPMAX ZPOPMIN ZRANGE ZRANGEBYSCORE ZREVRANGE ZREVRANGEBYSCORE', zset_score_pairs ), string_keys_to_dict('BZPOPMIN BZPOPMAX', \ lambda r: r and (r[0], r[1], float(r[2])) or None), string_keys_to_dict('ZRANK ZREVRANK', int_or_none), string_keys_to_dict('XREVRANGE XRANGE', parse_stream_list), string_keys_to_dict('XREAD XREADGROUP', parse_xread), string_keys_to_dict('BGREWRITEAOF BGSAVE', lambda r: True), { 'ACL CAT': lambda r: list(map(nativestr, r)), 'ACL DELUSER': int, 'ACL GENPASS': nativestr, 'ACL GETUSER': parse_acl_getuser, 'ACL LIST': lambda r: list(map(nativestr, r)), 'ACL LOAD': bool_ok, 'ACL SAVE': bool_ok, 'ACL SETUSER': bool_ok, 'ACL USERS': lambda r: list(map(nativestr, r)), 'ACL WHOAMI': nativestr, 'CLIENT GETNAME': lambda r: r and nativestr(r), 'CLIENT ID': int, 'CLIENT KILL': parse_client_kill, 'CLIENT LIST': parse_client_list, 'CLIENT SETNAME': bool_ok, 'CLIENT UNBLOCK': lambda r: r and int(r) == 1 or False, 'CLIENT PAUSE': bool_ok, 'CLUSTER ADDSLOTS': bool_ok, 'CLUSTER COUNT-FAILURE-REPORTS': lambda x: int(x), 'CLUSTER COUNTKEYSINSLOT': lambda x: int(x), 'CLUSTER DELSLOTS': bool_ok, 'CLUSTER FAILOVER': bool_ok, 'CLUSTER FORGET': bool_ok, 'CLUSTER INFO': parse_cluster_info, 'CLUSTER KEYSLOT': lambda x: int(x), 'CLUSTER MEET': bool_ok, 'CLUSTER NODES': parse_cluster_nodes, 'CLUSTER REPLICATE': bool_ok, 'CLUSTER RESET': bool_ok, 'CLUSTER SAVECONFIG': bool_ok, 'CLUSTER SET-CONFIG-EPOCH': bool_ok, 'CLUSTER SETSLOT': bool_ok, 'CLUSTER SLAVES': parse_cluster_nodes, 'CONFIG GET': parse_config_get, 'CONFIG RESETSTAT': bool_ok, 'CONFIG SET': bool_ok, 'DEBUG OBJECT': parse_debug_object, 'GEOHASH': lambda r: list(map(nativestr_or_none, r)), 'GEOPOS': lambda r: list(map(lambda ll: (float(ll[0]), float(ll[1])) if ll is not None else None, r)), 'GEORADIUS': parse_georadius_generic, 'GEORADIUSBYMEMBER': parse_georadius_generic, 'HGETALL': lambda r: r and pairs_to_dict(r) or {}, 'HSCAN': parse_hscan, 'INFO': parse_info, 'LASTSAVE': timestamp_to_datetime, 'MEMORY PURGE': bool_ok, 'MEMORY STATS': parse_memory_stats, 'MEMORY USAGE': int_or_none, 'OBJECT': parse_object, 'PING': lambda r: nativestr(r) == 'PONG', 'PUBSUB NUMSUB': parse_pubsub_numsub, 'RANDOMKEY': lambda r: r and r or None, 'SCAN': parse_scan, 'SCRIPT EXISTS': lambda r: 
list(imap(bool, r)), 'SCRIPT FLUSH': bool_ok, 'SCRIPT KILL': bool_ok, 'SCRIPT LOAD': nativestr, 'SENTINEL GET-MASTER-ADDR-BY-NAME': parse_sentinel_get_master, 'SENTINEL MASTER': parse_sentinel_master, 'SENTINEL MASTERS': parse_sentinel_masters, 'SENTINEL MONITOR': bool_ok, 'SENTINEL REMOVE': bool_ok, 'SENTINEL SENTINELS': parse_sentinel_slaves_and_sentinels, 'SENTINEL SET': bool_ok, 'SENTINEL SLAVES': parse_sentinel_slaves_and_sentinels, 'SET': lambda r: r and nativestr(r) == 'OK', 'SLOWLOG GET': parse_slowlog_get, 'SLOWLOG LEN': int, 'SLOWLOG RESET': bool_ok, 'SSCAN': parse_scan, 'TIME': lambda x: (int(x[0]), int(x[1])), 'XCLAIM': parse_xclaim, 'XGROUP CREATE': bool_ok, 'XGROUP DELCONSUMER': int, 'XGROUP DESTROY': bool, 'XGROUP SETID': bool_ok, 'XINFO CONSUMERS': parse_list_of_dicts, 'XINFO GROUPS': parse_list_of_dicts, 'XINFO STREAM': parse_xinfo_stream, 'XPENDING': parse_xpending, 'ZADD': parse_zadd, 'ZSCAN': parse_zscan, } ) @classmethod def from_url(cls, url, db=None, **kwargs): """ Return a Redis client object configured from the given URL For example:: redis://[[username]:[password]]@localhost:6379/0 rediss://[[username]:[password]]@localhost:6379/0 unix://[[username]:[password]]@/path/to/socket.sock?db=0 Three URL schemes are supported: - ``redis://`` creates a normal TCP socket connection - ``rediss://`` creates an SSL-wrapped TCP socket connection - ``unix://`` creates a Unix Domain Socket connection There are several ways to specify a database number. The parse function will return the first specified option: 1. A ``db`` querystring option, e.g. redis://localhost?db=0 2. If using the redis:// scheme, the path argument of the url, e.g. redis://localhost/0 3. The ``db`` argument to this function. If none of these options are specified, db=0 is used. Any additional querystring arguments and keyword arguments will be passed along to the ConnectionPool class's initializer. In the case of conflicting arguments, querystring arguments always win. """ connection_pool = ConnectionPool.from_url(url, db=db, **kwargs) return cls(connection_pool=connection_pool) def __init__(self, host='localhost', port=6379, db=0, password=None, socket_timeout=None, socket_connect_timeout=None, socket_keepalive=None, socket_keepalive_options=None, connection_pool=None, unix_socket_path=None, encoding='utf-8', encoding_errors='strict', charset=None, errors=None, decode_responses=False, retry_on_timeout=False, ssl=False, ssl_keyfile=None, ssl_certfile=None, ssl_cert_reqs='required', ssl_ca_certs=None, ssl_check_hostname=False, max_connections=None, single_connection_client=False, health_check_interval=0, client_name=None, username=None): if not connection_pool: if charset is not None: warnings.warn(DeprecationWarning( '"charset" is deprecated. Use "encoding" instead')) encoding = charset if errors is not None: warnings.warn(DeprecationWarning( '"errors" is deprecated.
Use "encoding_errors" instead')) encoding_errors = errors kwargs = { 'db': db, 'username': username, 'password': password, 'socket_timeout': socket_timeout, 'encoding': encoding, 'encoding_errors': encoding_errors, 'decode_responses': decode_responses, 'retry_on_timeout': retry_on_timeout, 'max_connections': max_connections, 'health_check_interval': health_check_interval, 'client_name': client_name } # based on input, setup appropriate connection args if unix_socket_path is not None: kwargs.update({ 'path': unix_socket_path, 'connection_class': UnixDomainSocketConnection }) else: # TCP specific options kwargs.update({ 'host': host, 'port': port, 'socket_connect_timeout': socket_connect_timeout, 'socket_keepalive': socket_keepalive, 'socket_keepalive_options': socket_keepalive_options, }) if ssl: kwargs.update({ 'connection_class': SSLConnection, 'ssl_keyfile': ssl_keyfile, 'ssl_certfile': ssl_certfile, 'ssl_cert_reqs': ssl_cert_reqs, 'ssl_ca_certs': ssl_ca_certs, 'ssl_check_hostname': ssl_check_hostname, }) connection_pool = ConnectionPool(**kwargs) self.connection_pool = connection_pool self.connection = None if single_connection_client: self.connection = self.connection_pool.get_connection('_') self.response_callbacks = CaseInsensitiveDict( self.__class__.RESPONSE_CALLBACKS) def __repr__(self): return "%s<%s>" % (type(self).__name__, repr(self.connection_pool)) def set_response_callback(self, command, callback): "Set a custom Response Callback" self.response_callbacks[command] = callback def pipeline(self, transaction=True, shard_hint=None): """ Return a new pipeline object that can queue multiple commands for later execution. ``transaction`` indicates whether all commands should be executed atomically. Apart from making a group of operations atomic, pipelines are useful for reducing the back-and-forth overhead between the client and server. """ return Pipeline( self.connection_pool, self.response_callbacks, transaction, shard_hint) def transaction(self, func, *watches, **kwargs): """ Convenience method for executing the callable `func` as a transaction while watching all keys specified in `watches`. The 'func' callable should expect a single argument which is a Pipeline object. """ shard_hint = kwargs.pop('shard_hint', None) value_from_callable = kwargs.pop('value_from_callable', False) watch_delay = kwargs.pop('watch_delay', None) with self.pipeline(True, shard_hint) as pipe: while True: try: if watches: pipe.watch(*watches) func_value = func(pipe) exec_value = pipe.execute() return func_value if value_from_callable else exec_value except WatchError: if watch_delay is not None and watch_delay > 0: time.sleep(watch_delay) continue def lock(self, name, timeout=None, sleep=0.1, blocking_timeout=None, lock_class=None, thread_local=True): """ Return a new Lock object using key ``name`` that mimics the behavior of threading.Lock. If specified, ``timeout`` indicates a maximum life for the lock. By default, it will remain locked until release() is called. ``sleep`` indicates the amount of time to sleep per loop iteration when the lock is in blocking mode and another client is currently holding the lock. ``blocking_timeout`` indicates the maximum amount of time in seconds to spend trying to acquire the lock. A value of ``None`` indicates continue trying forever. ``blocking_timeout`` can be specified as a float or integer, both representing the number of seconds to wait. ``lock_class`` forces the specified lock implementation. 
``thread_local`` indicates whether the lock token is placed in thread-local storage. By default, the token is placed in thread local storage so that a thread only sees its token, not a token set by another thread. Consider the following timeline: time: 0, thread-1 acquires `my-lock`, with a timeout of 5 seconds. thread-1 sets the token to "abc" time: 1, thread-2 blocks trying to acquire `my-lock` using the Lock instance. time: 5, thread-1 has not yet completed. redis expires the lock key. time: 5, thread-2 acquires `my-lock` now that it's available. thread-2 sets the token to "xyz" time: 6, thread-1 finishes its work and calls release(). if the token is *not* stored in thread local storage, then thread-1 would see the token value as "xyz" and would be able to successfully release thread-2's lock. In some use cases it's necessary to disable thread local storage. For example, consider code where one thread acquires a lock and passes that lock instance to a worker thread to release later. If thread local storage isn't disabled in this case, the worker thread won't see the token set by the thread that acquired the lock. Our assumption is that these cases aren't common and as such default to using thread local storage. """ if lock_class is None: lock_class = Lock return lock_class(self, name, timeout=timeout, sleep=sleep, blocking_timeout=blocking_timeout, thread_local=thread_local) def pubsub(self, **kwargs): """ Return a Publish/Subscribe object. With this object, you can subscribe to channels and listen for messages that get published to them. """ return PubSub(self.connection_pool, **kwargs) def monitor(self): return Monitor(self.connection_pool) def client(self): return self.__class__(connection_pool=self.connection_pool, single_connection_client=True) def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): self.close() def __del__(self): self.close() def close(self): conn = self.connection if conn: self.connection = None self.connection_pool.release(conn) # COMMAND EXECUTION AND PROTOCOL PARSING def execute_command(self, *args, **options): "Execute a command and return a parsed response" pool = self.connection_pool command_name = args[0] conn = self.connection or pool.get_connection(command_name, **options) try: conn.send_command(*args) return self.parse_response(conn, command_name, **options) except (ConnectionError, TimeoutError) as e: conn.disconnect() if not (conn.retry_on_timeout and isinstance(e, TimeoutError)): raise conn.send_command(*args) return self.parse_response(conn, command_name, **options) finally: if not self.connection: pool.release(conn) def parse_response(self, connection, command_name, **options): "Parses a response from the Redis server" try: response = connection.read_response() except ResponseError: if EMPTY_RESPONSE in options: return options[EMPTY_RESPONSE] raise if command_name in self.response_callbacks: return self.response_callbacks[command_name](response, **options) return response # SERVER INFORMATION # ACL methods def acl_cat(self, category=None): """ Returns a list of categories or commands within a category. If ``category`` is not supplied, returns a list of all categories. If ``category`` is supplied, returns a list of all commands within that category.
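For example (an illustrative sketch, not output from a live server; assumes a Redis 6+ server and a connected client ``r``)::

    r.acl_cat()        # all category names, e.g. ['keyspace', 'read', ...]
    r.acl_cat('read')  # the commands belonging to the 'read' category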
""" pieces = [category] if category else [] return self.execute_command('ACL CAT', *pieces) def acl_deluser(self, username): "Delete the ACL for the specified ``username``" return self.execute_command('ACL DELUSER', username) def acl_genpass(self): "Generate a random password value" return self.execute_command('ACL GENPASS') def acl_getuser(self, username): """ Get the ACL details for the specified ``username``. If ``username`` does not exist, return None """ return self.execute_command('ACL GETUSER', username) def acl_list(self): "Return a list of all ACLs on the server" return self.execute_command('ACL LIST') def acl_load(self): """ Load ACL rules from the configured ``aclfile``. Note that the server must be configured with the ``aclfile`` directive to be able to load ACL rules from an aclfile. """ return self.execute_command('ACL LOAD') def acl_save(self): """ Save ACL rules to the configured ``aclfile``. Note that the server must be configured with the ``aclfile`` directive to be able to save ACL rules to an aclfile. """ return self.execute_command('ACL SAVE') def acl_setuser(self, username, enabled=False, nopass=False, passwords=None, hashed_passwords=None, categories=None, commands=None, keys=None, reset=False, reset_keys=False, reset_passwords=False): """ Create or update an ACL user. Create or update the ACL for ``username``. If the user already exists, the existing ACL is completely overwritten and replaced with the specified values. ``enabled`` is a boolean indicating whether the user should be allowed to authenticate or not. Defaults to ``False``. ``nopass`` is a boolean indicating whether the can authenticate without a password. This cannot be True if ``passwords`` are also specified. ``passwords`` if specified is a list of plain text passwords to add to or remove from the user. Each password must be prefixed with a '+' to add or a '-' to remove. For convenience, the value of ``add_passwords`` can be a simple prefixed string when adding or removing a single password. ``hashed_passwords`` if specified is a list of SHA-256 hashed passwords to add to or remove from the user. Each hashed password must be prefixed with a '+' to add or a '-' to remove. For convenience, the value of ``hashed_passwords`` can be a simple prefixed string when adding or removing a single password. ``categories`` if specified is a list of strings representing category permissions. Each string must be prefixed with either a '+' to add the category permission or a '-' to remove the category permission. ``commands`` if specified is a list of strings representing command permissions. Each string must be prefixed with either a '+' to add the command permission or a '-' to remove the command permission. ``keys`` if specified is a list of key patterns to grant the user access to. Keys patterns allow '*' to support wildcard matching. For example, '*' grants access to all keys while 'cache:*' grants access to all keys that are prefixed with 'cache:'. ``keys`` should not be prefixed with a '~'. ``reset`` is a boolean indicating whether the user should be fully reset prior to applying the new ACL. Setting this to True will remove all existing passwords, flags and privileges from the user and then apply the specified rules. If this is False, the user's existing passwords, flags and privileges will be kept and any new specified rules will be applied on top. ``reset_keys`` is a boolean indicating whether the user's key permissions should be reset prior to applying any new key permissions specified in ``keys``. 
If this is False, the user's existing key permissions will be kept and any new specified key permissions will be applied on top. ``reset_passwords`` is a boolean indicating whether to remove all existing passwords and the 'nopass' flag from the user prior to applying any new passwords specified in 'passwords' or 'hashed_passwords'. If this is False, the user's existing passwords and 'nopass' status will be kept and any new specified passwords or hashed_passwords will be applied on top. """ encoder = self.connection_pool.get_encoder() pieces = [username] if reset: pieces.append(b'reset') if reset_keys: pieces.append(b'resetkeys') if reset_passwords: pieces.append(b'resetpass') if enabled: pieces.append(b'on') else: pieces.append(b'off') if (passwords or hashed_passwords) and nopass: raise DataError('Cannot set \'nopass\' and supply ' '\'passwords\' or \'hashed_passwords\'') if passwords: # as most users will have only one password, allow 'passwords' # to be specified as a simple string or a list passwords = list_or_args(passwords, []) for i, password in enumerate(passwords): password = encoder.encode(password) if password.startswith(b'+'): pieces.append(b'>%s' % password[1:]) elif password.startswith(b'-'): pieces.append(b'<%s' % password[1:]) else: raise DataError('Password %d must be prefixed with a ' '"+" to add or a "-" to remove' % i) if hashed_passwords: # as most users will have only one password, allow 'hashed_passwords' # to be specified as a simple string or a list hashed_passwords = list_or_args(hashed_passwords, []) for i, hashed_password in enumerate(hashed_passwords): hashed_password = encoder.encode(hashed_password) if hashed_password.startswith(b'+'): pieces.append(b'#%s' % hashed_password[1:]) elif hashed_password.startswith(b'-'): pieces.append(b'!%s' % hashed_password[1:]) else: raise DataError('Hashed password %d must be prefixed ' 'with a "+" to add or a "-" to remove' % i) if nopass: pieces.append(b'nopass') if categories: for category in categories: category = encoder.encode(category) # categories can be prefixed with one of (+@, +, -@, -) if category.startswith(b'+@'): pieces.append(category) elif category.startswith(b'+'): pieces.append(b'+@%s' % category[1:]) elif category.startswith(b'-@'): pieces.append(category) elif category.startswith(b'-'): pieces.append(b'-@%s' % category[1:]) else: raise DataError('Category "%s" must be prefixed with ' '"+" or "-"' % encoder.decode(category, force=True)) if commands: for cmd in commands: cmd = encoder.encode(cmd) if not cmd.startswith(b'+') and not cmd.startswith(b'-'): raise DataError('Command "%s" must be prefixed with ' '"+" or "-"' % encoder.decode(cmd, force=True)) pieces.append(cmd) if keys: for key in keys: key = encoder.encode(key) pieces.append(b'~%s' % key) return self.execute_command('ACL SETUSER', *pieces) def acl_users(self): "Returns a list of all registered users on the server." return self.execute_command('ACL USERS') def acl_whoami(self): "Get the username for the current connection" return self.execute_command('ACL WHOAMI') def bgrewriteaof(self): "Tell the Redis server to rewrite the AOF file from data in memory." return self.execute_command('BGREWRITEAOF') def bgsave(self): """ Tell the Redis server to save its data to disk. Unlike save(), this method is asynchronous and returns immediately.
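For example (an illustrative sketch; assumes a connected client ``r``)::

    r.bgsave()  # returns True as soon as the background save is started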
""" return self.execute_command('BGSAVE') def client_kill(self, address): "Disconnects the client at ``address`` (ip:port)" return self.execute_command('CLIENT KILL', address) def client_kill_filter(self, _id=None, _type=None, addr=None, skipme=None): """ Disconnects client(s) using a variety of filter options :param id: Kills a client by its unique ID field :param type: Kills a client by type where type is one of 'normal', 'master', 'slave' or 'pubsub' :param addr: Kills a client by its 'address:port' :param skipme: If True, then the client calling the command will not get killed even if it is identified by one of the filter options. If skipme is not provided, the server defaults to skipme=True """ args = [] if _type is not None: client_types = ('normal', 'master', 'slave', 'pubsub') if str(_type).lower() not in client_types: raise DataError("CLIENT KILL type must be one of %r" % ( client_types,)) args.extend((b'TYPE', _type)) if skipme is not None: if not isinstance(skipme, bool): raise DataError("CLIENT KILL skipme must be a bool") if skipme: args.extend((b'SKIPME', b'YES')) else: args.extend((b'SKIPME', b'NO')) if _id is not None: args.extend((b'ID', _id)) if addr is not None: args.extend((b'ADDR', addr)) if not args: raise DataError("CLIENT KILL ... ... " " must specify at least one filter") return self.execute_command('CLIENT KILL', *args) def client_list(self, _type=None): """ Returns a list of currently connected clients. If type of client specified, only that type will be returned. :param _type: optional. one of the client types (normal, master, replica, pubsub) """ "Returns a list of currently connected clients" if _type is not None: client_types = ('normal', 'master', 'replica', 'pubsub') if str(_type).lower() not in client_types: raise DataError("CLIENT LIST _type must be one of %r" % ( client_types,)) return self.execute_command('CLIENT LIST', b'TYPE', _type) return self.execute_command('CLIENT LIST') def client_getname(self): "Returns the current connection name" return self.execute_command('CLIENT GETNAME') def client_id(self): "Returns the current connection id" return self.execute_command('CLIENT ID') def client_setname(self, name): "Sets the current connection name" return self.execute_command('CLIENT SETNAME', name) def client_unblock(self, client_id, error=False): """ Unblocks a connection by its client id. If ``error`` is True, unblocks the client with a special error message. If ``error`` is False (default), the client is unblocked using the regular timeout mechanism. 
""" args = ['CLIENT UNBLOCK', int(client_id)] if error: args.append(b'ERROR') return self.execute_command(*args) def client_pause(self, timeout): """ Suspend all the Redis clients for the specified amount of time :param timeout: milliseconds to pause clients """ if not isinstance(timeout, (int, long)): raise DataError("CLIENT PAUSE timeout must be an integer") return self.execute_command('CLIENT PAUSE', str(timeout)) def readwrite(self): "Disables read queries for a connection to a Redis Cluster slave node" return self.execute_command('READWRITE') def readonly(self): "Enables read queries for a connection to a Redis Cluster replica node" return self.execute_command('READONLY') def config_get(self, pattern="*"): "Return a dictionary of configuration based on the ``pattern``" return self.execute_command('CONFIG GET', pattern) def config_set(self, name, value): "Set config item ``name`` with ``value``" return self.execute_command('CONFIG SET', name, value) def config_resetstat(self): "Reset runtime statistics" return self.execute_command('CONFIG RESETSTAT') def config_rewrite(self): "Rewrite config file with the minimal change to reflect running config" return self.execute_command('CONFIG REWRITE') def dbsize(self): "Returns the number of keys in the current database" return self.execute_command('DBSIZE') def debug_object(self, key): "Returns version specific meta information about a given key" return self.execute_command('DEBUG OBJECT', key) def echo(self, value): "Echo the string back from the server" return self.execute_command('ECHO', value) def flushall(self, asynchronous=False): """ Delete all keys in all databases on the current host. ``asynchronous`` indicates whether the operation is executed asynchronously by the server. """ args = [] if asynchronous: args.append(b'ASYNC') return self.execute_command('FLUSHALL', *args) def flushdb(self, asynchronous=False): """ Delete all keys in the current database. ``asynchronous`` indicates whether the operation is executed asynchronously by the server. """ args = [] if asynchronous: args.append(b'ASYNC') return self.execute_command('FLUSHDB', *args) def swapdb(self, first, second): "Swap two databases" return self.execute_command('SWAPDB', first, second) def info(self, section=None): """ Returns a dictionary containing information about the Redis server The ``section`` option can be used to select a specific section of information The section option is not supported by older versions of Redis Server, and will generate ResponseError """ if section is None: return self.execute_command('INFO') else: return self.execute_command('INFO', section) def lastsave(self): """ Return a Python datetime object representing the last time the Redis database was saved to disk """ return self.execute_command('LASTSAVE') def migrate(self, host, port, keys, destination_db, timeout, copy=False, replace=False, auth=None): """ Migrate 1 or more keys from the current Redis server to a different server specified by the ``host``, ``port`` and ``destination_db``. The ``timeout``, specified in milliseconds, indicates the maximum time the connection between the two servers can be idle before the command is interrupted. If ``copy`` is True, the specified ``keys`` are NOT deleted from the source server. If ``replace`` is True, this operation will overwrite the keys on the destination server if they exist. If ``auth`` is specified, authenticate to the destination server with the password provided. 
""" keys = list_or_args(keys, []) if not keys: raise DataError('MIGRATE requires at least one key') pieces = [] if copy: pieces.append(b'COPY') if replace: pieces.append(b'REPLACE') if auth: pieces.append(b'AUTH') pieces.append(auth) pieces.append(b'KEYS') pieces.extend(keys) return self.execute_command('MIGRATE', host, port, '', destination_db, timeout, *pieces) def object(self, infotype, key): "Return the encoding, idletime, or refcount about the key" return self.execute_command('OBJECT', infotype, key, infotype=infotype) def memory_stats(self): "Return a dictionary of memory stats" return self.execute_command('MEMORY STATS') def memory_usage(self, key, samples=None): """ Return the total memory usage for key, its value and associated administrative overheads. For nested data structures, ``samples`` is the number of elements to sample. If left unspecified, the server's default is 5. Use 0 to sample all elements. """ args = [] if isinstance(samples, int): args.extend([b'SAMPLES', samples]) return self.execute_command('MEMORY USAGE', key, *args) def memory_purge(self): "Attempts to purge dirty pages for reclamation by allocator" return self.execute_command('MEMORY PURGE') def ping(self): "Ping the Redis server" return self.execute_command('PING') def save(self): """ Tell the Redis server to save its data to disk, blocking until the save is complete """ return self.execute_command('SAVE') def sentinel(self, *args): "Redis Sentinel's SENTINEL command." warnings.warn( DeprecationWarning('Use the individual sentinel_* methods')) def sentinel_get_master_addr_by_name(self, service_name): "Returns a (host, port) pair for the given ``service_name``" return self.execute_command('SENTINEL GET-MASTER-ADDR-BY-NAME', service_name) def sentinel_master(self, service_name): "Returns a dictionary containing the specified masters state." return self.execute_command('SENTINEL MASTER', service_name) def sentinel_masters(self): "Returns a list of dictionaries containing each master's state." return self.execute_command('SENTINEL MASTERS') def sentinel_monitor(self, name, ip, port, quorum): "Add a new master to Sentinel to be monitored" return self.execute_command('SENTINEL MONITOR', name, ip, port, quorum) def sentinel_remove(self, name): "Remove a master from Sentinel's monitoring" return self.execute_command('SENTINEL REMOVE', name) def sentinel_sentinels(self, service_name): "Returns a list of sentinels for ``service_name``" return self.execute_command('SENTINEL SENTINELS', service_name) def sentinel_set(self, name, option, value): "Set Sentinel monitoring parameters for a given master" return self.execute_command('SENTINEL SET', name, option, value) def sentinel_slaves(self, service_name): "Returns a list of slaves for ``service_name``" return self.execute_command('SENTINEL SLAVES', service_name) def shutdown(self, save=False, nosave=False): """Shutdown the Redis server. If Redis has persistence configured, data will be flushed before shutdown. If the "save" option is set, a data flush will be attempted even if there is no persistence configured. If the "nosave" option is set, no data flush will be attempted. The "save" and "nosave" options cannot both be set. 
""" if save and nosave: raise DataError('SHUTDOWN save and nosave cannot both be set') args = ['SHUTDOWN'] if save: args.append('SAVE') if nosave: args.append('NOSAVE') try: self.execute_command(*args) except ConnectionError: # a ConnectionError here is expected return raise RedisError("SHUTDOWN seems to have failed.") def slaveof(self, host=None, port=None): """ Set the server to be a replicated slave of the instance identified by the ``host`` and ``port``. If called without arguments, the instance is promoted to a master instead. """ if host is None and port is None: return self.execute_command('SLAVEOF', b'NO', b'ONE') return self.execute_command('SLAVEOF', host, port) def slowlog_get(self, num=None): """ Get the entries from the slowlog. If ``num`` is specified, get the most recent ``num`` items. """ args = ['SLOWLOG GET'] if num is not None: args.append(num) decode_responses = self.connection_pool.connection_kwargs.get( 'decode_responses', False) return self.execute_command(*args, decode_responses=decode_responses) def slowlog_len(self): "Get the number of items in the slowlog" return self.execute_command('SLOWLOG LEN') def slowlog_reset(self): "Remove all items in the slowlog" return self.execute_command('SLOWLOG RESET') def time(self): """ Returns the server time as a 2-item tuple of ints: (seconds since epoch, microseconds into this second). """ return self.execute_command('TIME') def wait(self, num_replicas, timeout): """ Redis synchronous replication That returns the number of replicas that processed the query when we finally have at least ``num_replicas``, or when the ``timeout`` was reached. """ return self.execute_command('WAIT', num_replicas, timeout) # BASIC KEY COMMANDS def append(self, key, value): """ Appends the string ``value`` to the value at ``key``. If ``key`` doesn't already exist, create it with a value of ``value``. Returns the new length of the value at ``key``. """ return self.execute_command('APPEND', key, value) def bitcount(self, key, start=None, end=None): """ Returns the count of set bits in the value of ``key``. Optional ``start`` and ``end`` paramaters indicate which bytes to consider """ params = [key] if start is not None and end is not None: params.append(start) params.append(end) elif (start is not None and end is None) or \ (end is not None and start is None): raise DataError("Both start and end must be specified") return self.execute_command('BITCOUNT', *params) def bitfield(self, key, default_overflow=None): """ Return a BitFieldOperation instance to conveniently construct one or more bitfield operations on ``key``. """ return BitFieldOperation(self, key, default_overflow=default_overflow) def bitop(self, operation, dest, *keys): """ Perform a bitwise operation using ``operation`` between ``keys`` and store the result in ``dest``. """ return self.execute_command('BITOP', operation, dest, *keys) def bitpos(self, key, bit, start=None, end=None): """ Return the position of the first bit set to 1 or 0 in a string. ``start`` and ``end`` difines search range. The range is interpreted as a range of bytes and not a range of bits, so start=0 and end=2 means to look at the first three bytes. 
""" if bit not in (0, 1): raise DataError('bit must be 0 or 1') params = [key, bit] start is not None and params.append(start) if start is not None and end is not None: params.append(end) elif start is None and end is not None: raise DataError("start argument is not set, " "when end is specified") return self.execute_command('BITPOS', *params) def decr(self, name, amount=1): """ Decrements the value of ``key`` by ``amount``. If no key exists, the value will be initialized as 0 - ``amount`` """ # An alias for ``decr()``, because it is already implemented # as DECRBY redis command. return self.decrby(name, amount) def decrby(self, name, amount=1): """ Decrements the value of ``key`` by ``amount``. If no key exists, the value will be initialized as 0 - ``amount`` """ return self.execute_command('DECRBY', name, amount) def delete(self, *names): "Delete one or more keys specified by ``names``" return self.execute_command('DEL', *names) def __delitem__(self, name): self.delete(name) def dump(self, name): """ Return a serialized version of the value stored at the specified key. If key does not exist a nil bulk reply is returned. """ return self.execute_command('DUMP', name) def exists(self, *names): "Returns the number of ``names`` that exist" return self.execute_command('EXISTS', *names) __contains__ = exists def expire(self, name, time): """ Set an expire flag on key ``name`` for ``time`` seconds. ``time`` can be represented by an integer or a Python timedelta object. """ if isinstance(time, datetime.timedelta): time = int(time.total_seconds()) return self.execute_command('EXPIRE', name, time) def expireat(self, name, when): """ Set an expire flag on key ``name``. ``when`` can be represented as an integer indicating unix time or a Python datetime object. """ if isinstance(when, datetime.datetime): when = int(mod_time.mktime(when.timetuple())) return self.execute_command('EXPIREAT', name, when) def get(self, name): """ Return the value at key ``name``, or None if the key doesn't exist """ return self.execute_command('GET', name) def __getitem__(self, name): """ Return the value at key ``name``, raises a KeyError if the key doesn't exist. """ value = self.get(name) if value is not None: return value raise KeyError(name) def getbit(self, name, offset): "Returns a boolean indicating the value of ``offset`` in ``name``" return self.execute_command('GETBIT', name, offset) def getrange(self, key, start, end): """ Returns the substring of the string value stored at ``key``, determined by the offsets ``start`` and ``end`` (both are inclusive) """ return self.execute_command('GETRANGE', key, start, end) def getset(self, name, value): """ Sets the value at key ``name`` to ``value`` and returns the old value at key ``name`` atomically. """ return self.execute_command('GETSET', name, value) def incr(self, name, amount=1): """ Increments the value of ``key`` by ``amount``. If no key exists, the value will be initialized as ``amount`` """ return self.incrby(name, amount) def incrby(self, name, amount=1): """ Increments the value of ``key`` by ``amount``. If no key exists, the value will be initialized as ``amount`` """ # An alias for ``incr()``, because it is already implemented # as INCRBY redis command. return self.execute_command('INCRBY', name, amount) def incrbyfloat(self, name, amount=1.0): """ Increments the value at key ``name`` by floating ``amount``. 
If no key exists, the value will be initialized as ``amount`` """ return self.execute_command('INCRBYFLOAT', name, amount) def keys(self, pattern='*'): "Returns a list of keys matching ``pattern``" return self.execute_command('KEYS', pattern) def mget(self, keys, *args): """ Returns a list of values ordered identically to ``keys`` """ args = list_or_args(keys, args) options = {} if not args: options[EMPTY_RESPONSE] = [] return self.execute_command('MGET', *args, **options) def mset(self, mapping): """ Sets key/values based on a mapping. Mapping is a dictionary of key/value pairs. Both keys and values should be strings or types that can be cast to a string via str(). """ items = [] for pair in iteritems(mapping): items.extend(pair) return self.execute_command('MSET', *items) def msetnx(self, mapping): """ Sets key/values based on a mapping if none of the keys are already set. Mapping is a dictionary of key/value pairs. Both keys and values should be strings or types that can be cast to a string via str(). Returns a boolean indicating if the operation was successful. """ items = [] for pair in iteritems(mapping): items.extend(pair) return self.execute_command('MSETNX', *items) def move(self, name, db): "Moves the key ``name`` to a different Redis database ``db``" return self.execute_command('MOVE', name, db) def persist(self, name): "Removes an expiration on ``name``" return self.execute_command('PERSIST', name) def pexpire(self, name, time): """ Set an expire flag on key ``name`` for ``time`` milliseconds. ``time`` can be represented by an integer or a Python timedelta object. """ if isinstance(time, datetime.timedelta): time = int(time.total_seconds() * 1000) return self.execute_command('PEXPIRE', name, time) def pexpireat(self, name, when): """ Set an expire flag on key ``name``. ``when`` can be represented as an integer representing unix time in milliseconds (unix time * 1000) or a Python datetime object. """ if isinstance(when, datetime.datetime): ms = int(when.microsecond / 1000) when = int(mod_time.mktime(when.timetuple())) * 1000 + ms return self.execute_command('PEXPIREAT', name, when) def psetex(self, name, time_ms, value): """ Set the value of key ``name`` to ``value`` that expires in ``time_ms`` milliseconds. ``time_ms`` can be represented by an integer or a Python timedelta object """ if isinstance(time_ms, datetime.timedelta): time_ms = int(time_ms.total_seconds() * 1000) return self.execute_command('PSETEX', name, time_ms, value) def pttl(self, name): "Returns the number of milliseconds until the key ``name`` will expire" return self.execute_command('PTTL', name) def randomkey(self): "Returns the name of a random key" return self.execute_command('RANDOMKEY') def rename(self, src, dst): """ Rename key ``src`` to ``dst`` """ return self.execute_command('RENAME', src, dst) def renamenx(self, src, dst): "Rename key ``src`` to ``dst`` if ``dst`` doesn't already exist" return self.execute_command('RENAMENX', src, dst) def restore(self, name, ttl, value, replace=False): """ Create a key using the provided serialized value, previously obtained using DUMP. """ params = [name, ttl, value] if replace: params.append('REPLACE') return self.execute_command('RESTORE', *params) def set(self, name, value, ex=None, px=None, nx=False, xx=False, keepttl=False): """ Set the value at key ``name`` to ``value`` ``ex`` sets an expire flag on key ``name`` for ``ex`` seconds. ``px`` sets an expire flag on key ``name`` for ``px`` milliseconds. 
        ``nx`` if set to True, set the value at key ``name`` to ``value`` only
            if it does not exist.

        ``xx`` if set to True, set the value at key ``name`` to ``value`` only
            if it already exists.

        ``keepttl`` if True, retain the time to live associated with the key.
            (Available since Redis 6.0)
        """
        pieces = [name, value]
        if ex is not None:
            pieces.append('EX')
            if isinstance(ex, datetime.timedelta):
                ex = int(ex.total_seconds())
            pieces.append(ex)
        if px is not None:
            pieces.append('PX')
            if isinstance(px, datetime.timedelta):
                px = int(px.total_seconds() * 1000)
            pieces.append(px)

        if nx:
            pieces.append('NX')
        if xx:
            pieces.append('XX')

        if keepttl:
            pieces.append('KEEPTTL')

        return self.execute_command('SET', *pieces)

    def __setitem__(self, name, value):
        self.set(name, value)

    def setbit(self, name, offset, value):
        """
        Flag the ``offset`` in ``name`` as ``value``. Returns a boolean
        indicating the previous value of ``offset``.
        """
        value = value and 1 or 0
        return self.execute_command('SETBIT', name, offset, value)

    def setex(self, name, time, value):
        """
        Set the value of key ``name`` to ``value`` that expires in ``time``
        seconds. ``time`` can be represented by an integer or a Python
        timedelta object.
        """
        if isinstance(time, datetime.timedelta):
            time = int(time.total_seconds())
        return self.execute_command('SETEX', name, time, value)

    def setnx(self, name, value):
        "Set the value of key ``name`` to ``value`` if key doesn't exist"
        return self.execute_command('SETNX', name, value)

    def setrange(self, name, offset, value):
        """
        Overwrite bytes in the value of ``name`` starting at ``offset`` with
        ``value``. If ``offset`` plus the length of ``value`` exceeds the
        length of the original value, the new value will be larger than
        before. If ``offset`` exceeds the length of the original value, null
        bytes will be used to pad between the end of the previous value and
        the start of what's being injected.

        Returns the length of the new string.
        """
        return self.execute_command('SETRANGE', name, offset, value)

    def strlen(self, name):
        "Return the number of bytes stored in the value of ``name``"
        return self.execute_command('STRLEN', name)

    def substr(self, name, start, end=-1):
        """
        Return a substring of the string at key ``name``. ``start`` and
        ``end`` are 0-based integers specifying the portion of the string to
        return.
        """
        return self.execute_command('SUBSTR', name, start, end)

    def touch(self, *args):
        """
        Alters the last access time of a key(s) ``*args``. A key is ignored
        if it does not exist.
        """
        return self.execute_command('TOUCH', *args)

    def ttl(self, name):
        "Returns the number of seconds until the key ``name`` will expire"
        return self.execute_command('TTL', name)

    def type(self, name):
        "Returns the type of key ``name``"
        return self.execute_command('TYPE', name)

    def watch(self, *names):
        """
        Watches the values at keys ``names``
        """
        warnings.warn(DeprecationWarning('Call WATCH from a Pipeline object'))

    def unwatch(self):
        """
        Unwatches all previously watched keys
        """
        warnings.warn(
            DeprecationWarning('Call UNWATCH from a Pipeline object'))

    def unlink(self, *names):
        "Unlink one or more keys specified by ``names``"
        return self.execute_command('UNLINK', *names)

    # LIST COMMANDS
    def blpop(self, keys, timeout=0):
        """
        LPOP a value off of the first non-empty list
        named in the ``keys`` list.

        If none of the lists in ``keys`` has a value to LPOP, then block
        for ``timeout`` seconds, or until a value gets pushed on to one
        of the lists.

        If timeout is 0, then block indefinitely.
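
        For example (an illustrative sketch; the list name is arbitrary):

            >>> r = redis.Redis()
            >>> r.rpush('tasks', 'job1')
            1
            >>> r.blpop(['tasks'], timeout=5)
            (b'tasks', b'job1')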
""" if timeout is None: timeout = 0 keys = list_or_args(keys, None) keys.append(timeout) return self.execute_command('BLPOP', *keys) def brpop(self, keys, timeout=0): """ RPOP a value off of the first non-empty list named in the ``keys`` list. If none of the lists in ``keys`` has a value to RPOP, then block for ``timeout`` seconds, or until a value gets pushed on to one of the lists. If timeout is 0, then block indefinitely. """ if timeout is None: timeout = 0 keys = list_or_args(keys, None) keys.append(timeout) return self.execute_command('BRPOP', *keys) def brpoplpush(self, src, dst, timeout=0): """ Pop a value off the tail of ``src``, push it on the head of ``dst`` and then return it. This command blocks until a value is in ``src`` or until ``timeout`` seconds elapse, whichever is first. A ``timeout`` value of 0 blocks forever. """ if timeout is None: timeout = 0 return self.execute_command('BRPOPLPUSH', src, dst, timeout) def lindex(self, name, index): """ Return the item from list ``name`` at position ``index`` Negative indexes are supported and will return an item at the end of the list """ return self.execute_command('LINDEX', name, index) def linsert(self, name, where, refvalue, value): """ Insert ``value`` in list ``name`` either immediately before or after [``where``] ``refvalue`` Returns the new length of the list on success or -1 if ``refvalue`` is not in the list. """ return self.execute_command('LINSERT', name, where, refvalue, value) def llen(self, name): "Return the length of the list ``name``" return self.execute_command('LLEN', name) def lpop(self, name): "Remove and return the first item of the list ``name``" return self.execute_command('LPOP', name) def lpush(self, name, *values): "Push ``values`` onto the head of the list ``name``" return self.execute_command('LPUSH', name, *values) def lpushx(self, name, value): "Push ``value`` onto the head of the list ``name`` if ``name`` exists" return self.execute_command('LPUSHX', name, value) def lrange(self, name, start, end): """ Return a slice of the list ``name`` between position ``start`` and ``end`` ``start`` and ``end`` can be negative numbers just like Python slicing notation """ return self.execute_command('LRANGE', name, start, end) def lrem(self, name, count, value): """ Remove the first ``count`` occurrences of elements equal to ``value`` from the list stored at ``name``. The count argument influences the operation in the following ways: count > 0: Remove elements equal to value moving from head to tail. count < 0: Remove elements equal to value moving from tail to head. count = 0: Remove all elements equal to value. """ return self.execute_command('LREM', name, count, value) def lset(self, name, index, value): "Set ``position`` of list ``name`` to ``value``" return self.execute_command('LSET', name, index, value) def ltrim(self, name, start, end): """ Trim the list ``name``, removing all values not within the slice between ``start`` and ``end`` ``start`` and ``end`` can be negative numbers just like Python slicing notation """ return self.execute_command('LTRIM', name, start, end) def rpop(self, name): "Remove and return the last item of the list ``name``" return self.execute_command('RPOP', name) def rpoplpush(self, src, dst): """ RPOP a value off of the ``src`` list and atomically LPUSH it on to the ``dst`` list. Returns the value. 
""" return self.execute_command('RPOPLPUSH', src, dst) def rpush(self, name, *values): "Push ``values`` onto the tail of the list ``name``" return self.execute_command('RPUSH', name, *values) def rpushx(self, name, value): "Push ``value`` onto the tail of the list ``name`` if ``name`` exists" return self.execute_command('RPUSHX', name, value) def sort(self, name, start=None, num=None, by=None, get=None, desc=False, alpha=False, store=None, groups=False): """ Sort and return the list, set or sorted set at ``name``. ``start`` and ``num`` allow for paging through the sorted data ``by`` allows using an external key to weight and sort the items. Use an "*" to indicate where in the key the item value is located ``get`` allows for returning items from external keys rather than the sorted data itself. Use an "*" to indicate where in the key the item value is located ``desc`` allows for reversing the sort ``alpha`` allows for sorting lexicographically rather than numerically ``store`` allows for storing the result of the sort into the key ``store`` ``groups`` if set to True and if ``get`` contains at least two elements, sort will return a list of tuples, each containing the values fetched from the arguments to ``get``. """ if (start is not None and num is None) or \ (num is not None and start is None): raise DataError("``start`` and ``num`` must both be specified") pieces = [name] if by is not None: pieces.append(b'BY') pieces.append(by) if start is not None and num is not None: pieces.append(b'LIMIT') pieces.append(start) pieces.append(num) if get is not None: # If get is a string assume we want to get a single value. # Otherwise assume it's an interable and we want to get multiple # values. We can't just iterate blindly because strings are # iterable. if isinstance(get, (bytes, basestring)): pieces.append(b'GET') pieces.append(get) else: for g in get: pieces.append(b'GET') pieces.append(g) if desc: pieces.append(b'DESC') if alpha: pieces.append(b'ALPHA') if store is not None: pieces.append(b'STORE') pieces.append(store) if groups: if not get or isinstance(get, (bytes, basestring)) or len(get) < 2: raise DataError('when using "groups" the "get" argument ' 'must be specified and contain at least ' 'two keys') options = {'groups': len(get) if groups else None} return self.execute_command('SORT', *pieces, **options) # SCAN COMMANDS def scan(self, cursor=0, match=None, count=None, _type=None): """ Incrementally return lists of key names. Also return a cursor indicating the scan position. ``match`` allows for filtering the keys by pattern ``count`` provides a hint to Redis about the number of keys to return per batch. ``_type`` filters the returned values by a particular Redis type. Stock Redis instances allow for the following types: HASH, LIST, SET, STREAM, STRING, ZSET Additionally, Redis modules can expose other types as well. """ pieces = [cursor] if match is not None: pieces.extend([b'MATCH', match]) if count is not None: pieces.extend([b'COUNT', count]) if _type is not None: pieces.extend([b'TYPE', _type]) return self.execute_command('SCAN', *pieces) def scan_iter(self, match=None, count=None, _type=None): """ Make an iterator using the SCAN command so that the client doesn't need to remember the cursor position. ``match`` allows for filtering the keys by pattern ``count`` provides a hint to Redis about the number of keys to return per batch. ``_type`` filters the returned values by a particular Redis type. 
            Stock Redis instances allow for the following types:
            HASH, LIST, SET, STREAM, STRING, ZSET
            Additionally, Redis modules can expose other types as well.
        """
        cursor = '0'
        while cursor != 0:
            cursor, data = self.scan(cursor=cursor, match=match,
                                     count=count, _type=_type)
            for item in data:
                yield item

    def sscan(self, name, cursor=0, match=None, count=None):
        """
        Incrementally return lists of elements in a set. Also return a
        cursor indicating the scan position.

        ``match`` allows for filtering the keys by pattern

        ``count`` provides a hint to Redis about the number of elements to
            return per batch.
        """
        pieces = [name, cursor]
        if match is not None:
            pieces.extend([b'MATCH', match])
        if count is not None:
            pieces.extend([b'COUNT', count])
        return self.execute_command('SSCAN', *pieces)

    def sscan_iter(self, name, match=None, count=None):
        """
        Make an iterator using the SSCAN command so that the client doesn't
        need to remember the cursor position.

        ``match`` allows for filtering the keys by pattern

        ``count`` provides a hint to Redis about the number of elements to
            return per batch.
        """
        cursor = '0'
        while cursor != 0:
            cursor, data = self.sscan(name, cursor=cursor,
                                      match=match, count=count)
            for item in data:
                yield item

    def hscan(self, name, cursor=0, match=None, count=None):
        """
        Incrementally return key/value slices in a hash. Also return a
        cursor indicating the scan position.

        ``match`` allows for filtering the keys by pattern

        ``count`` provides a hint to Redis about the number of elements to
            return per batch.
        """
        pieces = [name, cursor]
        if match is not None:
            pieces.extend([b'MATCH', match])
        if count is not None:
            pieces.extend([b'COUNT', count])
        return self.execute_command('HSCAN', *pieces)

    def hscan_iter(self, name, match=None, count=None):
        """
        Make an iterator using the HSCAN command so that the client doesn't
        need to remember the cursor position.

        ``match`` allows for filtering the keys by pattern

        ``count`` provides a hint to Redis about the number of elements to
            return per batch.
        """
        cursor = '0'
        while cursor != 0:
            cursor, data = self.hscan(name, cursor=cursor,
                                      match=match, count=count)
            for item in data.items():
                yield item

    def zscan(self, name, cursor=0, match=None, count=None,
              score_cast_func=float):
        """
        Incrementally return lists of elements in a sorted set. Also return
        a cursor indicating the scan position.

        ``match`` allows for filtering the keys by pattern

        ``count`` provides a hint to Redis about the number of elements to
            return per batch.

        ``score_cast_func`` a callable used to cast the score return value
        """
        pieces = [name, cursor]
        if match is not None:
            pieces.extend([b'MATCH', match])
        if count is not None:
            pieces.extend([b'COUNT', count])
        options = {'score_cast_func': score_cast_func}
        return self.execute_command('ZSCAN', *pieces, **options)

    def zscan_iter(self, name, match=None, count=None,
                   score_cast_func=float):
        """
        Make an iterator using the ZSCAN command so that the client doesn't
        need to remember the cursor position.
        ``match`` allows for filtering the keys by pattern

        ``count`` provides a hint to Redis about the number of elements to
            return per batch.

        ``score_cast_func`` a callable used to cast the score return value
        """
        cursor = '0'
        while cursor != 0:
            cursor, data = self.zscan(name, cursor=cursor, match=match,
                                      count=count,
                                      score_cast_func=score_cast_func)
            for item in data:
                yield item

    # SET COMMANDS
    def sadd(self, name, *values):
        "Add ``value(s)`` to set ``name``"
        return self.execute_command('SADD', name, *values)

    def scard(self, name):
        "Return the number of elements in set ``name``"
        return self.execute_command('SCARD', name)

    def sdiff(self, keys, *args):
        "Return the difference of sets specified by ``keys``"
        args = list_or_args(keys, args)
        return self.execute_command('SDIFF', *args)

    def sdiffstore(self, dest, keys, *args):
        """
        Store the difference of sets specified by ``keys`` into a new
        set named ``dest``. Returns the number of keys in the new set.
        """
        args = list_or_args(keys, args)
        return self.execute_command('SDIFFSTORE', dest, *args)

    def sinter(self, keys, *args):
        "Return the intersection of sets specified by ``keys``"
        args = list_or_args(keys, args)
        return self.execute_command('SINTER', *args)

    def sinterstore(self, dest, keys, *args):
        """
        Store the intersection of sets specified by ``keys`` into a new
        set named ``dest``. Returns the number of keys in the new set.
        """
        args = list_or_args(keys, args)
        return self.execute_command('SINTERSTORE', dest, *args)

    def sismember(self, name, value):
        "Return a boolean indicating if ``value`` is a member of set ``name``"
        return self.execute_command('SISMEMBER', name, value)

    def smembers(self, name):
        "Return all members of the set ``name``"
        return self.execute_command('SMEMBERS', name)

    def smove(self, src, dst, value):
        "Move ``value`` from set ``src`` to set ``dst`` atomically"
        return self.execute_command('SMOVE', src, dst, value)

    def spop(self, name, count=None):
        "Remove and return a random member of set ``name``"
        args = (count is not None) and [count] or []
        return self.execute_command('SPOP', name, *args)

    def srandmember(self, name, number=None):
        """
        If ``number`` is None, returns a random member of set ``name``.

        If ``number`` is supplied, returns a list of ``number`` random
        members of set ``name``. Note this is only available when running
        Redis 2.6+.
        """
        args = (number is not None) and [number] or []
        return self.execute_command('SRANDMEMBER', name, *args)

    def srem(self, name, *values):
        "Remove ``values`` from set ``name``"
        return self.execute_command('SREM', name, *values)

    def sunion(self, keys, *args):
        "Return the union of sets specified by ``keys``"
        args = list_or_args(keys, args)
        return self.execute_command('SUNION', *args)

    def sunionstore(self, dest, keys, *args):
        """
        Store the union of sets specified by ``keys`` into a new
        set named ``dest``. Returns the number of keys in the new set.
        """
        args = list_or_args(keys, args)
        return self.execute_command('SUNIONSTORE', dest, *args)

    # STREAMS COMMANDS
    def xack(self, name, groupname, *ids):
        """
        Acknowledges the successful processing of one or more messages.
        name: name of the stream.
        groupname: name of the consumer group.
        *ids: message ids to acknowledge.
        """
        return self.execute_command('XACK', name, groupname, *ids)

    def xadd(self, name, fields, id='*', maxlen=None, approximate=True):
        """
        Add to a stream.
        name: name of the stream
        fields: dict of field/value pairs to insert into the stream
        id: Location to insert this record. By default it is appended.
maxlen: truncate old stream members beyond this size approximate: actual stream length may be slightly more than maxlen """ pieces = [] if maxlen is not None: if not isinstance(maxlen, (int, long)) or maxlen < 1: raise DataError('XADD maxlen must be a positive integer') pieces.append(b'MAXLEN') if approximate: pieces.append(b'~') pieces.append(str(maxlen)) pieces.append(id) if not isinstance(fields, dict) or len(fields) == 0: raise DataError('XADD fields must be a non-empty dict') for pair in iteritems(fields): pieces.extend(pair) return self.execute_command('XADD', name, *pieces) def xclaim(self, name, groupname, consumername, min_idle_time, message_ids, idle=None, time=None, retrycount=None, force=False, justid=False): """ Changes the ownership of a pending message. name: name of the stream. groupname: name of the consumer group. consumername: name of a consumer that claims the message. min_idle_time: filter messages that were idle less than this amount of milliseconds message_ids: non-empty list or tuple of message IDs to claim idle: optional. Set the idle time (last time it was delivered) of the message in ms time: optional integer. This is the same as idle but instead of a relative amount of milliseconds, it sets the idle time to a specific Unix time (in milliseconds). retrycount: optional integer. set the retry counter to the specified value. This counter is incremented every time a message is delivered again. force: optional boolean, false by default. Creates the pending message entry in the PEL even if certain specified IDs are not already in the PEL assigned to a different client. justid: optional boolean, false by default. Return just an array of IDs of messages successfully claimed, without returning the actual message """ if not isinstance(min_idle_time, (int, long)) or min_idle_time < 0: raise DataError("XCLAIM min_idle_time must be a non negative " "integer") if not isinstance(message_ids, (list, tuple)) or not message_ids: raise DataError("XCLAIM message_ids must be a non empty list or " "tuple of message IDs to claim") kwargs = {} pieces = [name, groupname, consumername, str(min_idle_time)] pieces.extend(list(message_ids)) if idle is not None: if not isinstance(idle, (int, long)): raise DataError("XCLAIM idle must be an integer") pieces.extend((b'IDLE', str(idle))) if time is not None: if not isinstance(time, (int, long)): raise DataError("XCLAIM time must be an integer") pieces.extend((b'TIME', str(time))) if retrycount is not None: if not isinstance(retrycount, (int, long)): raise DataError("XCLAIM retrycount must be an integer") pieces.extend((b'RETRYCOUNT', str(retrycount))) if force: if not isinstance(force, bool): raise DataError("XCLAIM force must be a boolean") pieces.append(b'FORCE') if justid: if not isinstance(justid, bool): raise DataError("XCLAIM justid must be a boolean") pieces.append(b'JUSTID') kwargs['parse_justid'] = True return self.execute_command('XCLAIM', *pieces, **kwargs) def xdel(self, name, *ids): """ Deletes one or more messages from a stream. name: name of the stream. *ids: message ids to delete. """ return self.execute_command('XDEL', name, *ids) def xgroup_create(self, name, groupname, id='$', mkstream=False): """ Create a new consumer group associated with a stream. name: name of the stream. groupname: name of the consumer group. id: ID of the last item in the stream to consider already delivered. 
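        mkstream: a boolean indicating whether to create the stream if it
          doesn't already exist.

        For example (an illustrative sketch; the stream and group names are
        arbitrary):

            >>> r = redis.Redis()
            >>> r.xgroup_create('mystream', 'workers', id='$', mkstream=True)
            True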
""" pieces = ['XGROUP CREATE', name, groupname, id] if mkstream: pieces.append(b'MKSTREAM') return self.execute_command(*pieces) def xgroup_delconsumer(self, name, groupname, consumername): """ Remove a specific consumer from a consumer group. Returns the number of pending messages that the consumer had before it was deleted. name: name of the stream. groupname: name of the consumer group. consumername: name of consumer to delete """ return self.execute_command('XGROUP DELCONSUMER', name, groupname, consumername) def xgroup_destroy(self, name, groupname): """ Destroy a consumer group. name: name of the stream. groupname: name of the consumer group. """ return self.execute_command('XGROUP DESTROY', name, groupname) def xgroup_setid(self, name, groupname, id): """ Set the consumer group last delivered ID to something else. name: name of the stream. groupname: name of the consumer group. id: ID of the last item in the stream to consider already delivered. """ return self.execute_command('XGROUP SETID', name, groupname, id) def xinfo_consumers(self, name, groupname): """ Returns general information about the consumers in the group. name: name of the stream. groupname: name of the consumer group. """ return self.execute_command('XINFO CONSUMERS', name, groupname) def xinfo_groups(self, name): """ Returns general information about the consumer groups of the stream. name: name of the stream. """ return self.execute_command('XINFO GROUPS', name) def xinfo_stream(self, name): """ Returns general information about the stream. name: name of the stream. """ return self.execute_command('XINFO STREAM', name) def xlen(self, name): """ Returns the number of elements in a given stream. """ return self.execute_command('XLEN', name) def xpending(self, name, groupname): """ Returns information about pending messages of a group. name: name of the stream. groupname: name of the consumer group. """ return self.execute_command('XPENDING', name, groupname) def xpending_range(self, name, groupname, min, max, count, consumername=None): """ Returns information about pending messages, in a range. name: name of the stream. groupname: name of the consumer group. min: minimum stream ID. max: maximum stream ID. count: number of messages to return consumername: name of a consumer to filter by (optional). """ pieces = [name, groupname] if min is not None or max is not None or count is not None: if min is None or max is None or count is None: raise DataError("XPENDING must be provided with min, max " "and count parameters, or none of them. ") if not isinstance(count, (int, long)) or count < -1: raise DataError("XPENDING count must be a integer >= -1") pieces.extend((min, max, str(count))) if consumername is not None: if min is None or max is None or count is None: raise DataError("if XPENDING is provided with consumername," " it must be provided with min, max and" " count parameters") pieces.append(consumername) return self.execute_command('XPENDING', *pieces, parse_detail=True) def xrange(self, name, min='-', max='+', count=None): """ Read stream values within an interval. name: name of the stream. start: first stream ID. defaults to '-', meaning the earliest available. finish: last stream ID. defaults to '+', meaning the latest available. count: if set, only return this many items, beginning with the earliest available. 
""" pieces = [min, max] if count is not None: if not isinstance(count, (int, long)) or count < 1: raise DataError('XRANGE count must be a positive integer') pieces.append(b'COUNT') pieces.append(str(count)) return self.execute_command('XRANGE', name, *pieces) def xread(self, streams, count=None, block=None): """ Block and monitor multiple streams for new data. streams: a dict of stream names to stream IDs, where IDs indicate the last ID already seen. count: if set, only return this many items, beginning with the earliest available. block: number of milliseconds to wait, if nothing already present. """ pieces = [] if block is not None: if not isinstance(block, (int, long)) or block < 0: raise DataError('XREAD block must be a non-negative integer') pieces.append(b'BLOCK') pieces.append(str(block)) if count is not None: if not isinstance(count, (int, long)) or count < 1: raise DataError('XREAD count must be a positive integer') pieces.append(b'COUNT') pieces.append(str(count)) if not isinstance(streams, dict) or len(streams) == 0: raise DataError('XREAD streams must be a non empty dict') pieces.append(b'STREAMS') keys, values = izip(*iteritems(streams)) pieces.extend(keys) pieces.extend(values) return self.execute_command('XREAD', *pieces) def xreadgroup(self, groupname, consumername, streams, count=None, block=None, noack=False): """ Read from a stream via a consumer group. groupname: name of the consumer group. consumername: name of the requesting consumer. streams: a dict of stream names to stream IDs, where IDs indicate the last ID already seen. count: if set, only return this many items, beginning with the earliest available. block: number of milliseconds to wait, if nothing already present. noack: do not add messages to the PEL """ pieces = [b'GROUP', groupname, consumername] if count is not None: if not isinstance(count, (int, long)) or count < 1: raise DataError("XREADGROUP count must be a positive integer") pieces.append(b'COUNT') pieces.append(str(count)) if block is not None: if not isinstance(block, (int, long)) or block < 0: raise DataError("XREADGROUP block must be a non-negative " "integer") pieces.append(b'BLOCK') pieces.append(str(block)) if noack: pieces.append(b'NOACK') if not isinstance(streams, dict) or len(streams) == 0: raise DataError('XREADGROUP streams must be a non empty dict') pieces.append(b'STREAMS') pieces.extend(streams.keys()) pieces.extend(streams.values()) return self.execute_command('XREADGROUP', *pieces) def xrevrange(self, name, max='+', min='-', count=None): """ Read stream values within an interval, in reverse order. name: name of the stream start: first stream ID. defaults to '+', meaning the latest available. finish: last stream ID. defaults to '-', meaning the earliest available. count: if set, only return this many items, beginning with the latest available. """ pieces = [max, min] if count is not None: if not isinstance(count, (int, long)) or count < 1: raise DataError('XREVRANGE count must be a positive integer') pieces.append(b'COUNT') pieces.append(str(count)) return self.execute_command('XREVRANGE', name, *pieces) def xtrim(self, name, maxlen, approximate=True): """ Trims old messages from a stream. name: name of the stream. 
maxlen: truncate old stream messages beyond this size approximate: actual stream length may be slightly more than maxlen """ pieces = [b'MAXLEN'] if approximate: pieces.append(b'~') pieces.append(maxlen) return self.execute_command('XTRIM', name, *pieces) # SORTED SET COMMANDS def zadd(self, name, mapping, nx=False, xx=False, ch=False, incr=False): """ Set any number of element-name, score pairs to the key ``name``. Pairs are specified as a dict of element-names keys to score values. ``nx`` forces ZADD to only create new elements and not to update scores for elements that already exist. ``xx`` forces ZADD to only update scores of elements that already exist. New elements will not be added. ``ch`` modifies the return value to be the numbers of elements changed. Changed elements include new elements that were added and elements whose scores changed. ``incr`` modifies ZADD to behave like ZINCRBY. In this mode only a single element/score pair can be specified and the score is the amount the existing score will be incremented by. When using this mode the return value of ZADD will be the new score of the element. The return value of ZADD varies based on the mode specified. With no options, ZADD returns the number of new elements added to the sorted set. """ if not mapping: raise DataError("ZADD requires at least one element/score pair") if nx and xx: raise DataError("ZADD allows either 'nx' or 'xx', not both") if incr and len(mapping) != 1: raise DataError("ZADD option 'incr' only works when passing a " "single element/score pair") pieces = [] options = {} if nx: pieces.append(b'NX') if xx: pieces.append(b'XX') if ch: pieces.append(b'CH') if incr: pieces.append(b'INCR') options['as_score'] = True for pair in iteritems(mapping): pieces.append(pair[1]) pieces.append(pair[0]) return self.execute_command('ZADD', name, *pieces, **options) def zcard(self, name): "Return the number of elements in the sorted set ``name``" return self.execute_command('ZCARD', name) def zcount(self, name, min, max): """ Returns the number of elements in the sorted set at key ``name`` with a score between ``min`` and ``max``. """ return self.execute_command('ZCOUNT', name, min, max) def zincrby(self, name, amount, value): "Increment the score of ``value`` in sorted set ``name`` by ``amount``" return self.execute_command('ZINCRBY', name, amount, value) def zinterstore(self, dest, keys, aggregate=None): """ Intersect multiple sorted sets specified by ``keys`` into a new sorted set, ``dest``. Scores in the destination will be aggregated based on the ``aggregate``, or SUM if none is provided. """ return self._zaggregate('ZINTERSTORE', dest, keys, aggregate) def zlexcount(self, name, min, max): """ Return the number of items in the sorted set ``name`` between the lexicographical range ``min`` and ``max``. """ return self.execute_command('ZLEXCOUNT', name, min, max) def zpopmax(self, name, count=None): """ Remove and return up to ``count`` members with the highest scores from the sorted set ``name``. """ args = (count is not None) and [count] or [] options = { 'withscores': True } return self.execute_command('ZPOPMAX', name, *args, **options) def zpopmin(self, name, count=None): """ Remove and return up to ``count`` members with the lowest scores from the sorted set ``name``. 
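
        For example (an illustrative sketch; the key name is arbitrary):

            >>> r = redis.Redis()
            >>> r.zadd('scores', {'a': 1, 'b': 2})
            2
            >>> r.zpopmin('scores')
            [(b'a', 1.0)]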
""" args = (count is not None) and [count] or [] options = { 'withscores': True } return self.execute_command('ZPOPMIN', name, *args, **options) def bzpopmax(self, keys, timeout=0): """ ZPOPMAX a value off of the first non-empty sorted set named in the ``keys`` list. If none of the sorted sets in ``keys`` has a value to ZPOPMAX, then block for ``timeout`` seconds, or until a member gets added to one of the sorted sets. If timeout is 0, then block indefinitely. """ if timeout is None: timeout = 0 keys = list_or_args(keys, None) keys.append(timeout) return self.execute_command('BZPOPMAX', *keys) def bzpopmin(self, keys, timeout=0): """ ZPOPMIN a value off of the first non-empty sorted set named in the ``keys`` list. If none of the sorted sets in ``keys`` has a value to ZPOPMIN, then block for ``timeout`` seconds, or until a member gets added to one of the sorted sets. If timeout is 0, then block indefinitely. """ if timeout is None: timeout = 0 keys = list_or_args(keys, None) keys.append(timeout) return self.execute_command('BZPOPMIN', *keys) def zrange(self, name, start, end, desc=False, withscores=False, score_cast_func=float): """ Return a range of values from sorted set ``name`` between ``start`` and ``end`` sorted in ascending order. ``start`` and ``end`` can be negative, indicating the end of the range. ``desc`` a boolean indicating whether to sort the results descendingly ``withscores`` indicates to return the scores along with the values. The return type is a list of (value, score) pairs ``score_cast_func`` a callable used to cast the score return value """ if desc: return self.zrevrange(name, start, end, withscores, score_cast_func) pieces = ['ZRANGE', name, start, end] if withscores: pieces.append(b'WITHSCORES') options = { 'withscores': withscores, 'score_cast_func': score_cast_func } return self.execute_command(*pieces, **options) def zrangebylex(self, name, min, max, start=None, num=None): """ Return the lexicographical range of values from sorted set ``name`` between ``min`` and ``max``. If ``start`` and ``num`` are specified, then return a slice of the range. """ if (start is not None and num is None) or \ (num is not None and start is None): raise DataError("``start`` and ``num`` must both be specified") pieces = ['ZRANGEBYLEX', name, min, max] if start is not None and num is not None: pieces.extend([b'LIMIT', start, num]) return self.execute_command(*pieces) def zrevrangebylex(self, name, max, min, start=None, num=None): """ Return the reversed lexicographical range of values from sorted set ``name`` between ``max`` and ``min``. If ``start`` and ``num`` are specified, then return a slice of the range. """ if (start is not None and num is None) or \ (num is not None and start is None): raise DataError("``start`` and ``num`` must both be specified") pieces = ['ZREVRANGEBYLEX', name, max, min] if start is not None and num is not None: pieces.extend([b'LIMIT', start, num]) return self.execute_command(*pieces) def zrangebyscore(self, name, min, max, start=None, num=None, withscores=False, score_cast_func=float): """ Return a range of values from the sorted set ``name`` with scores between ``min`` and ``max``. If ``start`` and ``num`` are specified, then return a slice of the range. ``withscores`` indicates to return the scores along with the values. 
            The return type is a list of (value, score) pairs

        ``score_cast_func`` a callable used to cast the score return value
        """
        if (start is not None and num is None) or \
                (num is not None and start is None):
            raise DataError("``start`` and ``num`` must both be specified")
        pieces = ['ZRANGEBYSCORE', name, min, max]
        if start is not None and num is not None:
            pieces.extend([b'LIMIT', start, num])
        if withscores:
            pieces.append(b'WITHSCORES')
        options = {
            'withscores': withscores,
            'score_cast_func': score_cast_func
        }
        return self.execute_command(*pieces, **options)

    def zrank(self, name, value):
        """
        Returns a 0-based value indicating the rank of ``value`` in sorted
        set ``name``
        """
        return self.execute_command('ZRANK', name, value)

    def zrem(self, name, *values):
        "Remove member ``values`` from sorted set ``name``"
        return self.execute_command('ZREM', name, *values)

    def zremrangebylex(self, name, min, max):
        """
        Remove all elements in the sorted set ``name`` between the
        lexicographical range specified by ``min`` and ``max``.

        Returns the number of elements removed.
        """
        return self.execute_command('ZREMRANGEBYLEX', name, min, max)

    def zremrangebyrank(self, name, min, max):
        """
        Remove all elements in the sorted set ``name`` with ranks between
        ``min`` and ``max``. Values are 0-based, ordered from smallest score
        to largest. Values can be negative indicating the highest scores.
        Returns the number of elements removed
        """
        return self.execute_command('ZREMRANGEBYRANK', name, min, max)

    def zremrangebyscore(self, name, min, max):
        """
        Remove all elements in the sorted set ``name`` with scores
        between ``min`` and ``max``. Returns the number of elements removed.
        """
        return self.execute_command('ZREMRANGEBYSCORE', name, min, max)

    def zrevrange(self, name, start, end, withscores=False,
                  score_cast_func=float):
        """
        Return a range of values from sorted set ``name`` between
        ``start`` and ``end`` sorted in descending order.

        ``start`` and ``end`` can be negative, indicating the end of the
        range.

        ``withscores`` indicates to return the scores along with the values
            The return type is a list of (value, score) pairs

        ``score_cast_func`` a callable used to cast the score return value
        """
        pieces = ['ZREVRANGE', name, start, end]
        if withscores:
            pieces.append(b'WITHSCORES')
        options = {
            'withscores': withscores,
            'score_cast_func': score_cast_func
        }
        return self.execute_command(*pieces, **options)

    def zrevrangebyscore(self, name, max, min, start=None, num=None,
                         withscores=False, score_cast_func=float):
        """
        Return a range of values from the sorted set ``name`` with scores
        between ``min`` and ``max`` in descending order.

        If ``start`` and ``num`` are specified, then return
        a slice of the range.

        ``withscores`` indicates to return the scores along with the values.
            The return type is a list of (value, score) pairs

        ``score_cast_func`` a callable used to cast the score return value
        """
        if (start is not None and num is None) or \
                (num is not None and start is None):
            raise DataError("``start`` and ``num`` must both be specified")
        pieces = ['ZREVRANGEBYSCORE', name, max, min]
        if start is not None and num is not None:
            pieces.extend([b'LIMIT', start, num])
        if withscores:
            pieces.append(b'WITHSCORES')
        options = {
            'withscores': withscores,
            'score_cast_func': score_cast_func
        }
        return self.execute_command(*pieces, **options)

    def zrevrank(self, name, value):
        """
        Returns a 0-based value indicating the descending rank of
        ``value`` in sorted set ``name``
        """
        return self.execute_command('ZREVRANK', name, value)

    def zscore(self, name, value):
        "Return the score of element ``value`` in sorted set ``name``"
        return self.execute_command('ZSCORE', name, value)

    def zunionstore(self, dest, keys, aggregate=None):
        """
        Union multiple sorted sets specified by ``keys`` into
        a new sorted set, ``dest``. Scores in the destination will be
        aggregated based on the ``aggregate``, or SUM if none is provided.
        """
        return self._zaggregate('ZUNIONSTORE', dest, keys, aggregate)

    def _zaggregate(self, command, dest, keys, aggregate=None):
        pieces = [command, dest, len(keys)]
        if isinstance(keys, dict):
            keys, weights = iterkeys(keys), itervalues(keys)
        else:
            weights = None
        pieces.extend(keys)
        if weights:
            pieces.append(b'WEIGHTS')
            pieces.extend(weights)
        if aggregate:
            pieces.append(b'AGGREGATE')
            pieces.append(aggregate)
        return self.execute_command(*pieces)

    # HYPERLOGLOG COMMANDS
    def pfadd(self, name, *values):
        "Adds the specified elements to the specified HyperLogLog."
        return self.execute_command('PFADD', name, *values)

    def pfcount(self, *sources):
        """
        Return the approximated cardinality of
        the set observed by the HyperLogLog at key(s).
        """
        return self.execute_command('PFCOUNT', *sources)

    def pfmerge(self, dest, *sources):
        "Merge N different HyperLogLogs into a single one."
        return self.execute_command('PFMERGE', dest, *sources)

    # HASH COMMANDS
    def hdel(self, name, *keys):
        "Delete ``keys`` from hash ``name``"
        return self.execute_command('HDEL', name, *keys)

    def hexists(self, name, key):
        "Returns a boolean indicating if ``key`` exists within hash ``name``"
        return self.execute_command('HEXISTS', name, key)

    def hget(self, name, key):
        "Return the value of ``key`` within the hash ``name``"
        return self.execute_command('HGET', name, key)

    def hgetall(self, name):
        "Return a Python dict of the hash's name/value pairs"
        return self.execute_command('HGETALL', name)

    def hincrby(self, name, key, amount=1):
        "Increment the value of ``key`` in hash ``name`` by ``amount``"
        return self.execute_command('HINCRBY', name, key, amount)

    def hincrbyfloat(self, name, key, amount=1.0):
        """
        Increment the value of ``key`` in hash ``name`` by floating
        ``amount``
        """
        return self.execute_command('HINCRBYFLOAT', name, key, amount)

    def hkeys(self, name):
        "Return the list of keys within hash ``name``"
        return self.execute_command('HKEYS', name)

    def hlen(self, name):
        "Return the number of elements in hash ``name``"
        return self.execute_command('HLEN', name)

    def hset(self, name, key=None, value=None, mapping=None):
        """
        Set ``key`` to ``value`` within hash ``name``,
        ``mapping`` accepts a dict of key/value pairs that will be
        added to hash ``name``.
        Returns the number of fields that were added.
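
        For example (an illustrative sketch; the key and field names are
        arbitrary), the single-pair and ``mapping`` forms may be combined:

            >>> r = redis.Redis()
            >>> r.hset('user:1', 'name', 'Ada')
            1
            >>> r.hset('user:1', mapping={'age': '36', 'city': 'London'})
            2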
""" if key is None and not mapping: raise DataError("'hset' with no key value pairs") items = [] if key is not None: items.extend((key, value)) if mapping: for pair in mapping.items(): items.extend(pair) return self.execute_command('HSET', name, *items) def hsetnx(self, name, key, value): """ Set ``key`` to ``value`` within hash ``name`` if ``key`` does not exist. Returns 1 if HSETNX created a field, otherwise 0. """ return self.execute_command('HSETNX', name, key, value) def hmset(self, name, mapping): """ Set key to value within hash ``name`` for each corresponding key and value from the ``mapping`` dict. """ warnings.warn( '%s.hmset() is deprecated. Use %s.hset() instead.' % (self.__class__.__name__, self.__class__.__name__), DeprecationWarning, stacklevel=2, ) if not mapping: raise DataError("'hmset' with 'mapping' of length 0") items = [] for pair in iteritems(mapping): items.extend(pair) return self.execute_command('HMSET', name, *items) def hmget(self, name, keys, *args): "Returns a list of values ordered identically to ``keys``" args = list_or_args(keys, args) return self.execute_command('HMGET', name, *args) def hvals(self, name): "Return the list of values within hash ``name``" return self.execute_command('HVALS', name) def hstrlen(self, name, key): """ Return the number of bytes stored in the value of ``key`` within hash ``name`` """ return self.execute_command('HSTRLEN', name, key) def publish(self, channel, message): """ Publish ``message`` on ``channel``. Returns the number of subscribers the message was delivered to. """ return self.execute_command('PUBLISH', channel, message) def pubsub_channels(self, pattern='*'): """ Return a list of channels that have at least one subscriber """ return self.execute_command('PUBSUB CHANNELS', pattern) def pubsub_numpat(self): """ Returns the number of subscriptions to patterns """ return self.execute_command('PUBSUB NUMPAT') def pubsub_numsub(self, *args): """ Return a list of (channel, number of subscribers) tuples for each channel given in ``*args`` """ return self.execute_command('PUBSUB NUMSUB', *args) def cluster(self, cluster_arg, *args): return self.execute_command('CLUSTER %s' % cluster_arg.upper(), *args) def eval(self, script, numkeys, *keys_and_args): """ Execute the Lua ``script``, specifying the ``numkeys`` the script will touch and the key names and argument values in ``keys_and_args``. Returns the result of the script. In practice, use the object returned by ``register_script``. This function exists purely for Redis API completion. """ return self.execute_command('EVAL', script, numkeys, *keys_and_args) def evalsha(self, sha, numkeys, *keys_and_args): """ Use the ``sha`` to execute a Lua script already registered via EVAL or SCRIPT LOAD. Specify the ``numkeys`` the script will touch and the key names and argument values in ``keys_and_args``. Returns the result of the script. In practice, use the object returned by ``register_script``. This function exists purely for Redis API completion. """ return self.execute_command('EVALSHA', sha, numkeys, *keys_and_args) def script_exists(self, *args): """ Check if a script exists in the script cache by specifying the SHAs of each script as ``args``. Returns a list of boolean values indicating if if each already script exists in the cache. 
""" return self.execute_command('SCRIPT EXISTS', *args) def script_flush(self): "Flush all scripts from the script cache" return self.execute_command('SCRIPT FLUSH') def script_kill(self): "Kill the currently executing Lua script" return self.execute_command('SCRIPT KILL') def script_load(self, script): "Load a Lua ``script`` into the script cache. Returns the SHA." return self.execute_command('SCRIPT LOAD', script) def register_script(self, script): """ Register a Lua ``script`` specifying the ``keys`` it will touch. Returns a Script object that is callable and hides the complexity of deal with scripts, keys, and shas. This is the preferred way to work with Lua scripts. """ return Script(self, script) # GEO COMMANDS def geoadd(self, name, *values): """ Add the specified geospatial items to the specified key identified by the ``name`` argument. The Geospatial items are given as ordered members of the ``values`` argument, each item or place is formed by the triad longitude, latitude and name. """ if len(values) % 3 != 0: raise DataError("GEOADD requires places with lon, lat and name" " values") return self.execute_command('GEOADD', name, *values) def geodist(self, name, place1, place2, unit=None): """ Return the distance between ``place1`` and ``place2`` members of the ``name`` key. The units must be one of the following : m, km mi, ft. By default meters are used. """ pieces = [name, place1, place2] if unit and unit not in ('m', 'km', 'mi', 'ft'): raise DataError("GEODIST invalid unit") elif unit: pieces.append(unit) return self.execute_command('GEODIST', *pieces) def geohash(self, name, *values): """ Return the geo hash string for each item of ``values`` members of the specified key identified by the ``name`` argument. """ return self.execute_command('GEOHASH', name, *values) def geopos(self, name, *values): """ Return the positions of each item of ``values`` as members of the specified key identified by the ``name`` argument. Each position is represented by the pairs lon and lat. """ return self.execute_command('GEOPOS', name, *values) def georadius(self, name, longitude, latitude, radius, unit=None, withdist=False, withcoord=False, withhash=False, count=None, sort=None, store=None, store_dist=None): """ Return the members of the specified key identified by the ``name`` argument which are within the borders of the area specified with the ``latitude`` and ``longitude`` location and the maximum distance from the center specified by the ``radius`` value. The units must be one of the following : m, km mi, ft. By default ``withdist`` indicates to return the distances of each place. ``withcoord`` indicates to return the latitude and longitude of each place. ``withhash`` indicates to return the geohash string of each place. ``count`` indicates to return the number of elements up to N. ``sort`` indicates to return the places in a sorted way, ASC for nearest to fairest and DESC for fairest to nearest. ``store`` indicates to save the places names in a sorted set named with a specific key, each element of the destination sorted set is populated with the score got from the original geo sorted set. ``store_dist`` indicates to save the places names in a sorted set named with a specific key, instead of ``store`` the sorted set destination score is set with the distance. 
""" return self._georadiusgeneric('GEORADIUS', name, longitude, latitude, radius, unit=unit, withdist=withdist, withcoord=withcoord, withhash=withhash, count=count, sort=sort, store=store, store_dist=store_dist) def georadiusbymember(self, name, member, radius, unit=None, withdist=False, withcoord=False, withhash=False, count=None, sort=None, store=None, store_dist=None): """ This command is exactly like ``georadius`` with the sole difference that instead of taking, as the center of the area to query, a longitude and latitude value, it takes the name of a member already existing inside the geospatial index represented by the sorted set. """ return self._georadiusgeneric('GEORADIUSBYMEMBER', name, member, radius, unit=unit, withdist=withdist, withcoord=withcoord, withhash=withhash, count=count, sort=sort, store=store, store_dist=store_dist) def _georadiusgeneric(self, command, *args, **kwargs): pieces = list(args) if kwargs['unit'] and kwargs['unit'] not in ('m', 'km', 'mi', 'ft'): raise DataError("GEORADIUS invalid unit") elif kwargs['unit']: pieces.append(kwargs['unit']) else: pieces.append('m',) for arg_name, byte_repr in ( ('withdist', b'WITHDIST'), ('withcoord', b'WITHCOORD'), ('withhash', b'WITHHASH')): if kwargs[arg_name]: pieces.append(byte_repr) if kwargs['count']: pieces.extend([b'COUNT', kwargs['count']]) if kwargs['sort']: if kwargs['sort'] == 'ASC': pieces.append(b'ASC') elif kwargs['sort'] == 'DESC': pieces.append(b'DESC') else: raise DataError("GEORADIUS invalid sort") if kwargs['store'] and kwargs['store_dist']: raise DataError("GEORADIUS store and store_dist cant be set" " together") if kwargs['store']: pieces.extend([b'STORE', kwargs['store']]) if kwargs['store_dist']: pieces.extend([b'STOREDIST', kwargs['store_dist']]) return self.execute_command(command, *pieces, **kwargs) StrictRedis = Redis class Monitor(object): """ Monitor is useful for handling the MONITOR command to the redis server. next_command() method returns one command from monitor listen() method yields commands from monitor. """ monitor_re = re.compile(r'\[(\d+) (.*)\] (.*)') command_re = re.compile(r'"(.*?)(? conn.next_health_check: conn.send_command('PING', self.HEALTH_CHECK_MESSAGE, check_health=False) def _normalize_keys(self, data): """ normalize channel/pattern names to be either bytes or strings based on whether responses are automatically decoded. this saves us from coercing the value for each message coming in. """ encode = self.encoder.encode decode = self.encoder.decode return {decode(encode(k)): v for k, v in iteritems(data)} def psubscribe(self, *args, **kwargs): """ Subscribe to channel patterns. Patterns supplied as keyword arguments expect a pattern name as the key and a callable as the value. A pattern's callable will be invoked automatically when a message is received on that pattern rather than producing a message via ``listen()``. """ if args: args = list_or_args(args[0], args[1:]) new_patterns = dict.fromkeys(args) new_patterns.update(kwargs) ret_val = self.execute_command('PSUBSCRIBE', *iterkeys(new_patterns)) # update the patterns dict AFTER we send the command. we don't want to # subscribe twice to these patterns, once for the command and again # for the reconnection. new_patterns = self._normalize_keys(new_patterns) self.patterns.update(new_patterns) self.pending_unsubscribe_patterns.difference_update(new_patterns) return ret_val def punsubscribe(self, *args): """ Unsubscribe from the supplied patterns. If empty, unsubscribe from all patterns. 
""" if args: args = list_or_args(args[0], args[1:]) patterns = self._normalize_keys(dict.fromkeys(args)) else: patterns = self.patterns self.pending_unsubscribe_patterns.update(patterns) return self.execute_command('PUNSUBSCRIBE', *args) def subscribe(self, *args, **kwargs): """ Subscribe to channels. Channels supplied as keyword arguments expect a channel name as the key and a callable as the value. A channel's callable will be invoked automatically when a message is received on that channel rather than producing a message via ``listen()`` or ``get_message()``. """ if args: args = list_or_args(args[0], args[1:]) new_channels = dict.fromkeys(args) new_channels.update(kwargs) ret_val = self.execute_command('SUBSCRIBE', *iterkeys(new_channels)) # update the channels dict AFTER we send the command. we don't want to # subscribe twice to these channels, once for the command and again # for the reconnection. new_channels = self._normalize_keys(new_channels) self.channels.update(new_channels) self.pending_unsubscribe_channels.difference_update(new_channels) return ret_val def unsubscribe(self, *args): """ Unsubscribe from the supplied channels. If empty, unsubscribe from all channels """ if args: args = list_or_args(args[0], args[1:]) channels = self._normalize_keys(dict.fromkeys(args)) else: channels = self.channels self.pending_unsubscribe_channels.update(channels) return self.execute_command('UNSUBSCRIBE', *args) def listen(self): "Listen for messages on channels this client has been subscribed to" while self.subscribed: response = self.handle_message(self.parse_response(block=True)) if response is not None: yield response def get_message(self, ignore_subscribe_messages=False, timeout=0): """ Get the next message if one is available, otherwise None. If timeout is specified, the system will wait for `timeout` seconds before returning. Timeout should be specified as a floating point number. """ response = self.parse_response(block=False, timeout=timeout) if response: return self.handle_message(response, ignore_subscribe_messages) return None def ping(self, message=None): """ Ping the Redis server """ message = '' if message is None else message return self.execute_command('PING', message) def handle_message(self, response, ignore_subscribe_messages=False): """ Parses a pub/sub message. If the channel or pattern was subscribed to with a message handler, the handler is invoked instead of a parsed message being returned. 
""" message_type = nativestr(response[0]) if message_type == 'pmessage': message = { 'type': message_type, 'pattern': response[1], 'channel': response[2], 'data': response[3] } elif message_type == 'pong': message = { 'type': message_type, 'pattern': None, 'channel': None, 'data': response[1] } else: message = { 'type': message_type, 'pattern': None, 'channel': response[1], 'data': response[2] } # if this is an unsubscribe message, remove it from memory if message_type in self.UNSUBSCRIBE_MESSAGE_TYPES: if message_type == 'punsubscribe': pattern = response[1] if pattern in self.pending_unsubscribe_patterns: self.pending_unsubscribe_patterns.remove(pattern) self.patterns.pop(pattern, None) else: channel = response[1] if channel in self.pending_unsubscribe_channels: self.pending_unsubscribe_channels.remove(channel) self.channels.pop(channel, None) if message_type in self.PUBLISH_MESSAGE_TYPES: # if there's a message handler, invoke it if message_type == 'pmessage': handler = self.patterns.get(message['pattern'], None) else: handler = self.channels.get(message['channel'], None) if handler: handler(message) return None elif message_type != 'pong': # this is a subscribe/unsubscribe message. ignore if we don't # want them if ignore_subscribe_messages or self.ignore_subscribe_messages: return None return message def run_in_thread(self, sleep_time=0, daemon=False): for channel, handler in iteritems(self.channels): if handler is None: raise PubSubError("Channel: '%s' has no handler registered" % channel) for pattern, handler in iteritems(self.patterns): if handler is None: raise PubSubError("Pattern: '%s' has no handler registered" % pattern) thread = PubSubWorkerThread(self, sleep_time, daemon=daemon) thread.start() return thread class PubSubWorkerThread(threading.Thread): def __init__(self, pubsub, sleep_time, daemon=False): super(PubSubWorkerThread, self).__init__() self.daemon = daemon self.pubsub = pubsub self.sleep_time = sleep_time self._running = threading.Event() def run(self): if self._running.is_set(): return self._running.set() pubsub = self.pubsub sleep_time = self.sleep_time while self._running.is_set(): pubsub.get_message(ignore_subscribe_messages=True, timeout=sleep_time) pubsub.close() def stop(self): # trip the flag so the run loop exits. the run loop will # close the pubsub connection, which disconnects the socket # and returns the connection to the pool. self._running.clear() class Pipeline(Redis): """ Pipelines provide a way to transmit multiple commands to the Redis server in one transmission. This is convenient for batch processing, such as saving all the values in a list to Redis. All commands executed within a pipeline are wrapped with MULTI and EXEC calls. This guarantees all commands executed in the pipeline will be executed atomically. Any command raising an exception does *not* halt the execution of subsequent commands in the pipeline. Instead, the exception is caught and its instance is placed into the response list returned by execute(). Code iterating over the response list should be able to deal with an instance of an exception as a potential value. In general, these will be ResponseError exceptions, such as those raised when issuing a command on a key of a different datatype. 
""" UNWATCH_COMMANDS = {'DISCARD', 'EXEC', 'UNWATCH'} def __init__(self, connection_pool, response_callbacks, transaction, shard_hint): self.connection_pool = connection_pool self.connection = None self.response_callbacks = response_callbacks self.transaction = transaction self.shard_hint = shard_hint self.watching = False self.reset() def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): self.reset() def __del__(self): try: self.reset() except Exception: pass def __len__(self): return len(self.command_stack) def __nonzero__(self): "Pipeline instances should always evaluate to True on Python 2.7" return True def __bool__(self): "Pipeline instances should always evaluate to True on Python 3+" return True def reset(self): self.command_stack = [] self.scripts = set() # make sure to reset the connection state in the event that we were # watching something if self.watching and self.connection: try: # call this manually since our unwatch or # immediate_execute_command methods can call reset() self.connection.send_command('UNWATCH') self.connection.read_response() except ConnectionError: # disconnect will also remove any previous WATCHes self.connection.disconnect() # clean up the other instance attributes self.watching = False self.explicit_transaction = False # we can safely return the connection to the pool here since we're # sure we're no longer WATCHing anything if self.connection: self.connection_pool.release(self.connection) self.connection = None def multi(self): """ Start a transactional block of the pipeline after WATCH commands are issued. End the transactional block with `execute`. """ if self.explicit_transaction: raise RedisError('Cannot issue nested calls to MULTI') if self.command_stack: raise RedisError('Commands without an initial WATCH have already ' 'been issued') self.explicit_transaction = True def execute_command(self, *args, **kwargs): if (self.watching or args[0] == 'WATCH') and \ not self.explicit_transaction: return self.immediate_execute_command(*args, **kwargs) return self.pipeline_execute_command(*args, **kwargs) def immediate_execute_command(self, *args, **options): """ Execute a command immediately, but don't auto-retry on a ConnectionError if we're already WATCHing a variable. Used when issuing WATCH or subsequent commands retrieving their values but before MULTI is called. """ command_name = args[0] conn = self.connection # if this is the first call, we need a connection if not conn: conn = self.connection_pool.get_connection(command_name, self.shard_hint) self.connection = conn try: conn.send_command(*args) return self.parse_response(conn, command_name, **options) except (ConnectionError, TimeoutError) as e: conn.disconnect() # if we were already watching a variable, the watch is no longer # valid since this connection has died. raise a WatchError, which # indicates the user should retry this transaction. if self.watching: self.reset() raise WatchError("A ConnectionError occurred on while " "watching one or more keys") # if retry_on_timeout is not set, or the error is not # a TimeoutError, raise it if not (conn.retry_on_timeout and isinstance(e, TimeoutError)): self.reset() raise # retry_on_timeout is set, this is a TimeoutError and we are not # already WATCHing any variables. retry the command. 
try: conn.send_command(*args) return self.parse_response(conn, command_name, **options) except (ConnectionError, TimeoutError): # a subsequent failure should simply be raised self.reset() raise def pipeline_execute_command(self, *args, **options): """ Stage a command to be executed when execute() is next called Returns the current Pipeline object back so commands can be chained together, such as: pipe = pipe.set('foo', 'bar').incr('baz').decr('bang') At some other point, you can then run: pipe.execute(), which will execute all commands queued in the pipe. """ self.command_stack.append((args, options)) return self def _execute_transaction(self, connection, commands, raise_on_error): cmds = chain([(('MULTI', ), {})], commands, [(('EXEC', ), {})]) all_cmds = connection.pack_commands([args for args, options in cmds if EMPTY_RESPONSE not in options]) connection.send_packed_command(all_cmds) errors = [] # parse off the response for MULTI # NOTE: we need to handle ResponseErrors here and continue # so that we read all the additional command messages from # the socket try: self.parse_response(connection, '_') except ResponseError as e: errors.append((0, e)) # and all the other commands for i, command in enumerate(commands): if EMPTY_RESPONSE in command[1]: errors.append((i, command[1][EMPTY_RESPONSE])) else: try: self.parse_response(connection, '_') except ResponseError as e: self.annotate_exception(e, i + 1, command[0]) errors.append((i, e)) # parse the EXEC. try: response = self.parse_response(connection, '_') except ExecAbortError: if errors: raise errors[0][1] raise # EXEC clears any watched keys self.watching = False if response is None: raise WatchError("Watched variable changed.") # put any parse errors into the response for i, e in errors: response.insert(i, e) if len(response) != len(commands): self.connection.disconnect() raise ResponseError("Wrong number of response items from " "pipeline execution") # find any errors in the response and raise if necessary if raise_on_error: self.raise_first_error(commands, response) # We have to run response callbacks manually data = [] for r, cmd in izip(response, commands): if not isinstance(r, Exception): args, options = cmd command_name = args[0] if command_name in self.response_callbacks: r = self.response_callbacks[command_name](r, **options) data.append(r) return data def _execute_pipeline(self, connection, commands, raise_on_error): # build up all commands into a single request to increase network perf all_cmds = connection.pack_commands([args for args, _ in commands]) connection.send_packed_command(all_cmds) response = [] for args, options in commands: try: response.append( self.parse_response(connection, args[0], **options)) except ResponseError as e: response.append(e) if raise_on_error: self.raise_first_error(commands, response) return response def raise_first_error(self, commands, response): for i, r in enumerate(response): if isinstance(r, ResponseError): self.annotate_exception(r, i + 1, commands[i][0]) raise r def annotate_exception(self, exception, number, command): cmd = ' '.join(imap(safe_unicode, command)) msg = 'Command # %d (%s) of pipeline caused error: %s' % ( number, cmd, safe_unicode(exception.args[0])) exception.args = (msg,) + exception.args[1:] def parse_response(self, connection, command_name, **options): result = Redis.parse_response( self, connection, command_name, **options) if command_name in self.UNWATCH_COMMANDS: self.watching = False elif command_name == 'WATCH': self.watching = True return result def 
load_scripts(self): # make sure all scripts that are about to be run on this pipeline exist scripts = list(self.scripts) immediate = self.immediate_execute_command shas = [s.sha for s in scripts] # we can't use the normal script_* methods because they would just # get buffered in the pipeline. exists = immediate('SCRIPT EXISTS', *shas) if not all(exists): for s, exist in izip(scripts, exists): if not exist: s.sha = immediate('SCRIPT LOAD', s.script) def execute(self, raise_on_error=True): "Execute all the commands in the current pipeline" stack = self.command_stack if not stack and not self.watching: return [] if self.scripts: self.load_scripts() if self.transaction or self.explicit_transaction: execute = self._execute_transaction else: execute = self._execute_pipeline conn = self.connection if not conn: conn = self.connection_pool.get_connection('MULTI', self.shard_hint) # assign to self.connection so reset() releases the connection # back to the pool after we're done self.connection = conn try: return execute(conn, stack, raise_on_error) except (ConnectionError, TimeoutError) as e: conn.disconnect() # if we were watching a variable, the watch is no longer valid # since this connection has died. raise a WatchError, which # indicates the user should retry this transaction. if self.watching: raise WatchError("A ConnectionError occurred while " "watching one or more keys") # if retry_on_timeout is not set, or the error is not # a TimeoutError, raise it if not (conn.retry_on_timeout and isinstance(e, TimeoutError)): raise # retry a TimeoutError when retry_on_timeout is set return execute(conn, stack, raise_on_error) finally: self.reset() def watch(self, *names): "Watches the values at keys ``names``" if self.explicit_transaction: raise RedisError('Cannot issue a WATCH after a MULTI') return self.execute_command('WATCH', *names) def unwatch(self): "Unwatches all previously specified keys" return self.watching and self.execute_command('UNWATCH') or True class Script(object): "An executable Lua script object returned by ``register_script``" def __init__(self, registered_client, script): self.registered_client = registered_client self.script = script # Precalculate and store the SHA1 hex digest of the script. if isinstance(script, basestring): # We need the encoding from the client in order to generate an # accurate byte representation of the script encoder = registered_client.connection_pool.get_encoder() script = encoder.encode(script) self.sha = hashlib.sha1(script).hexdigest() def __call__(self, keys=[], args=[], client=None): "Execute the script, passing any required ``args``" if client is None: client = self.registered_client args = tuple(keys) + tuple(args) # make sure the Redis server knows about the script if isinstance(client, Pipeline): # Make sure the pipeline can register the script before executing. client.scripts.add(self) try: return client.evalsha(self.sha, len(keys), *args) except NoScriptError: # Maybe the client is pointed to a different server than the one # that created this instance? # Overwrite the sha just in case there was a discrepancy. self.sha = client.script_load(self.script) return client.evalsha(self.sha, len(keys), *args) class BitFieldOperation(object): """ Command builder for BITFIELD commands.
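A typical chained use, assuming ``r`` is a ``Redis`` instance and the
key 'stats' does not yet exist (values illustrative)::

    >>> bf = r.bitfield('stats')
    >>> bf.set('u8', 0, 255).get('u8', 0).execute()
    [0, 255]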
""" def __init__(self, client, key, default_overflow=None): self.client = client self.key = key self._default_overflow = default_overflow self.reset() def reset(self): """ Reset the state of the instance to when it was constructed """ self.operations = [] self._last_overflow = 'WRAP' self.overflow(self._default_overflow or self._last_overflow) def overflow(self, overflow): """ Update the overflow algorithm of successive INCRBY operations :param overflow: Overflow algorithm, one of WRAP, SAT, FAIL. See the Redis docs for descriptions of these algorithmsself. :returns: a :py:class:`BitFieldOperation` instance. """ overflow = overflow.upper() if overflow != self._last_overflow: self._last_overflow = overflow self.operations.append(('OVERFLOW', overflow)) return self def incrby(self, fmt, offset, increment, overflow=None): """ Increment a bitfield by a given amount. :param fmt: format-string for the bitfield being updated, e.g. 'u8' for an unsigned 8-bit integer. :param offset: offset (in number of bits). If prefixed with a '#', this is an offset multiplier, e.g. given the arguments fmt='u8', offset='#2', the offset will be 16. :param int increment: value to increment the bitfield by. :param str overflow: overflow algorithm. Defaults to WRAP, but other acceptable values are SAT and FAIL. See the Redis docs for descriptions of these algorithms. :returns: a :py:class:`BitFieldOperation` instance. """ if overflow is not None: self.overflow(overflow) self.operations.append(('INCRBY', fmt, offset, increment)) return self def get(self, fmt, offset): """ Get the value of a given bitfield. :param fmt: format-string for the bitfield being read, e.g. 'u8' for an unsigned 8-bit integer. :param offset: offset (in number of bits). If prefixed with a '#', this is an offset multiplier, e.g. given the arguments fmt='u8', offset='#2', the offset will be 16. :returns: a :py:class:`BitFieldOperation` instance. """ self.operations.append(('GET', fmt, offset)) return self def set(self, fmt, offset, value): """ Set the value of a given bitfield. :param fmt: format-string for the bitfield being read, e.g. 'u8' for an unsigned 8-bit integer. :param offset: offset (in number of bits). If prefixed with a '#', this is an offset multiplier, e.g. given the arguments fmt='u8', offset='#2', the offset will be 16. :param int value: value to set at the given position. :returns: a :py:class:`BitFieldOperation` instance. """ self.operations.append(('SET', fmt, offset, value)) return self @property def command(self): cmd = ['BITFIELD', self.key] for ops in self.operations: cmd.extend(ops) return cmd def execute(self): """ Execute the operation(s) in a single BITFIELD command. The return value is a list of values corresponding to each operation. If the client used to create this instance was a pipeline, the list of values will be present within the pipeline's execute. 
""" command = self.command self.reset() return self.client.execute_command(*command) redis-py-3.5.3/redis/connection.py000077500000000000000000001552221366526254200171220ustar00rootroot00000000000000from __future__ import unicode_literals from distutils.version import StrictVersion from itertools import chain from time import time import errno import io import os import socket import threading import warnings from redis._compat import (xrange, imap, unicode, long, nativestr, basestring, iteritems, LifoQueue, Empty, Full, urlparse, parse_qs, recv, recv_into, unquote, BlockingIOError, sendall, shutdown, ssl_wrap_socket) from redis.exceptions import ( AuthenticationError, AuthenticationWrongNumberOfArgsError, BusyLoadingError, ChildDeadlockedError, ConnectionError, DataError, ExecAbortError, InvalidResponse, NoPermissionError, NoScriptError, ReadOnlyError, RedisError, ResponseError, TimeoutError, ) from redis.utils import HIREDIS_AVAILABLE try: import ssl ssl_available = True except ImportError: ssl_available = False NONBLOCKING_EXCEPTION_ERROR_NUMBERS = { BlockingIOError: errno.EWOULDBLOCK, } if ssl_available: if hasattr(ssl, 'SSLWantReadError'): NONBLOCKING_EXCEPTION_ERROR_NUMBERS[ssl.SSLWantReadError] = 2 NONBLOCKING_EXCEPTION_ERROR_NUMBERS[ssl.SSLWantWriteError] = 2 else: NONBLOCKING_EXCEPTION_ERROR_NUMBERS[ssl.SSLError] = 2 # In Python 2.7 a socket.error is raised for a nonblocking read. # The _compat module aliases BlockingIOError to socket.error to be # Python 2/3 compatible. # However this means that all socket.error exceptions need to be handled # properly within these exception handlers. # We need to make sure socket.error is included in these handlers and # provide a dummy error number that will never match a real exception. if socket.error not in NONBLOCKING_EXCEPTION_ERROR_NUMBERS: NONBLOCKING_EXCEPTION_ERROR_NUMBERS[socket.error] = -999999 NONBLOCKING_EXCEPTIONS = tuple(NONBLOCKING_EXCEPTION_ERROR_NUMBERS.keys()) if HIREDIS_AVAILABLE: import hiredis hiredis_version = StrictVersion(hiredis.__version__) HIREDIS_SUPPORTS_CALLABLE_ERRORS = \ hiredis_version >= StrictVersion('0.1.3') HIREDIS_SUPPORTS_BYTE_BUFFER = \ hiredis_version >= StrictVersion('0.1.4') HIREDIS_SUPPORTS_ENCODING_ERRORS = \ hiredis_version >= StrictVersion('1.0.0') if not HIREDIS_SUPPORTS_BYTE_BUFFER: msg = ("redis-py works best with hiredis >= 0.1.4. You're running " "hiredis %s. Please consider upgrading." % hiredis.__version__) warnings.warn(msg) HIREDIS_USE_BYTE_BUFFER = True # only use byte buffer if hiredis supports it if not HIREDIS_SUPPORTS_BYTE_BUFFER: HIREDIS_USE_BYTE_BUFFER = False SYM_STAR = b'*' SYM_DOLLAR = b'$' SYM_CRLF = b'\r\n' SYM_EMPTY = b'' SERVER_CLOSED_CONNECTION_ERROR = "Connection closed by server." SENTINEL = object() class Encoder(object): "Encode strings to bytes-like and decode bytes-like to strings" def __init__(self, encoding, encoding_errors, decode_responses): self.encoding = encoding self.encoding_errors = encoding_errors self.decode_responses = decode_responses def encode(self, value): "Return a bytestring or bytes-like representation of the value" if isinstance(value, (bytes, memoryview)): return value elif isinstance(value, bool): # special case bool since it is a subclass of int raise DataError("Invalid input of type: 'bool'. 
Convert to a " "bytes, string, int or float first.") elif isinstance(value, float): value = repr(value).encode() elif isinstance(value, (int, long)): # python 2 repr() on longs is '123L', so use str() instead value = str(value).encode() elif not isinstance(value, basestring): # a value we don't know how to deal with. throw an error typename = type(value).__name__ raise DataError("Invalid input of type: '%s'. Convert to a " "bytes, string, int or float first." % typename) if isinstance(value, unicode): value = value.encode(self.encoding, self.encoding_errors) return value def decode(self, value, force=False): "Return a unicode string from the bytes-like representation" if self.decode_responses or force: if isinstance(value, memoryview): value = value.tobytes() if isinstance(value, bytes): value = value.decode(self.encoding, self.encoding_errors) return value class BaseParser(object): EXCEPTION_CLASSES = { 'ERR': { 'max number of clients reached': ConnectionError, 'Client sent AUTH, but no password is set': AuthenticationError, 'invalid password': AuthenticationError, # some Redis server versions report invalid command syntax # in lowercase 'wrong number of arguments for \'auth\' command': AuthenticationWrongNumberOfArgsError, # some Redis server versions report invalid command syntax # in uppercase 'wrong number of arguments for \'AUTH\' command': AuthenticationWrongNumberOfArgsError, }, 'EXECABORT': ExecAbortError, 'LOADING': BusyLoadingError, 'NOSCRIPT': NoScriptError, 'READONLY': ReadOnlyError, 'NOAUTH': AuthenticationError, 'NOPERM': NoPermissionError, } def parse_error(self, response): "Parse an error response" error_code = response.split(' ')[0] if error_code in self.EXCEPTION_CLASSES: response = response[len(error_code) + 1:] exception_class = self.EXCEPTION_CLASSES[error_code] if isinstance(exception_class, dict): exception_class = exception_class.get(response, ResponseError) return exception_class(response) return ResponseError(response) class SocketBuffer(object): def __init__(self, socket, socket_read_size, socket_timeout): self._sock = socket self.socket_read_size = socket_read_size self.socket_timeout = socket_timeout self._buffer = io.BytesIO() # number of bytes written to the buffer from the socket self.bytes_written = 0 # number of bytes read from the buffer self.bytes_read = 0 @property def length(self): return self.bytes_written - self.bytes_read def _read_from_socket(self, length=None, timeout=SENTINEL, raise_on_timeout=True): sock = self._sock socket_read_size = self.socket_read_size buf = self._buffer buf.seek(self.bytes_written) marker = 0 custom_timeout = timeout is not SENTINEL try: if custom_timeout: sock.settimeout(timeout) while True: data = recv(self._sock, socket_read_size) # an empty string indicates the server shutdown the socket if isinstance(data, bytes) and len(data) == 0: raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR) buf.write(data) data_length = len(data) self.bytes_written += data_length marker += data_length if length is not None and length > marker: continue return True except socket.timeout: if raise_on_timeout: raise TimeoutError("Timeout reading from socket") return False except NONBLOCKING_EXCEPTIONS as ex: # if we're in nonblocking mode and the recv raises a # blocking error, simply return False indicating that # there's no data to be read. otherwise raise the # original exception. 
allowed = NONBLOCKING_EXCEPTION_ERROR_NUMBERS.get(ex.__class__, -1) if not raise_on_timeout and ex.errno == allowed: return False raise ConnectionError("Error while reading from socket: %s" % (ex.args,)) finally: if custom_timeout: sock.settimeout(self.socket_timeout) def can_read(self, timeout): return bool(self.length) or \ self._read_from_socket(timeout=timeout, raise_on_timeout=False) def read(self, length): length = length + 2 # make sure to read the \r\n terminator # make sure we've read enough data from the socket if length > self.length: self._read_from_socket(length - self.length) self._buffer.seek(self.bytes_read) data = self._buffer.read(length) self.bytes_read += len(data) # purge the buffer when we've consumed it all so it doesn't # grow forever if self.bytes_read == self.bytes_written: self.purge() return data[:-2] def readline(self): buf = self._buffer buf.seek(self.bytes_read) data = buf.readline() while not data.endswith(SYM_CRLF): # there's more data in the socket that we need self._read_from_socket() buf.seek(self.bytes_read) data = buf.readline() self.bytes_read += len(data) # purge the buffer when we've consumed it all so it doesn't # grow forever if self.bytes_read == self.bytes_written: self.purge() return data[:-2] def purge(self): self._buffer.seek(0) self._buffer.truncate() self.bytes_written = 0 self.bytes_read = 0 def close(self): try: self.purge() self._buffer.close() except Exception: # issue #633 suggests the purge/close somehow raised a # BadFileDescriptor error. Perhaps the client ran out of # memory or something else? It's probably OK to ignore # any error being raised from purge/close since we're # removing the reference to the instance below. pass self._buffer = None self._sock = None class PythonParser(BaseParser): "Plain Python parsing class" def __init__(self, socket_read_size): self.socket_read_size = socket_read_size self.encoder = None self._sock = None self._buffer = None def __del__(self): try: self.on_disconnect() except Exception: pass def on_connect(self, connection): "Called when the socket connects" self._sock = connection._sock self._buffer = SocketBuffer(self._sock, self.socket_read_size, connection.socket_timeout) self.encoder = connection.encoder def on_disconnect(self): "Called when the socket disconnects" self._sock = None if self._buffer is not None: self._buffer.close() self._buffer = None self.encoder = None def can_read(self, timeout): return self._buffer and self._buffer.can_read(timeout) def read_response(self): raw = self._buffer.readline() if not raw: raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR) byte, response = raw[:1], raw[1:] if byte not in (b'-', b'+', b':', b'$', b'*'): raise InvalidResponse("Protocol Error: %r" % raw) # server returned an error if byte == b'-': response = nativestr(response) error = self.parse_error(response) # if the error is a ConnectionError, raise immediately so the user # is notified if isinstance(error, ConnectionError): raise error # otherwise, we're dealing with a ResponseError that might belong # inside a pipeline response. the connection's read_response() # and/or the pipeline's execute() will raise this error if # necessary, so just return the exception instance here. 
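# For reference, the RESP type bytes handled by read_response() map to
# Python values roughly as follows (illustrative wire examples):
#     "+OK\r\n"            -> 'OK'          (simple string)
#     "-ERR message\r\n"   -> ResponseError (error reply)
#     ":42\r\n"            -> 42            (integer)
#     "$3\r\nfoo\r\n"      -> b'foo'        (bulk string)
#     "*2\r\n:1\r\n:2\r\n" -> [1, 2]        (multi-bulk / array)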
return error # single value elif byte == b'+': pass # int value elif byte == b':': response = long(response) # bulk response elif byte == b'$': length = int(response) if length == -1: return None response = self._buffer.read(length) # multi-bulk response elif byte == b'*': length = int(response) if length == -1: return None response = [self.read_response() for i in xrange(length)] if isinstance(response, bytes): response = self.encoder.decode(response) return response class HiredisParser(BaseParser): "Parser class for connections using Hiredis" def __init__(self, socket_read_size): if not HIREDIS_AVAILABLE: raise RedisError("Hiredis is not installed") self.socket_read_size = socket_read_size if HIREDIS_USE_BYTE_BUFFER: self._buffer = bytearray(socket_read_size) def __del__(self): try: self.on_disconnect() except Exception: pass def on_connect(self, connection): self._sock = connection._sock self._socket_timeout = connection.socket_timeout kwargs = { 'protocolError': InvalidResponse, 'replyError': self.parse_error, } # hiredis < 0.1.3 doesn't support functions that create exceptions if not HIREDIS_SUPPORTS_CALLABLE_ERRORS: kwargs['replyError'] = ResponseError if connection.encoder.decode_responses: kwargs['encoding'] = connection.encoder.encoding if HIREDIS_SUPPORTS_ENCODING_ERRORS: kwargs['errors'] = connection.encoder.encoding_errors self._reader = hiredis.Reader(**kwargs) self._next_response = False def on_disconnect(self): self._sock = None self._reader = None self._next_response = False def can_read(self, timeout): if not self._reader: raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR) if self._next_response is False: self._next_response = self._reader.gets() if self._next_response is False: return self.read_from_socket(timeout=timeout, raise_on_timeout=False) return True def read_from_socket(self, timeout=SENTINEL, raise_on_timeout=True): sock = self._sock custom_timeout = timeout is not SENTINEL try: if custom_timeout: sock.settimeout(timeout) if HIREDIS_USE_BYTE_BUFFER: bufflen = recv_into(self._sock, self._buffer) if bufflen == 0: raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR) self._reader.feed(self._buffer, 0, bufflen) else: buffer = recv(self._sock, self.socket_read_size) # an empty string indicates the server shutdown the socket if not isinstance(buffer, bytes) or len(buffer) == 0: raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR) self._reader.feed(buffer) # data was read from the socket and added to the buffer. # return True to indicate that data was read. return True except socket.timeout: if raise_on_timeout: raise TimeoutError("Timeout reading from socket") return False except NONBLOCKING_EXCEPTIONS as ex: # if we're in nonblocking mode and the recv raises a # blocking error, simply return False indicating that # there's no data to be read. otherwise raise the # original exception. 
allowed = NONBLOCKING_EXCEPTION_ERROR_NUMBERS.get(ex.__class__, -1) if not raise_on_timeout and ex.errno == allowed: return False raise ConnectionError("Error while reading from socket: %s" % (ex.args,)) finally: if custom_timeout: sock.settimeout(self._socket_timeout) def read_response(self): if not self._reader: raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR) # _next_response might be cached from a can_read() call if self._next_response is not False: response = self._next_response self._next_response = False return response response = self._reader.gets() while response is False: self.read_from_socket() response = self._reader.gets() # if an older version of hiredis is installed, we need to attempt # to convert ResponseErrors to their appropriate types. if not HIREDIS_SUPPORTS_CALLABLE_ERRORS: if isinstance(response, ResponseError): response = self.parse_error(response.args[0]) elif isinstance(response, list) and response and \ isinstance(response[0], ResponseError): response[0] = self.parse_error(response[0].args[0]) # if the response is a ConnectionError or the response is a list and # the first item is a ConnectionError, raise it as something bad # happened if isinstance(response, ConnectionError): raise response elif isinstance(response, list) and response and \ isinstance(response[0], ConnectionError): raise response[0] return response if HIREDIS_AVAILABLE: DefaultParser = HiredisParser else: DefaultParser = PythonParser class Connection(object): "Manages TCP communication to and from a Redis server" def __init__(self, host='localhost', port=6379, db=0, password=None, socket_timeout=None, socket_connect_timeout=None, socket_keepalive=False, socket_keepalive_options=None, socket_type=0, retry_on_timeout=False, encoding='utf-8', encoding_errors='strict', decode_responses=False, parser_class=DefaultParser, socket_read_size=65536, health_check_interval=0, client_name=None, username=None): self.pid = os.getpid() self.host = host self.port = int(port) self.db = db self.username = username self.client_name = client_name self.password = password self.socket_timeout = socket_timeout self.socket_connect_timeout = socket_connect_timeout or socket_timeout self.socket_keepalive = socket_keepalive self.socket_keepalive_options = socket_keepalive_options or {} self.socket_type = socket_type self.retry_on_timeout = retry_on_timeout self.health_check_interval = health_check_interval self.next_health_check = 0 self.encoder = Encoder(encoding, encoding_errors, decode_responses) self._sock = None self._parser = parser_class(socket_read_size=socket_read_size) self._connect_callbacks = [] self._buffer_cutoff = 6000 def __repr__(self): repr_args = ','.join(['%s=%s' % (k, v) for k, v in self.repr_pieces()]) return '%s<%s>' % (self.__class__.__name__, repr_args) def repr_pieces(self): pieces = [ ('host', self.host), ('port', self.port), ('db', self.db) ] if self.client_name: pieces.append(('client_name', self.client_name)) return pieces def __del__(self): try: self.disconnect() except Exception: pass def register_connect_callback(self, callback): self._connect_callbacks.append(callback) def clear_connect_callbacks(self): self._connect_callbacks = [] def connect(self): "Connects to the Redis server if not already connected" if self._sock: return try: sock = self._connect() except socket.timeout: raise TimeoutError("Timeout connecting to server") except socket.error as e: raise ConnectionError(self._error_message(e)) self._sock = sock try: self.on_connect() except RedisError: # clean up after any error 
in on_connect self.disconnect() raise # run any user callbacks. right now the only internal callback # is for pubsub channel/pattern resubscription for callback in self._connect_callbacks: callback(self) def _connect(self): "Create a TCP socket connection" # we want to mimic what socket.create_connection does to support # ipv4/ipv6, but we want to set options prior to calling # socket.connect() err = None for res in socket.getaddrinfo(self.host, self.port, self.socket_type, socket.SOCK_STREAM): family, socktype, proto, canonname, socket_address = res sock = None try: sock = socket.socket(family, socktype, proto) # TCP_NODELAY sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) # TCP_KEEPALIVE if self.socket_keepalive: sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1) for k, v in iteritems(self.socket_keepalive_options): sock.setsockopt(socket.IPPROTO_TCP, k, v) # set the socket_connect_timeout before we connect sock.settimeout(self.socket_connect_timeout) # connect sock.connect(socket_address) # set the socket_timeout now that we're connected sock.settimeout(self.socket_timeout) return sock except socket.error as _: err = _ if sock is not None: sock.close() if err is not None: raise err raise socket.error("socket.getaddrinfo returned an empty list") def _error_message(self, exception): # args for socket.error can either be (errno, "message") # or just "message" if len(exception.args) == 1: return "Error connecting to %s:%s. %s." % \ (self.host, self.port, exception.args[0]) else: return "Error %s connecting to %s:%s. %s." % \ (exception.args[0], self.host, self.port, exception.args[1]) def on_connect(self): "Initialize the connection, authenticate and select a database" self._parser.on_connect(self) # if username and/or password are set, authenticate if self.username or self.password: if self.username: auth_args = (self.username, self.password or '') else: auth_args = (self.password,) # avoid checking health here -- PING will fail if we try # to check the health prior to the AUTH self.send_command('AUTH', *auth_args, check_health=False) try: auth_response = self.read_response() except AuthenticationWrongNumberOfArgsError: # a username and password were specified but the Redis # server seems to be < 6.0.0 which expects a single password # arg. retry auth with just the password. 
# https://github.com/andymccurdy/redis-py/issues/1274 self.send_command('AUTH', self.password, check_health=False) auth_response = self.read_response() if nativestr(auth_response) != 'OK': raise AuthenticationError('Invalid Username or Password') # if a client_name is given, set it if self.client_name: self.send_command('CLIENT', 'SETNAME', self.client_name) if nativestr(self.read_response()) != 'OK': raise ConnectionError('Error setting client name') # if a database is specified, switch to it if self.db: self.send_command('SELECT', self.db) if nativestr(self.read_response()) != 'OK': raise ConnectionError('Invalid Database') def disconnect(self): "Disconnects from the Redis server" self._parser.on_disconnect() if self._sock is None: return try: if os.getpid() == self.pid: shutdown(self._sock, socket.SHUT_RDWR) self._sock.close() except socket.error: pass self._sock = None def check_health(self): "Check the health of the connection with a PING/PONG" if self.health_check_interval and time() > self.next_health_check: try: self.send_command('PING', check_health=False) if nativestr(self.read_response()) != 'PONG': raise ConnectionError( 'Bad response from PING health check') except (ConnectionError, TimeoutError): self.disconnect() self.send_command('PING', check_health=False) if nativestr(self.read_response()) != 'PONG': raise ConnectionError( 'Bad response from PING health check') def send_packed_command(self, command, check_health=True): "Send an already packed command to the Redis server" if not self._sock: self.connect() # guard against health check recursion if check_health: self.check_health() try: if isinstance(command, str): command = [command] for item in command: sendall(self._sock, item) except socket.timeout: self.disconnect() raise TimeoutError("Timeout writing to socket") except socket.error as e: self.disconnect() if len(e.args) == 1: errno, errmsg = 'UNKNOWN', e.args[0] else: errno = e.args[0] errmsg = e.args[1] raise ConnectionError("Error %s while writing to socket. %s." % (errno, errmsg)) except BaseException: self.disconnect() raise def send_command(self, *args, **kwargs): "Pack and send a command to the Redis server" self.send_packed_command(self.pack_command(*args), check_health=kwargs.get('check_health', True)) def can_read(self, timeout=0): "Poll the socket to see if there's data that can be read." sock = self._sock if not sock: self.connect() sock = self._sock return self._parser.can_read(timeout) def read_response(self): "Read the response from a previously sent command" try: response = self._parser.read_response() except socket.timeout: self.disconnect() raise TimeoutError("Timeout reading from %s:%s" % (self.host, self.port)) except socket.error as e: self.disconnect() raise ConnectionError("Error while reading from %s:%s : %s" % (self.host, self.port, e.args)) except BaseException: self.disconnect() raise if self.health_check_interval: self.next_health_check = time() + self.health_check_interval if isinstance(response, ResponseError): raise response return response def pack_command(self, *args): "Pack a series of arguments into the Redis protocol" output = [] # the client might have included 1 or more literal arguments in # the command name, e.g., 'CONFIG GET'. The Redis server expects these # arguments to be sent separately, so split the first argument # manually. These arguments should be bytestrings so that they are # not encoded. 
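# For reference, pack_command('SET', 'foo', 'bar') produces a single
# RESP-encoded chunk (illustrative):
#     [b'*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n']
# Values larger than self._buffer_cutoff, and memoryviews, are emitted
# as separate list items so they can be written to the socket without
# being copied into one large buffer.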
if isinstance(args[0], unicode): args = tuple(args[0].encode().split()) + args[1:] elif b' ' in args[0]: args = tuple(args[0].split()) + args[1:] buff = SYM_EMPTY.join((SYM_STAR, str(len(args)).encode(), SYM_CRLF)) buffer_cutoff = self._buffer_cutoff for arg in imap(self.encoder.encode, args): # to avoid large string mallocs, chunk the command into the # output list if we're sending large values or memoryviews arg_length = len(arg) if (len(buff) > buffer_cutoff or arg_length > buffer_cutoff or isinstance(arg, memoryview)): buff = SYM_EMPTY.join( (buff, SYM_DOLLAR, str(arg_length).encode(), SYM_CRLF)) output.append(buff) output.append(arg) buff = SYM_CRLF else: buff = SYM_EMPTY.join( (buff, SYM_DOLLAR, str(arg_length).encode(), SYM_CRLF, arg, SYM_CRLF)) output.append(buff) return output def pack_commands(self, commands): "Pack multiple commands into the Redis protocol" output = [] pieces = [] buffer_length = 0 buffer_cutoff = self._buffer_cutoff for cmd in commands: for chunk in self.pack_command(*cmd): chunklen = len(chunk) if (buffer_length > buffer_cutoff or chunklen > buffer_cutoff or isinstance(chunk, memoryview)): output.append(SYM_EMPTY.join(pieces)) buffer_length = 0 pieces = [] if chunklen > buffer_cutoff or isinstance(chunk, memoryview): output.append(chunk) else: pieces.append(chunk) buffer_length += chunklen if pieces: output.append(SYM_EMPTY.join(pieces)) return output class SSLConnection(Connection): def __init__(self, ssl_keyfile=None, ssl_certfile=None, ssl_cert_reqs='required', ssl_ca_certs=None, ssl_check_hostname=False, **kwargs): if not ssl_available: raise RedisError("Python wasn't built with SSL support") super(SSLConnection, self).__init__(**kwargs) self.keyfile = ssl_keyfile self.certfile = ssl_certfile if ssl_cert_reqs is None: ssl_cert_reqs = ssl.CERT_NONE elif isinstance(ssl_cert_reqs, basestring): CERT_REQS = { 'none': ssl.CERT_NONE, 'optional': ssl.CERT_OPTIONAL, 'required': ssl.CERT_REQUIRED } if ssl_cert_reqs not in CERT_REQS: raise RedisError( "Invalid SSL Certificate Requirements Flag: %s" % ssl_cert_reqs) ssl_cert_reqs = CERT_REQS[ssl_cert_reqs] self.cert_reqs = ssl_cert_reqs self.ca_certs = ssl_ca_certs self.check_hostname = ssl_check_hostname def _connect(self): "Wrap the socket with SSL support" sock = super(SSLConnection, self)._connect() if hasattr(ssl, "create_default_context"): context = ssl.create_default_context() context.check_hostname = self.check_hostname context.verify_mode = self.cert_reqs if self.certfile and self.keyfile: context.load_cert_chain(certfile=self.certfile, keyfile=self.keyfile) if self.ca_certs: context.load_verify_locations(self.ca_certs) sock = ssl_wrap_socket(context, sock, server_hostname=self.host) else: # In case this code runs in a version which is older than 2.7.9, # we want to fall back to old code sock = ssl_wrap_socket(ssl, sock, cert_reqs=self.cert_reqs, keyfile=self.keyfile, certfile=self.certfile, ca_certs=self.ca_certs) return sock class UnixDomainSocketConnection(Connection): def __init__(self, path='', db=0, username=None, password=None, socket_timeout=None, encoding='utf-8', encoding_errors='strict', decode_responses=False, retry_on_timeout=False, parser_class=DefaultParser, socket_read_size=65536, health_check_interval=0, client_name=None): self.pid = os.getpid() self.path = path self.db = db self.username = username self.client_name = client_name self.password = password self.socket_timeout = socket_timeout self.retry_on_timeout = retry_on_timeout self.health_check_interval = health_check_interval 
self.next_health_check = 0 self.encoder = Encoder(encoding, encoding_errors, decode_responses) self._sock = None self._parser = parser_class(socket_read_size=socket_read_size) self._connect_callbacks = [] self._buffer_cutoff = 6000 def repr_pieces(self): pieces = [ ('path', self.path), ('db', self.db), ] if self.client_name: pieces.append(('client_name', self.client_name)) return pieces def _connect(self): "Create a Unix domain socket connection" sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) sock.settimeout(self.socket_timeout) sock.connect(self.path) return sock def _error_message(self, exception): # args for socket.error can either be (errno, "message") # or just "message" if len(exception.args) == 1: return "Error connecting to unix socket: %s. %s." % \ (self.path, exception.args[0]) else: return "Error %s connecting to unix socket: %s. %s." % \ (exception.args[0], self.path, exception.args[1]) FALSE_STRINGS = ('0', 'F', 'FALSE', 'N', 'NO') def to_bool(value): if value is None or value == '': return None if isinstance(value, basestring) and value.upper() in FALSE_STRINGS: return False return bool(value) URL_QUERY_ARGUMENT_PARSERS = { 'socket_timeout': float, 'socket_connect_timeout': float, 'socket_keepalive': to_bool, 'retry_on_timeout': to_bool, 'max_connections': int, 'health_check_interval': int, 'ssl_check_hostname': to_bool, } class ConnectionPool(object): "Generic connection pool" @classmethod def from_url(cls, url, db=None, decode_components=False, **kwargs): """ Return a connection pool configured from the given URL. For example:: redis://[[username]:[password]]@localhost:6379/0 rediss://[[username]:[password]]@localhost:6379/0 unix://[[username]:[password]]@/path/to/socket.sock?db=0 Three URL schemes are supported: - ```redis://`` `_ creates a normal TCP socket connection - ```rediss://`` `_ creates a SSL wrapped TCP socket connection - ``unix://`` creates a Unix Domain Socket connection There are several ways to specify a database number. The parse function will return the first specified option: 1. A ``db`` querystring option, e.g. redis://localhost?db=0 2. If using the redis:// scheme, the path argument of the url, e.g. redis://localhost/0 3. The ``db`` argument to this function. If none of these options are specified, db=0 is used. The ``decode_components`` argument allows this function to work with percent-encoded URLs. If this argument is set to ``True`` all ``%xx`` escapes will be replaced by their single-character equivalents after the URL has been parsed. This only applies to the ``hostname``, ``path``, ``username`` and ``password`` components. Any additional querystring arguments and keyword arguments will be passed along to the ConnectionPool class's initializer. The querystring arguments ``socket_connect_timeout`` and ``socket_timeout`` if supplied are parsed as float values. The arguments ``socket_keepalive`` and ``retry_on_timeout`` are parsed to boolean values that accept True/False, Yes/No values to indicate state. Invalid types cause a ``UserWarning`` to be raised. In the case of conflicting arguments, querystring arguments always win. """ url = urlparse(url) url_options = {} for name, value in iteritems(parse_qs(url.query)): if value and len(value) > 0: parser = URL_QUERY_ARGUMENT_PARSERS.get(name) if parser: try: url_options[name] = parser(value[0]) except (TypeError, ValueError): warnings.warn(UserWarning( "Invalid value for `%s` in connection URL." 
% name )) else: url_options[name] = value[0] if decode_components: username = unquote(url.username) if url.username else None password = unquote(url.password) if url.password else None path = unquote(url.path) if url.path else None hostname = unquote(url.hostname) if url.hostname else None else: username = url.username or None password = url.password or None path = url.path hostname = url.hostname # We only support redis://, rediss:// and unix:// schemes. if url.scheme == 'unix': url_options.update({ 'username': username, 'password': password, 'path': path, 'connection_class': UnixDomainSocketConnection, }) elif url.scheme in ('redis', 'rediss'): url_options.update({ 'host': hostname, 'port': int(url.port or 6379), 'username': username, 'password': password, }) # If there's a path argument, use it as the db argument if a # querystring value wasn't specified if 'db' not in url_options and path: try: url_options['db'] = int(path.replace('/', '')) except (AttributeError, ValueError): pass if url.scheme == 'rediss': url_options['connection_class'] = SSLConnection else: valid_schemes = ', '.join(('redis://', 'rediss://', 'unix://')) raise ValueError('Redis URL must specify one of the following ' 'schemes (%s)' % valid_schemes) # last shot at the db value url_options['db'] = int(url_options.get('db', db or 0)) # update the arguments from the URL values kwargs.update(url_options) # backwards compatibility if 'charset' in kwargs: warnings.warn(DeprecationWarning( '"charset" is deprecated. Use "encoding" instead')) kwargs['encoding'] = kwargs.pop('charset') if 'errors' in kwargs: warnings.warn(DeprecationWarning( '"errors" is deprecated. Use "encoding_errors" instead')) kwargs['encoding_errors'] = kwargs.pop('errors') return cls(**kwargs) def __init__(self, connection_class=Connection, max_connections=None, **connection_kwargs): """ Create a connection pool. If max_connections is set, then this object raises redis.ConnectionError when the pool's limit is reached. By default, TCP connections are created unless connection_class is specified. Use redis.UnixDomainSocketConnection for unix sockets. Any additional keyword arguments are passed to the constructor of connection_class. """ max_connections = max_connections or 2 ** 31 if not isinstance(max_connections, (int, long)) or max_connections < 0: raise ValueError('"max_connections" must be a positive integer') self.connection_class = connection_class self.connection_kwargs = connection_kwargs self.max_connections = max_connections # a lock to protect the critical section in _checkpid(). # this lock is acquired when the process id changes, such as # after a fork. during this time, multiple threads in the child # process could attempt to acquire this lock. the first thread # to acquire the lock will reset the data structures and lock # object of this pool. subsequent threads acquiring this lock # will notice the first thread already did the work and simply # release the lock. self._fork_lock = threading.Lock() self.reset() def __repr__(self): return "%s<%s>" % ( type(self).__name__, repr(self.connection_class(**self.connection_kwargs)), ) def reset(self): self._lock = threading.Lock() self._created_connections = 0 self._available_connections = [] self._in_use_connections = set() # this must be the last operation in this method. while reset() is # called when holding _fork_lock, other threads in this process # can call _checkpid() which compares self.pid and os.getpid() without # holding any lock (for performance reasons).
keeping this assignment # as the last operation ensures that those other threads will also # notice a pid difference and block waiting for the first thread to # release _fork_lock. when each of these threads eventually acquire # _fork_lock, they will notice that another thread already called # reset() and they will immediately release _fork_lock and continue on. self.pid = os.getpid() def _checkpid(self): # _checkpid() attempts to keep ConnectionPool fork-safe on modern # systems. this is called by all ConnectionPool methods that # manipulate the pool's state such as get_connection() and release(). # # _checkpid() determines whether the process has forked by comparing # the current process id to the process id saved on the ConnectionPool # instance. if these values are the same, _checkpid() simply returns. # # when the process ids differ, _checkpid() assumes that the process # has forked and that we're now running in the child process. the child # process cannot use the parent's file descriptors (e.g., sockets). # therefore, when _checkpid() sees the process id change, it calls # reset() in order to reinitialize the child's ConnectionPool. this # will cause the child to make all new connection objects. # # _checkpid() is protected by self._fork_lock to ensure that multiple # threads in the child process do not call reset() multiple times. # # there is an extremely small chance this could fail in the following # scenario: # 1. process A calls _checkpid() for the first time and acquires # self._fork_lock. # 2. while holding self._fork_lock, process A forks (the fork() # could happen in a different thread owned by process A) # 3. process B (the forked child process) inherits the # ConnectionPool's state from the parent. that state includes # a locked _fork_lock. process B will not be notified when # process A releases the _fork_lock and will thus never be # able to acquire the _fork_lock. # # to mitigate this possible deadlock, _checkpid() will only wait 5 # seconds to acquire _fork_lock. if _fork_lock cannot be acquired in # that time it is assumed that the child is deadlocked and a # redis.ChildDeadlockedError error is raised. if self.pid != os.getpid(): # python 2.7 doesn't support a timeout option to lock.acquire() # we have to mimic lock timeouts ourselves. timeout_at = time() + 5 acquired = False while time() < timeout_at: acquired = self._fork_lock.acquire(False) if acquired: break if not acquired: raise ChildDeadlockedError # reset() the instance for the new process if another thread # hasn't already done so try: if self.pid != os.getpid(): self.reset() finally: self._fork_lock.release() def get_connection(self, command_name, *keys, **options): "Get a connection from the pool" self._checkpid() with self._lock: try: connection = self._available_connections.pop() except IndexError: connection = self.make_connection() self._in_use_connections.add(connection) try: # ensure this connection is connected to Redis connection.connect() # connections that the pool provides should be ready to send # a command. if not, the connection was either returned to the # pool before all data has been read or the socket has been # closed. either way, reconnect and verify everything is good. 
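# Illustrative scenario this check guards against (hypothetical calls):
#     conn = pool.get_connection('GET')
#     conn.send_command('GET', 'foo')
#     pool.release(conn)                 # released before read_response()
#     conn = pool.get_connection('GET')  # same socket handed out again
# Without the can_read() check below, the stale reply to the first GET
# would be misread as the response to the next command sent on this
# connection.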
try: if connection.can_read(): raise ConnectionError('Connection has data') except ConnectionError: connection.disconnect() connection.connect() if connection.can_read(): raise ConnectionError('Connection not ready') except BaseException: # release the connection back to the pool so that we don't # leak it self.release(connection) raise return connection def get_encoder(self): "Return an encoder based on encoding settings" kwargs = self.connection_kwargs return Encoder( encoding=kwargs.get('encoding', 'utf-8'), encoding_errors=kwargs.get('encoding_errors', 'strict'), decode_responses=kwargs.get('decode_responses', False) ) def make_connection(self): "Create a new connection" if self._created_connections >= self.max_connections: raise ConnectionError("Too many connections") self._created_connections += 1 return self.connection_class(**self.connection_kwargs) def release(self, connection): "Releases the connection back to the pool" self._checkpid() with self._lock: try: self._in_use_connections.remove(connection) except KeyError: # Gracefully fail when a connection is returned to this pool # that the pool doesn't actually own pass if self.owns_connection(connection): self._available_connections.append(connection) else: # pool doesn't own this connection. do not add it back # to the pool and decrement the count so that another # connection can take its place if needed self._created_connections -= 1 connection.disconnect() return def owns_connection(self, connection): return connection.pid == self.pid def disconnect(self, inuse_connections=True): """ Disconnects connections in the pool If ``inuse_connections`` is True, disconnect connections that are currently in use, potentially by other threads. Otherwise only disconnect connections that are idle in the pool. """ self._checkpid() with self._lock: if inuse_connections: connections = chain(self._available_connections, self._in_use_connections) else: connections = self._available_connections for connection in connections: connection.disconnect() class BlockingConnectionPool(ConnectionPool): """ Thread-safe blocking connection pool:: >>> from redis.client import Redis >>> client = Redis(connection_pool=BlockingConnectionPool()) It performs the same function as the default :py:class:`~redis.connection.ConnectionPool` implementation, in that it maintains a pool of reusable connections that can be shared by multiple redis clients (safely across threads if required). The difference is that, in the event that a client tries to get a connection from the pool when all of the connections are in use, rather than raising a :py:class:`~redis.exceptions.ConnectionError` (as the default :py:class:`~redis.connection.ConnectionPool` implementation does), it makes the client wait ("blocks") for a specified number of seconds until a connection becomes available. Use ``max_connections`` to increase / decrease the pool size:: >>> pool = BlockingConnectionPool(max_connections=10) Use ``timeout`` to tell it either how many seconds to wait for a connection to become available, or to block forever:: # Block forever. >>> pool = BlockingConnectionPool(timeout=None) # Raise a ``ConnectionError`` after five seconds if a connection is # not available.
>>> pool = BlockingConnectionPool(timeout=5) """ def __init__(self, max_connections=50, timeout=20, connection_class=Connection, queue_class=LifoQueue, **connection_kwargs): self.queue_class = queue_class self.timeout = timeout super(BlockingConnectionPool, self).__init__( connection_class=connection_class, max_connections=max_connections, **connection_kwargs) def reset(self): # Create and fill up a thread safe queue with ``None`` values. self.pool = self.queue_class(self.max_connections) while True: try: self.pool.put_nowait(None) except Full: break # Keep a list of actual connection instances so that we can # disconnect them later. self._connections = [] # this must be the last operation in this method. while reset() is # called when holding _fork_lock, other threads in this process # can call _checkpid() which compares self.pid and os.getpid() without # holding any lock (for performance reasons). keeping this assignment # as the last operation ensures that those other threads will also # notice a pid difference and block waiting for the first thread to # release _fork_lock. when each of these threads eventually acquire # _fork_lock, they will notice that another thread already called # reset() and they will immediately release _fork_lock and continue on. self.pid = os.getpid() def make_connection(self): "Make a fresh connection." connection = self.connection_class(**self.connection_kwargs) self._connections.append(connection) return connection def get_connection(self, command_name, *keys, **options): """ Get a connection, blocking for ``self.timeout`` until a connection is available from the pool. If the connection returned is ``None`` then a new connection is created. Because we use a last-in first-out queue, the existing connections (having been returned to the pool after the initial ``None`` values were added) will be returned before ``None`` values. This means we only create new connections when we need to, i.e. the actual number of connections will only increase in response to demand. """ # Make sure we haven't changed process. self._checkpid() # Try and get a connection from the pool. If one isn't available within # self.timeout then raise a ``ConnectionError``. connection = None try: connection = self.pool.get(block=True, timeout=self.timeout) except Empty: # Note that this is not caught by the redis client and will be # raised unless handled by application code. If you want never to # block waiting for a connection, use the default ConnectionPool, # which fails immediately when exhausted. raise ConnectionError("No connection available.") # If the ``connection`` is actually ``None`` then that's a cue to make # a new connection to add to the pool. if connection is None: connection = self.make_connection() try: # ensure this connection is connected to Redis connection.connect() # connections that the pool provides should be ready to send # a command. if not, the connection was either returned to the # pool before all data has been read or the socket has been # closed. either way, reconnect and verify everything is good. try: if connection.can_read(): raise ConnectionError('Connection has data') except ConnectionError: connection.disconnect() connection.connect() if connection.can_read(): raise ConnectionError('Connection not ready') except BaseException: # release the connection back to the pool so that we don't leak it self.release(connection) raise return connection def release(self, connection): "Releases the connection back to the pool." # Make sure we haven't changed process. self._checkpid() if not self.owns_connection(connection): # pool doesn't own this connection.
do not add it back # to the pool. instead add a None value which is a placeholder # that will cause the pool to recreate the connection if # it's needed. connection.disconnect() self.pool.put_nowait(None) return # Put the connection back into the pool. try: self.pool.put_nowait(connection) except Full: # perhaps the pool has been reset() after a fork? regardless, # we don't want this connection pass def disconnect(self): "Disconnects all connections in the pool." self._checkpid() for connection in self._connections: connection.disconnect() redis-py-3.5.3/redis/exceptions.py000066400000000000000000000024751366526254200171400ustar00rootroot00000000000000"Core exceptions raised by the Redis client" class RedisError(Exception): pass class ConnectionError(RedisError): pass class TimeoutError(RedisError): pass class AuthenticationError(ConnectionError): pass class BusyLoadingError(ConnectionError): pass class InvalidResponse(RedisError): pass class ResponseError(RedisError): pass class DataError(RedisError): pass class PubSubError(RedisError): pass class WatchError(RedisError): pass class NoScriptError(ResponseError): pass class ExecAbortError(ResponseError): pass class ReadOnlyError(ResponseError): pass class NoPermissionError(ResponseError): pass class LockError(RedisError, ValueError): "Errors acquiring or releasing a lock" # NOTE: For backwards compatibility, this class derives from ValueError. # This was originally chosen to behave like threading.Lock. pass class LockNotOwnedError(LockError): "Error trying to extend or release a lock that is (no longer) owned" pass class ChildDeadlockedError(Exception): "Error indicating that a child process is deadlocked after a fork()" pass class AuthenticationWrongNumberOfArgsError(ResponseError): """ An error to indicate that the wrong number of args were sent to the AUTH command """ pass redis-py-3.5.3/redis/lock.py000066400000000000000000000261251366526254200157060ustar00rootroot00000000000000import threading import time as mod_time import uuid from redis.exceptions import LockError, LockNotOwnedError from redis.utils import dummy class Lock(object): """ A shared, distributed Lock. Using Redis for locking allows the Lock to be shared across processes and/or machines. It's left to the user to resolve deadlock issues and make sure multiple clients play nicely together.
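A typical use, assuming ``r`` is a ``Redis`` instance::

    with r.lock('my-lock', timeout=10, blocking_timeout=5):
        pass  # critical section; the lock auto-expires after 10 seconds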
""" lua_release = None lua_extend = None lua_reacquire = None # KEYS[1] - lock name # ARGV[1] - token # return 1 if the lock was released, otherwise 0 LUA_RELEASE_SCRIPT = """ local token = redis.call('get', KEYS[1]) if not token or token ~= ARGV[1] then return 0 end redis.call('del', KEYS[1]) return 1 """ # KEYS[1] - lock name # ARGV[1] - token # ARGV[2] - additional milliseconds # ARGV[3] - "0" if the additional time should be added to the lock's # existing ttl or "1" if the existing ttl should be replaced # return 1 if the locks time was extended, otherwise 0 LUA_EXTEND_SCRIPT = """ local token = redis.call('get', KEYS[1]) if not token or token ~= ARGV[1] then return 0 end local expiration = redis.call('pttl', KEYS[1]) if not expiration then expiration = 0 end if expiration < 0 then return 0 end local newttl = ARGV[2] if ARGV[3] == "0" then newttl = ARGV[2] + expiration end redis.call('pexpire', KEYS[1], newttl) return 1 """ # KEYS[1] - lock name # ARGV[1] - token # ARGV[2] - milliseconds # return 1 if the locks time was reacquired, otherwise 0 LUA_REACQUIRE_SCRIPT = """ local token = redis.call('get', KEYS[1]) if not token or token ~= ARGV[1] then return 0 end redis.call('pexpire', KEYS[1], ARGV[2]) return 1 """ def __init__(self, redis, name, timeout=None, sleep=0.1, blocking=True, blocking_timeout=None, thread_local=True): """ Create a new Lock instance named ``name`` using the Redis client supplied by ``redis``. ``timeout`` indicates a maximum life for the lock. By default, it will remain locked until release() is called. ``timeout`` can be specified as a float or integer, both representing the number of seconds to wait. ``sleep`` indicates the amount of time to sleep per loop iteration when the lock is in blocking mode and another client is currently holding the lock. ``blocking`` indicates whether calling ``acquire`` should block until the lock has been acquired or to fail immediately, causing ``acquire`` to return False and the lock not being acquired. Defaults to True. Note this value can be overridden by passing a ``blocking`` argument to ``acquire``. ``blocking_timeout`` indicates the maximum amount of time in seconds to spend trying to acquire the lock. A value of ``None`` indicates continue trying forever. ``blocking_timeout`` can be specified as a float or integer, both representing the number of seconds to wait. ``thread_local`` indicates whether the lock token is placed in thread-local storage. By default, the token is placed in thread local storage so that a thread only sees its token, not a token set by another thread. Consider the following timeline: time: 0, thread-1 acquires `my-lock`, with a timeout of 5 seconds. thread-1 sets the token to "abc" time: 1, thread-2 blocks trying to acquire `my-lock` using the Lock instance. time: 5, thread-1 has not yet completed. redis expires the lock key. time: 5, thread-2 acquired `my-lock` now that it's available. thread-2 sets the token to "xyz" time: 6, thread-1 finishes its work and calls release(). if the token is *not* stored in thread local storage, then thread-1 would see the token value as "xyz" and would be able to successfully release the thread-2's lock. In some use cases it's necessary to disable thread local storage. For example, if you have code where one thread acquires a lock and passes that lock instance to a worker thread to release later. If thread local storage isn't disabled in this case, the worker thread won't see the token set by the thread that acquired the lock. 
Our assumption is that these cases aren't common and as such default to using thread local storage. """ self.redis = redis self.name = name self.timeout = timeout self.sleep = sleep self.blocking = blocking self.blocking_timeout = blocking_timeout self.thread_local = bool(thread_local) self.local = threading.local() if self.thread_local else dummy() self.local.token = None self.register_scripts() def register_scripts(self): cls = self.__class__ client = self.redis if cls.lua_release is None: cls.lua_release = client.register_script(cls.LUA_RELEASE_SCRIPT) if cls.lua_extend is None: cls.lua_extend = client.register_script(cls.LUA_EXTEND_SCRIPT) if cls.lua_reacquire is None: cls.lua_reacquire = \ client.register_script(cls.LUA_REACQUIRE_SCRIPT) def __enter__(self): # force blocking, as otherwise the user would have to check whether # the lock was actually acquired or not. if self.acquire(blocking=True): return self raise LockError("Unable to acquire lock within the time specified") def __exit__(self, exc_type, exc_value, traceback): self.release() def acquire(self, blocking=None, blocking_timeout=None, token=None): """ Use Redis to hold a shared, distributed lock named ``name``. Returns True once the lock is acquired. If ``blocking`` is False, always return immediately. If the lock was acquired, return True, otherwise return False. ``blocking_timeout`` specifies the maximum number of seconds to wait trying to acquire the lock. ``token`` specifies the token value to be used. If provided, token must be a bytes object or a string that can be encoded to a bytes object with the default encoding. If a token isn't specified, a UUID will be generated. """ sleep = self.sleep if token is None: token = uuid.uuid1().hex.encode() else: encoder = self.redis.connection_pool.get_encoder() token = encoder.encode(token) if blocking is None: blocking = self.blocking if blocking_timeout is None: blocking_timeout = self.blocking_timeout stop_trying_at = None if blocking_timeout is not None: stop_trying_at = mod_time.time() + blocking_timeout while True: if self.do_acquire(token): self.local.token = token return True if not blocking: return False next_try_at = mod_time.time() + sleep if stop_trying_at is not None and next_try_at > stop_trying_at: return False mod_time.sleep(sleep) def do_acquire(self, token): if self.timeout: # convert to milliseconds timeout = int(self.timeout * 1000) else: timeout = None if self.redis.set(self.name, token, nx=True, px=timeout): return True return False def locked(self): """ Returns True if this key is locked by any process, otherwise False. """ return self.redis.get(self.name) is not None def owned(self): """ Returns True if this key is locked by this lock, otherwise False. 
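For example (illustrative; assumes a connected client ``r``):

>>> lock = r.lock('resource-key')
>>> lock.acquire()
True
>>> lock.owned()
True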
""" stored_token = self.redis.get(self.name) # need to always compare bytes to bytes # TODO: this can be simplified when the context manager is finished if stored_token and not isinstance(stored_token, bytes): encoder = self.redis.connection_pool.get_encoder() stored_token = encoder.encode(stored_token) return self.local.token is not None and \ stored_token == self.local.token def release(self): "Releases the already acquired lock" expected_token = self.local.token if expected_token is None: raise LockError("Cannot release an unlocked lock") self.local.token = None self.do_release(expected_token) def do_release(self, expected_token): if not bool(self.lua_release(keys=[self.name], args=[expected_token], client=self.redis)): raise LockNotOwnedError("Cannot release a lock" " that's no longer owned") def extend(self, additional_time, replace_ttl=False): """ Adds more time to an already acquired lock. ``additional_time`` can be specified as an integer or a float, both representing the number of seconds to add. ``replace_ttl`` if False (the default), add `additional_time` to the lock's existing ttl. If True, replace the lock's ttl with `additional_time`. """ if self.local.token is None: raise LockError("Cannot extend an unlocked lock") if self.timeout is None: raise LockError("Cannot extend a lock with no timeout") return self.do_extend(additional_time, replace_ttl) def do_extend(self, additional_time, replace_ttl): additional_time = int(additional_time * 1000) if not bool( self.lua_extend( keys=[self.name], args=[ self.local.token, additional_time, replace_ttl and "1" or "0" ], client=self.redis, ) ): raise LockNotOwnedError( "Cannot extend a lock that's" " no longer owned" ) return True def reacquire(self): """ Resets a TTL of an already acquired lock back to a timeout value. 
""" if self.local.token is None: raise LockError("Cannot reacquire an unlocked lock") if self.timeout is None: raise LockError("Cannot reacquire a lock with no timeout") return self.do_reacquire() def do_reacquire(self): timeout = int(self.timeout * 1000) if not bool(self.lua_reacquire(keys=[self.name], args=[self.local.token, timeout], client=self.redis)): raise LockNotOwnedError("Cannot reacquire a lock that's" " no longer owned") return True redis-py-3.5.3/redis/sentinel.py000066400000000000000000000266761366526254200166130ustar00rootroot00000000000000import random import weakref from redis.client import Redis from redis.connection import ConnectionPool, Connection from redis.exceptions import (ConnectionError, ResponseError, ReadOnlyError, TimeoutError) from redis._compat import iteritems, nativestr, xrange class MasterNotFoundError(ConnectionError): pass class SlaveNotFoundError(ConnectionError): pass class SentinelManagedConnection(Connection): def __init__(self, **kwargs): self.connection_pool = kwargs.pop('connection_pool') super(SentinelManagedConnection, self).__init__(**kwargs) def __repr__(self): pool = self.connection_pool s = '%s' % (type(self).__name__, pool.service_name) if self.host: host_info = ',host=%s,port=%s' % (self.host, self.port) s = s % host_info return s def connect_to(self, address): self.host, self.port = address super(SentinelManagedConnection, self).connect() if self.connection_pool.check_connection: self.send_command('PING') if nativestr(self.read_response()) != 'PONG': raise ConnectionError('PING failed') def connect(self): if self._sock: return # already connected if self.connection_pool.is_master: self.connect_to(self.connection_pool.get_master_address()) else: for slave in self.connection_pool.rotate_slaves(): try: return self.connect_to(slave) except ConnectionError: continue raise SlaveNotFoundError # Never be here def read_response(self): try: return super(SentinelManagedConnection, self).read_response() except ReadOnlyError: if self.connection_pool.is_master: # When talking to a master, a ReadOnlyError when likely # indicates that the previous master that we're still connected # to has been demoted to a slave and there's a new master. # calling disconnect will force the connection to re-query # sentinel during the next connect() attempt. self.disconnect() raise ConnectionError('The previous master is now a slave') raise class SentinelConnectionPool(ConnectionPool): """ Sentinel backed connection pool. If ``check_connection`` flag is set to True, SentinelManagedConnection sends a PING command right after establishing the connection. """ def __init__(self, service_name, sentinel_manager, **kwargs): kwargs['connection_class'] = kwargs.get( 'connection_class', SentinelManagedConnection) self.is_master = kwargs.pop('is_master', True) self.check_connection = kwargs.pop('check_connection', False) super(SentinelConnectionPool, self).__init__(**kwargs) self.connection_kwargs['connection_pool'] = weakref.proxy(self) self.service_name = service_name self.sentinel_manager = sentinel_manager def __repr__(self): return "%s>> from redis.sentinel import Sentinel >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1) >>> master = sentinel.master_for('mymaster', socket_timeout=0.1) >>> master.set('foo', 'bar') >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1) >>> slave.get('foo') b'bar' ``sentinels`` is a list of sentinel nodes. Each node is represented by a pair (hostname, port). 
``min_other_sentinels`` defines the minimum number of peers for a sentinel. When querying a sentinel, if it doesn't meet this threshold, responses from that sentinel won't be considered valid. ``sentinel_kwargs`` is a dictionary of connection arguments used when connecting to sentinel instances. Any argument that can be passed to a normal Redis connection can be specified here. If ``sentinel_kwargs`` is not specified, any socket_timeout and socket_keepalive options specified in ``connection_kwargs`` will be used. ``connection_kwargs`` are keyword arguments that will be used when establishing a connection to a Redis server. """ def __init__(self, sentinels, min_other_sentinels=0, sentinel_kwargs=None, **connection_kwargs): # if sentinel_kwargs isn't defined, use the socket_* options from # connection_kwargs if sentinel_kwargs is None: sentinel_kwargs = { k: v for k, v in iteritems(connection_kwargs) if k.startswith('socket_') } self.sentinel_kwargs = sentinel_kwargs self.sentinels = [Redis(hostname, port, **self.sentinel_kwargs) for hostname, port in sentinels] self.min_other_sentinels = min_other_sentinels self.connection_kwargs = connection_kwargs def __repr__(self): sentinel_addresses = [] for sentinel in self.sentinels: sentinel_addresses.append('%s:%s' % ( sentinel.connection_pool.connection_kwargs['host'], sentinel.connection_pool.connection_kwargs['port'], )) return '%s<sentinels=[%s]>' % ( type(self).__name__, ','.join(sentinel_addresses)) def check_master_state(self, state, service_name): if not state['is_master'] or state['is_sdown'] or state['is_odown']: return False # Check if our sentinel doesn't see other nodes if state['num-other-sentinels'] < self.min_other_sentinels: return False return True def discover_master(self, service_name): """ Asks sentinel servers for the Redis master's address corresponding to the service labeled ``service_name``. Returns a pair (address, port) or raises MasterNotFoundError if no master is found. """ for sentinel_no, sentinel in enumerate(self.sentinels): try: masters = sentinel.sentinel_masters() except (ConnectionError, TimeoutError): continue state = masters.get(service_name) if state and self.check_master_state(state, service_name): # Put this sentinel at the top of the list self.sentinels[0], self.sentinels[sentinel_no] = ( sentinel, self.sentinels[0]) return state['ip'], state['port'] raise MasterNotFoundError("No master found for %r" % (service_name,)) def filter_slaves(self, slaves): "Remove slaves that are in an ODOWN or SDOWN state" slaves_alive = [] for slave in slaves: if slave['is_odown'] or slave['is_sdown']: continue slaves_alive.append((slave['ip'], slave['port'])) return slaves_alive def discover_slaves(self, service_name): "Returns a list of alive slaves for service ``service_name``" for sentinel in self.sentinels: try: slaves = sentinel.sentinel_slaves(service_name) except (ConnectionError, ResponseError, TimeoutError): continue slaves = self.filter_slaves(slaves) if slaves: return slaves return [] def master_for(self, service_name, redis_class=Redis, connection_pool_class=SentinelConnectionPool, **kwargs): """ Returns a redis client instance for the ``service_name`` master. A SentinelConnectionPool class is used to retrieve the master's address before establishing a new connection. NOTE: If the master's address has changed, any cached connections to the old master are closed. By default the returned client will be a redis.Redis instance. Specify a different class to the ``redis_class`` argument if you desire something different.
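For example (illustrative; extra keyword arguments flow through to the pooled connections):

>>> master = sentinel.master_for('mymaster', db=1, socket_timeout=0.5)
>>> master.ping()
True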
The ``connection_pool_class`` specifies the connection pool to use. The SentinelConnectionPool will be used by default. All other keyword arguments are merged with any connection_kwargs passed to this class and passed to the connection pool as keyword arguments to be used to initialize Redis connections. """ kwargs['is_master'] = True connection_kwargs = dict(self.connection_kwargs) connection_kwargs.update(kwargs) return redis_class(connection_pool=connection_pool_class( service_name, self, **connection_kwargs)) def slave_for(self, service_name, redis_class=Redis, connection_pool_class=SentinelConnectionPool, **kwargs): """ Returns a redis client instance for the ``service_name`` slave(s). A SentinelConnectionPool class is used to retrieve the slave's address before establishing a new connection. By default the returned client will be a redis.Redis instance. Specify a different class to the ``redis_class`` argument if you desire something different. The ``connection_pool_class`` specifies the connection pool to use. The SentinelConnectionPool will be used by default. All other keyword arguments are merged with any connection_kwargs passed to this class and passed to the connection pool as keyword arguments to be used to initialize Redis connections. """ kwargs['is_master'] = False connection_kwargs = dict(self.connection_kwargs) connection_kwargs.update(kwargs) return redis_class(connection_pool=connection_pool_class( service_name, self, **connection_kwargs)) redis-py-3.5.3/redis/utils.py000066400000000000000000000012421366526254200161100ustar00rootroot00000000000000from contextlib import contextmanager try: import hiredis # noqa HIREDIS_AVAILABLE = True except ImportError: HIREDIS_AVAILABLE = False def from_url(url, db=None, **kwargs): """ Returns an active Redis client generated from the given database URL. Will attempt to extract the database id from the URL path if none is provided. """ from redis.client import Redis return Redis.from_url(url, db, **kwargs) @contextmanager def pipeline(redis_obj): p = redis_obj.pipeline() yield p p.execute() class dummy(object): """ Instances of this class can be used as an attribute container.
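For example (illustrative):

>>> holder = dummy()
>>> holder.token = b'abc'
>>> holder.token
b'abc'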
""" pass redis-py-3.5.3/setup.cfg000066400000000000000000000023541366526254200151160ustar00rootroot00000000000000[metadata] name = redis version = attr: redis.__version__ description = Python client for Redis key-value store long_description = file: README.rst url = https://github.com/andymccurdy/redis-py author = Andy McCurdy author_email = sedrik@gmail.com maintainer = Andy McCurdy maintainer_email = sedrik@gmail.com keywords = Redis, key-value store license = MIT classifiers = Development Status :: 5 - Production/Stable Environment :: Console Intended Audience :: Developers License :: OSI Approved :: MIT License Operating System :: OS Independent Programming Language :: Python Programming Language :: Python :: 2 Programming Language :: Python :: 2.7 Programming Language :: Python :: 3 Programming Language :: Python :: 3.5 Programming Language :: Python :: 3.6 Programming Language :: Python :: 3.7 Programming Language :: Python :: 3.8 Programming Language :: Python :: Implementation :: CPython Programming Language :: Python :: Implementation :: PyPy [options] packages = redis python_requires = >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.* [options.extras_require] hiredis = hiredis>=0.1.3 [flake8] exclude = .venv,.tox,dist,docs,build,*.egg,redis_install [bdist_wheel] universal = 1 redis-py-3.5.3/setup.py000066400000000000000000000000741366526254200150040ustar00rootroot00000000000000#!/usr/bin/env python from setuptools import setup setup() redis-py-3.5.3/tests/000077500000000000000000000000001366526254200144335ustar00rootroot00000000000000redis-py-3.5.3/tests/__init__.py000066400000000000000000000000001366526254200165320ustar00rootroot00000000000000redis-py-3.5.3/tests/conftest.py000066400000000000000000000137201366526254200166350ustar00rootroot00000000000000import random import pytest import redis from mock import Mock from distutils.version import StrictVersion # redis 6 release candidates report a version number of 5.9.x. 
Use this # constant for skip_if decorators as a placeholder until 6.0.0 is officially # released REDIS_6_VERSION = '5.9.0' REDIS_INFO = {} default_redis_url = "redis://localhost:6379/9" def pytest_addoption(parser): parser.addoption('--redis-url', default=default_redis_url, action="store", help="Redis connection string," " defaults to `%(default)s`") def _get_info(redis_url): client = redis.Redis.from_url(redis_url) info = client.info() client.connection_pool.disconnect() return info def pytest_sessionstart(session): redis_url = session.config.getoption("--redis-url") info = _get_info(redis_url) version = info["redis_version"] arch_bits = info["arch_bits"] REDIS_INFO["version"] = version REDIS_INFO["arch_bits"] = arch_bits def skip_if_server_version_lt(min_version): redis_version = REDIS_INFO["version"] check = StrictVersion(redis_version) < StrictVersion(min_version) return pytest.mark.skipif( check, reason="Redis version required >= {}".format(min_version)) def skip_if_server_version_gte(min_version): redis_version = REDIS_INFO["version"] check = StrictVersion(redis_version) >= StrictVersion(min_version) return pytest.mark.skipif( check, reason="Redis version required < {}".format(min_version)) def skip_unless_arch_bits(arch_bits): return pytest.mark.skipif(REDIS_INFO["arch_bits"] != arch_bits, reason="server is not {}-bit".format(arch_bits)) def _get_client(cls, request, single_connection_client=True, **kwargs): redis_url = request.config.getoption("--redis-url") client = cls.from_url(redis_url, **kwargs) if single_connection_client: client = client.client() if request: def teardown(): try: client.flushdb() except redis.ConnectionError: # handle cases where a test disconnected a client # just manually retry the flushdb client.flushdb() client.close() client.connection_pool.disconnect() request.addfinalizer(teardown) return client @pytest.fixture() def r(request): with _get_client(redis.Redis, request) as client: yield client @pytest.fixture() def r2(request): "A second client for tests that need multiple" with _get_client(redis.Redis, request) as client: yield client def _gen_cluster_mock_resp(r, response): connection = Mock() connection.read_response.return_value = response r.connection = connection return r @pytest.fixture() def mock_cluster_resp_ok(request, **kwargs): r = _get_client(redis.Redis, request, **kwargs) return _gen_cluster_mock_resp(r, 'OK') @pytest.fixture() def mock_cluster_resp_int(request, **kwargs): r = _get_client(redis.Redis, request, **kwargs) return _gen_cluster_mock_resp(r, '2') @pytest.fixture() def mock_cluster_resp_info(request, **kwargs): r = _get_client(redis.Redis, request, **kwargs) response = ('cluster_state:ok\r\ncluster_slots_assigned:16384\r\n' 'cluster_slots_ok:16384\r\ncluster_slots_pfail:0\r\n' 'cluster_slots_fail:0\r\ncluster_known_nodes:7\r\n' 'cluster_size:3\r\ncluster_current_epoch:7\r\n' 'cluster_my_epoch:2\r\ncluster_stats_messages_sent:170262\r\n' 'cluster_stats_messages_received:105653\r\n') return _gen_cluster_mock_resp(r, response) @pytest.fixture() def mock_cluster_resp_nodes(request, **kwargs): r = _get_client(redis.Redis, request, **kwargs) response = ('c8253bae761cb1ecb2b61857d85dfe455a0fec8b 172.17.0.7:7006 ' 'slave aa90da731f673a99617dfe930306549a09f83a6b 0 ' '1447836263059 5 connected\n' '9bd595fe4821a0e8d6b99d70faa660638a7612b3 172.17.0.7:7008 ' 'master - 0 1447836264065 0 connected\n' 'aa90da731f673a99617dfe930306549a09f83a6b 172.17.0.7:7003 ' 'myself,master - 0 0 2 connected 5461-10922\n' 
'1df047e5a594f945d82fc140be97a1452bcbf93e 172.17.0.7:7007 ' 'slave 19efe5a631f3296fdf21a5441680f893e8cc96ec 0 ' '1447836262556 3 connected\n' '4ad9a12e63e8f0207025eeba2354bcf4c85e5b22 172.17.0.7:7005 ' 'master - 0 1447836262555 7 connected 0-5460\n' '19efe5a631f3296fdf21a5441680f893e8cc96ec 172.17.0.7:7004 ' 'master - 0 1447836263562 3 connected 10923-16383\n' 'fbb23ed8cfa23f17eaf27ff7d0c410492a1093d6 172.17.0.7:7002 ' 'master,fail - 1447829446956 1447829444948 1 disconnected\n' ) return _gen_cluster_mock_resp(r, response) @pytest.fixture() def mock_cluster_resp_slaves(request, **kwargs): r = _get_client(redis.Redis, request, **kwargs) response = ("['1df047e5a594f945d82fc140be97a1452bcbf93e 172.17.0.7:7007 " "slave 19efe5a631f3296fdf21a5441680f893e8cc96ec 0 " "1447836789290 3 connected']") return _gen_cluster_mock_resp(r, response) def wait_for_command(client, monitor, command): # issue a command with a key name that's local to this process. # if we find a command with our key before the command we're waiting # for, something went wrong redis_version = REDIS_INFO["version"] if StrictVersion(redis_version) >= StrictVersion('5.0.0'): id_str = str(client.client_id()) else: id_str = '%08x' % random.randrange(2**32) key = '__REDIS-PY-%s__' % id_str client.get(key) while True: monitor_response = monitor.next_command() if command in monitor_response['command']: return monitor_response if key in monitor_response['command']: return None redis-py-3.5.3/tests/test_commands.py000066400000000000000000003001751366526254200176530ustar00rootroot00000000000000from __future__ import unicode_literals import binascii import datetime import pytest import re import redis import time from redis._compat import (unichr, ascii_letters, iteritems, iterkeys, itervalues, long, basestring) from redis.client import parse_info from redis import exceptions from .conftest import (skip_if_server_version_lt, skip_if_server_version_gte, skip_unless_arch_bits, REDIS_6_VERSION) @pytest.fixture() def slowlog(request, r): current_config = r.config_get() old_slower_than_value = current_config['slowlog-log-slower-than'] old_max_legnth_value = current_config['slowlog-max-len'] def cleanup(): r.config_set('slowlog-log-slower-than', old_slower_than_value) r.config_set('slowlog-max-len', old_max_legnth_value) request.addfinalizer(cleanup) r.config_set('slowlog-log-slower-than', 0) r.config_set('slowlog-max-len', 128) def redis_server_time(client): seconds, milliseconds = client.time() timestamp = float('%s.%s' % (seconds, milliseconds)) return datetime.datetime.fromtimestamp(timestamp) def get_stream_message(client, stream, message_id): "Fetch a stream message and format it as a (message_id, fields) pair" response = client.xrange(stream, min=message_id, max=message_id) assert len(response) == 1 return response[0] # RESPONSE CALLBACKS class TestResponseCallbacks(object): "Tests for the response callback system" def test_response_callbacks(self, r): assert r.response_callbacks == redis.Redis.RESPONSE_CALLBACKS assert id(r.response_callbacks) != id(redis.Redis.RESPONSE_CALLBACKS) r.set_response_callback('GET', lambda x: 'static') r['a'] = 'foo' assert r['a'] == 'static' def test_case_insensitive_command_names(self, r): assert r.response_callbacks['del'] == r.response_callbacks['DEL'] class TestRedisCommands(object): def test_command_on_invalid_key_type(self, r): r.lpush('a', '1') with pytest.raises(redis.ResponseError): r['a'] # SERVER INFORMATION @skip_if_server_version_lt(REDIS_6_VERSION) def test_acl_cat_no_category(self, r): 
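# ACL CAT with no arguments lists every command category; 'read' is one
# of the standard Redis 6 categories, so it should always be present.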
categories = r.acl_cat() assert isinstance(categories, list) assert 'read' in categories @skip_if_server_version_lt(REDIS_6_VERSION) def test_acl_cat_with_category(self, r): commands = r.acl_cat('read') assert isinstance(commands, list) assert 'get' in commands @skip_if_server_version_lt(REDIS_6_VERSION) def test_acl_deluser(self, r, request): username = 'redis-py-user' def teardown(): r.acl_deluser(username) request.addfinalizer(teardown) assert r.acl_deluser(username) == 0 assert r.acl_setuser(username, enabled=False, reset=True) assert r.acl_deluser(username) == 1 @skip_if_server_version_lt(REDIS_6_VERSION) def test_acl_genpass(self, r): password = r.acl_genpass() assert isinstance(password, basestring) @skip_if_server_version_lt(REDIS_6_VERSION) def test_acl_getuser_setuser(self, r, request): username = 'redis-py-user' def teardown(): r.acl_deluser(username) request.addfinalizer(teardown) # test enabled=False assert r.acl_setuser(username, enabled=False, reset=True) assert r.acl_getuser(username) == { 'categories': ['-@all'], 'commands': [], 'enabled': False, 'flags': ['off'], 'keys': [], 'passwords': [], } # test nopass=True assert r.acl_setuser(username, enabled=True, reset=True, nopass=True) assert r.acl_getuser(username) == { 'categories': ['-@all'], 'commands': [], 'enabled': True, 'flags': ['on', 'nopass'], 'keys': [], 'passwords': [], } # test all args assert r.acl_setuser(username, enabled=True, reset=True, passwords=['+pass1', '+pass2'], categories=['+set', '+@hash', '-geo'], commands=['+get', '+mget', '-hset'], keys=['cache:*', 'objects:*']) acl = r.acl_getuser(username) assert set(acl['categories']) == set(['-@all', '+@set', '+@hash']) assert set(acl['commands']) == set(['+get', '+mget', '-hset']) assert acl['enabled'] is True assert acl['flags'] == ['on'] assert set(acl['keys']) == set([b'cache:*', b'objects:*']) assert len(acl['passwords']) == 2 # test reset=False keeps existing ACL and applies new ACL on top assert r.acl_setuser(username, enabled=True, reset=True, passwords=['+pass1'], categories=['+@set'], commands=['+get'], keys=['cache:*']) assert r.acl_setuser(username, enabled=True, passwords=['+pass2'], categories=['+@hash'], commands=['+mget'], keys=['objects:*']) acl = r.acl_getuser(username) assert set(acl['categories']) == set(['-@all', '+@set', '+@hash']) assert set(acl['commands']) == set(['+get', '+mget']) assert acl['enabled'] is True assert acl['flags'] == ['on'] assert set(acl['keys']) == set([b'cache:*', b'objects:*']) assert len(acl['passwords']) == 2 # test removal of passwords assert r.acl_setuser(username, enabled=True, reset=True, passwords=['+pass1', '+pass2']) assert len(r.acl_getuser(username)['passwords']) == 2 assert r.acl_setuser(username, enabled=True, passwords=['-pass2']) assert len(r.acl_getuser(username)['passwords']) == 1 # Resets and tests that hashed passwords are set properly. 
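# (The digest below is hashlib.sha256(b'password').hexdigest(); Redis 6
# ACLs store and report passwords as hex-encoded SHA-256 digests.)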
hashed_password = ('5e884898da28047151d0e56f8dc629' '2773603d0d6aabbdd62a11ef721d1542d8') assert r.acl_setuser(username, enabled=True, reset=True, hashed_passwords=['+' + hashed_password]) acl = r.acl_getuser(username) assert acl['passwords'] == [hashed_password] # test removal of hashed passwords assert r.acl_setuser(username, enabled=True, reset=True, hashed_passwords=['+' + hashed_password], passwords=['+pass1']) assert len(r.acl_getuser(username)['passwords']) == 2 assert r.acl_setuser(username, enabled=True, hashed_passwords=['-' + hashed_password]) assert len(r.acl_getuser(username)['passwords']) == 1 @skip_if_server_version_lt(REDIS_6_VERSION) def test_acl_list(self, r, request): username = 'redis-py-user' def teardown(): r.acl_deluser(username) request.addfinalizer(teardown) assert r.acl_setuser(username, enabled=False, reset=True) users = r.acl_list() assert 'user %s off -@all' % username in users @skip_if_server_version_lt(REDIS_6_VERSION) def test_acl_setuser_categories_without_prefix_fails(self, r, request): username = 'redis-py-user' def teardown(): r.acl_deluser(username) request.addfinalizer(teardown) with pytest.raises(exceptions.DataError): r.acl_setuser(username, categories=['list']) @skip_if_server_version_lt(REDIS_6_VERSION) def test_acl_setuser_commands_without_prefix_fails(self, r, request): username = 'redis-py-user' def teardown(): r.acl_deluser(username) request.addfinalizer(teardown) with pytest.raises(exceptions.DataError): r.acl_setuser(username, commands=['get']) @skip_if_server_version_lt(REDIS_6_VERSION) def test_acl_setuser_add_passwords_and_nopass_fails(self, r, request): username = 'redis-py-user' def teardown(): r.acl_deluser(username) request.addfinalizer(teardown) with pytest.raises(exceptions.DataError): r.acl_setuser(username, passwords='+mypass', nopass=True) @skip_if_server_version_lt(REDIS_6_VERSION) def test_acl_users(self, r): users = r.acl_users() assert isinstance(users, list) assert len(users) > 0 @skip_if_server_version_lt(REDIS_6_VERSION) def test_acl_whoami(self, r): username = r.acl_whoami() assert isinstance(username, basestring) def test_client_list(self, r): clients = r.client_list() assert isinstance(clients[0], dict) assert 'addr' in clients[0] @skip_if_server_version_lt('5.0.0') def test_client_list_type(self, r): with pytest.raises(exceptions.RedisError): r.client_list(_type='not a client type') for client_type in ['normal', 'master', 'replica', 'pubsub']: clients = r.client_list(_type=client_type) assert isinstance(clients, list) @skip_if_server_version_lt('5.0.0') def test_client_id(self, r): assert r.client_id() > 0 @skip_if_server_version_lt('5.0.0') def test_client_unblock(self, r): myid = r.client_id() assert not r.client_unblock(myid) assert not r.client_unblock(myid, error=True) assert not r.client_unblock(myid, error=False) @skip_if_server_version_lt('2.6.9') def test_client_getname(self, r): assert r.client_getname() is None @skip_if_server_version_lt('2.6.9') def test_client_setname(self, r): assert r.client_setname('redis_py_test') assert r.client_getname() == 'redis_py_test' @skip_if_server_version_lt('2.6.9') def test_client_kill(self, r, r2): r.client_setname('redis-py-c1') r2.client_setname('redis-py-c2') clients = [client for client in r.client_list() if client.get('name') in ['redis-py-c1', 'redis-py-c2']] assert len(clients) == 2 clients_by_name = dict([(client.get('name'), client) for client in clients]) client_addr = clients_by_name['redis-py-c2'].get('addr') assert r.client_kill(client_addr) is True clients = 
[client for client in r.client_list() if client.get('name') in ['redis-py-c1', 'redis-py-c2']] assert len(clients) == 1 assert clients[0].get('name') == 'redis-py-c1' @skip_if_server_version_lt('2.8.12') def test_client_kill_filter_invalid_params(self, r): # empty with pytest.raises(exceptions.DataError): r.client_kill_filter() # invalid skipme with pytest.raises(exceptions.DataError): r.client_kill_filter(skipme="yeah") # invalid type with pytest.raises(exceptions.DataError): r.client_kill_filter(_type="caster") @skip_if_server_version_lt('2.8.12') def test_client_kill_filter_by_id(self, r, r2): r.client_setname('redis-py-c1') r2.client_setname('redis-py-c2') clients = [client for client in r.client_list() if client.get('name') in ['redis-py-c1', 'redis-py-c2']] assert len(clients) == 2 clients_by_name = dict([(client.get('name'), client) for client in clients]) client_2_id = clients_by_name['redis-py-c2'].get('id') resp = r.client_kill_filter(_id=client_2_id) assert resp == 1 clients = [client for client in r.client_list() if client.get('name') in ['redis-py-c1', 'redis-py-c2']] assert len(clients) == 1 assert clients[0].get('name') == 'redis-py-c1' @skip_if_server_version_lt('2.8.12') def test_client_kill_filter_by_addr(self, r, r2): r.client_setname('redis-py-c1') r2.client_setname('redis-py-c2') clients = [client for client in r.client_list() if client.get('name') in ['redis-py-c1', 'redis-py-c2']] assert len(clients) == 2 clients_by_name = dict([(client.get('name'), client) for client in clients]) client_2_addr = clients_by_name['redis-py-c2'].get('addr') resp = r.client_kill_filter(addr=client_2_addr) assert resp == 1 clients = [client for client in r.client_list() if client.get('name') in ['redis-py-c1', 'redis-py-c2']] assert len(clients) == 1 assert clients[0].get('name') == 'redis-py-c1' @skip_if_server_version_lt('2.6.9') def test_client_list_after_client_setname(self, r): r.client_setname('redis_py_test') clients = r.client_list() # we don't know which client ours will be assert 'redis_py_test' in [c['name'] for c in clients] @skip_if_server_version_lt('2.9.50') def test_client_pause(self, r): assert r.client_pause(1) assert r.client_pause(timeout=1) with pytest.raises(exceptions.RedisError): r.client_pause(timeout='not an integer') def test_config_get(self, r): data = r.config_get() assert 'maxmemory' in data assert data['maxmemory'].isdigit() def test_config_resetstat(self, r): r.ping() prior_commands_processed = int(r.info()['total_commands_processed']) assert prior_commands_processed >= 1 r.config_resetstat() reset_commands_processed = int(r.info()['total_commands_processed']) assert reset_commands_processed < prior_commands_processed def test_config_set(self, r): data = r.config_get() rdbname = data['dbfilename'] try: assert r.config_set('dbfilename', 'redis_py_test.rdb') assert r.config_get()['dbfilename'] == 'redis_py_test.rdb' finally: assert r.config_set('dbfilename', rdbname) def test_dbsize(self, r): r['a'] = 'foo' r['b'] = 'bar' assert r.dbsize() == 2 def test_echo(self, r): assert r.echo('foo bar') == b'foo bar' def test_info(self, r): r['a'] = 'foo' r['b'] = 'bar' info = r.info() assert isinstance(info, dict) assert info['db9']['keys'] == 2 def test_lastsave(self, r): assert isinstance(r.lastsave(), datetime.datetime) def test_object(self, r): r['a'] = 'foo' assert isinstance(r.object('refcount', 'a'), int) assert isinstance(r.object('idletime', 'a'), int) assert r.object('encoding', 'a') in (b'raw', b'embstr') assert r.object('idletime', 'invalid-key') is None 
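    # An illustrative companion sketch (not exhaustive): OBJECT ENCODING
    # varies with the stored value; integer-looking strings use the 'int'
    # encoding (exact thresholds depend on server version/configuration).
    def test_object_encoding_int(self, r):
        r['n'] = 123
        assert r.object('encoding', 'n') == b'int'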
def test_ping(self, r): assert r.ping() def test_slowlog_get(self, r, slowlog): assert r.slowlog_reset() unicode_string = unichr(3456) + 'abcd' + unichr(3421) r.get(unicode_string) slowlog = r.slowlog_get() assert isinstance(slowlog, list) commands = [log['command'] for log in slowlog] get_command = b' '.join((b'GET', unicode_string.encode('utf-8'))) assert get_command in commands assert b'SLOWLOG RESET' in commands # the order should be ['GET ', 'SLOWLOG RESET'], # but if other clients are executing commands at the same time, there # could be commands, before, between, or after, so just check that # the two we care about are in the appropriate order. assert commands.index(get_command) < commands.index(b'SLOWLOG RESET') # make sure other attributes are typed correctly assert isinstance(slowlog[0]['start_time'], int) assert isinstance(slowlog[0]['duration'], int) def test_slowlog_get_limit(self, r, slowlog): assert r.slowlog_reset() r.get('foo') r.get('bar') slowlog = r.slowlog_get(1) assert isinstance(slowlog, list) commands = [log['command'] for log in slowlog] assert b'GET foo' not in commands assert b'GET bar' in commands def test_slowlog_length(self, r, slowlog): r.get('foo') assert isinstance(r.slowlog_len(), int) @skip_if_server_version_lt('2.6.0') def test_time(self, r): t = r.time() assert len(t) == 2 assert isinstance(t[0], int) assert isinstance(t[1], int) # BASIC KEY COMMANDS def test_append(self, r): assert r.append('a', 'a1') == 2 assert r['a'] == b'a1' assert r.append('a', 'a2') == 4 assert r['a'] == b'a1a2' @skip_if_server_version_lt('2.6.0') def test_bitcount(self, r): r.setbit('a', 5, True) assert r.bitcount('a') == 1 r.setbit('a', 6, True) assert r.bitcount('a') == 2 r.setbit('a', 5, False) assert r.bitcount('a') == 1 r.setbit('a', 9, True) r.setbit('a', 17, True) r.setbit('a', 25, True) r.setbit('a', 33, True) assert r.bitcount('a') == 5 assert r.bitcount('a', 0, -1) == 5 assert r.bitcount('a', 2, 3) == 2 assert r.bitcount('a', 2, -1) == 3 assert r.bitcount('a', -2, -1) == 2 assert r.bitcount('a', 1, 1) == 1 @skip_if_server_version_lt('2.6.0') def test_bitop_not_empty_string(self, r): r['a'] = '' r.bitop('not', 'r', 'a') assert r.get('r') is None @skip_if_server_version_lt('2.6.0') def test_bitop_not(self, r): test_str = b'\xAA\x00\xFF\x55' correct = ~0xAA00FF55 & 0xFFFFFFFF r['a'] = test_str r.bitop('not', 'r', 'a') assert int(binascii.hexlify(r['r']), 16) == correct @skip_if_server_version_lt('2.6.0') def test_bitop_not_in_place(self, r): test_str = b'\xAA\x00\xFF\x55' correct = ~0xAA00FF55 & 0xFFFFFFFF r['a'] = test_str r.bitop('not', 'a', 'a') assert int(binascii.hexlify(r['a']), 16) == correct @skip_if_server_version_lt('2.6.0') def test_bitop_single_string(self, r): test_str = b'\x01\x02\xFF' r['a'] = test_str r.bitop('and', 'res1', 'a') r.bitop('or', 'res2', 'a') r.bitop('xor', 'res3', 'a') assert r['res1'] == test_str assert r['res2'] == test_str assert r['res3'] == test_str @skip_if_server_version_lt('2.6.0') def test_bitop_string_operands(self, r): r['a'] = b'\x01\x02\xFF\xFF' r['b'] = b'\x01\x02\xFF' r.bitop('and', 'res1', 'a', 'b') r.bitop('or', 'res2', 'a', 'b') r.bitop('xor', 'res3', 'a', 'b') assert int(binascii.hexlify(r['res1']), 16) == 0x0102FF00 assert int(binascii.hexlify(r['res2']), 16) == 0x0102FFFF assert int(binascii.hexlify(r['res3']), 16) == 0x000000FF @skip_if_server_version_lt('2.8.7') def test_bitpos(self, r): key = 'key:bitpos' r.set(key, b'\xff\xf0\x00') assert r.bitpos(key, 0) == 12 assert r.bitpos(key, 0, 2, -1) == 16 assert r.bitpos(key, 
0, -2, -1) == 12 r.set(key, b'\x00\xff\xf0') assert r.bitpos(key, 1, 0) == 8 assert r.bitpos(key, 1, 1) == 8 r.set(key, b'\x00\x00\x00') assert r.bitpos(key, 1) == -1 @skip_if_server_version_lt('2.8.7') def test_bitpos_wrong_arguments(self, r): key = 'key:bitpos:wrong:args' r.set(key, b'\xff\xf0\x00') with pytest.raises(exceptions.RedisError): r.bitpos(key, 0, end=1) == 12 with pytest.raises(exceptions.RedisError): r.bitpos(key, 7) == 12 def test_decr(self, r): assert r.decr('a') == -1 assert r['a'] == b'-1' assert r.decr('a') == -2 assert r['a'] == b'-2' assert r.decr('a', amount=5) == -7 assert r['a'] == b'-7' def test_decrby(self, r): assert r.decrby('a', amount=2) == -2 assert r.decrby('a', amount=3) == -5 assert r['a'] == b'-5' def test_delete(self, r): assert r.delete('a') == 0 r['a'] = 'foo' assert r.delete('a') == 1 def test_delete_with_multiple_keys(self, r): r['a'] = 'foo' r['b'] = 'bar' assert r.delete('a', 'b') == 2 assert r.get('a') is None assert r.get('b') is None def test_delitem(self, r): r['a'] = 'foo' del r['a'] assert r.get('a') is None @skip_if_server_version_lt('4.0.0') def test_unlink(self, r): assert r.unlink('a') == 0 r['a'] = 'foo' assert r.unlink('a') == 1 assert r.get('a') is None @skip_if_server_version_lt('4.0.0') def test_unlink_with_multiple_keys(self, r): r['a'] = 'foo' r['b'] = 'bar' assert r.unlink('a', 'b') == 2 assert r.get('a') is None assert r.get('b') is None @skip_if_server_version_lt('2.6.0') def test_dump_and_restore(self, r): r['a'] = 'foo' dumped = r.dump('a') del r['a'] r.restore('a', 0, dumped) assert r['a'] == b'foo' @skip_if_server_version_lt('3.0.0') def test_dump_and_restore_and_replace(self, r): r['a'] = 'bar' dumped = r.dump('a') with pytest.raises(redis.ResponseError): r.restore('a', 0, dumped) r.restore('a', 0, dumped, replace=True) assert r['a'] == b'bar' def test_exists(self, r): assert r.exists('a') == 0 r['a'] = 'foo' r['b'] = 'bar' assert r.exists('a') == 1 assert r.exists('a', 'b') == 2 def test_exists_contains(self, r): assert 'a' not in r r['a'] = 'foo' assert 'a' in r def test_expire(self, r): assert not r.expire('a', 10) r['a'] = 'foo' assert r.expire('a', 10) assert 0 < r.ttl('a') <= 10 assert r.persist('a') assert r.ttl('a') == -1 def test_expireat_datetime(self, r): expire_at = redis_server_time(r) + datetime.timedelta(minutes=1) r['a'] = 'foo' assert r.expireat('a', expire_at) assert 0 < r.ttl('a') <= 61 def test_expireat_no_key(self, r): expire_at = redis_server_time(r) + datetime.timedelta(minutes=1) assert not r.expireat('a', expire_at) def test_expireat_unixtime(self, r): expire_at = redis_server_time(r) + datetime.timedelta(minutes=1) r['a'] = 'foo' expire_at_seconds = int(time.mktime(expire_at.timetuple())) assert r.expireat('a', expire_at_seconds) assert 0 < r.ttl('a') <= 61 def test_get_and_set(self, r): # get and set can't be tested independently of each other assert r.get('a') is None byte_string = b'value' integer = 5 unicode_string = unichr(3456) + 'abcd' + unichr(3421) assert r.set('byte_string', byte_string) assert r.set('integer', 5) assert r.set('unicode_string', unicode_string) assert r.get('byte_string') == byte_string assert r.get('integer') == str(integer).encode() assert r.get('unicode_string').decode('utf-8') == unicode_string def test_getitem_and_setitem(self, r): r['a'] = 'bar' assert r['a'] == b'bar' def test_getitem_raises_keyerror_for_missing_key(self, r): with pytest.raises(KeyError): r['a'] def test_getitem_does_not_raise_keyerror_for_empty_string(self, r): r['a'] = b"" assert r['a'] == b"" 
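    # An illustrative sketch of the bit layout: SETBIT addresses bits from
    # the most significant bit of the first byte, so setting bit 5 of a
    # fresh key produces the single byte 0b00000100.
    def test_setbit_byte_layout(self, r):
        r.setbit('bits', 5, True)
        assert r['bits'] == b'\x04'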
def test_get_set_bit(self, r): # no value assert not r.getbit('a', 5) # set bit 5 assert not r.setbit('a', 5, True) assert r.getbit('a', 5) # unset bit 4 assert not r.setbit('a', 4, False) assert not r.getbit('a', 4) # set bit 4 assert not r.setbit('a', 4, True) assert r.getbit('a', 4) # set bit 5 again assert r.setbit('a', 5, True) assert r.getbit('a', 5) def test_getrange(self, r): r['a'] = 'foo' assert r.getrange('a', 0, 0) == b'f' assert r.getrange('a', 0, 2) == b'foo' assert r.getrange('a', 3, 4) == b'' def test_getset(self, r): assert r.getset('a', 'foo') is None assert r.getset('a', 'bar') == b'foo' assert r.get('a') == b'bar' def test_incr(self, r): assert r.incr('a') == 1 assert r['a'] == b'1' assert r.incr('a') == 2 assert r['a'] == b'2' assert r.incr('a', amount=5) == 7 assert r['a'] == b'7' def test_incrby(self, r): assert r.incrby('a') == 1 assert r.incrby('a', 4) == 5 assert r['a'] == b'5' @skip_if_server_version_lt('2.6.0') def test_incrbyfloat(self, r): assert r.incrbyfloat('a') == 1.0 assert r['a'] == b'1' assert r.incrbyfloat('a', 1.1) == 2.1 assert float(r['a']) == float(2.1) def test_keys(self, r): assert r.keys() == [] keys_with_underscores = {b'test_a', b'test_b'} keys = keys_with_underscores.union({b'testc'}) for key in keys: r[key] = 1 assert set(r.keys(pattern='test_*')) == keys_with_underscores assert set(r.keys(pattern='test*')) == keys def test_mget(self, r): assert r.mget([]) == [] assert r.mget(['a', 'b']) == [None, None] r['a'] = '1' r['b'] = '2' r['c'] = '3' assert r.mget('a', 'other', 'b', 'c') == [b'1', None, b'2', b'3'] def test_mset(self, r): d = {'a': b'1', 'b': b'2', 'c': b'3'} assert r.mset(d) for k, v in iteritems(d): assert r[k] == v def test_msetnx(self, r): d = {'a': b'1', 'b': b'2', 'c': b'3'} assert r.msetnx(d) d2 = {'a': b'x', 'd': b'4'} assert not r.msetnx(d2) for k, v in iteritems(d): assert r[k] == v assert r.get('d') is None @skip_if_server_version_lt('2.6.0') def test_pexpire(self, r): assert not r.pexpire('a', 60000) r['a'] = 'foo' assert r.pexpire('a', 60000) assert 0 < r.pttl('a') <= 60000 assert r.persist('a') assert r.pttl('a') == -1 @skip_if_server_version_lt('2.6.0') def test_pexpireat_datetime(self, r): expire_at = redis_server_time(r) + datetime.timedelta(minutes=1) r['a'] = 'foo' assert r.pexpireat('a', expire_at) assert 0 < r.pttl('a') <= 61000 @skip_if_server_version_lt('2.6.0') def test_pexpireat_no_key(self, r): expire_at = redis_server_time(r) + datetime.timedelta(minutes=1) assert not r.pexpireat('a', expire_at) @skip_if_server_version_lt('2.6.0') def test_pexpireat_unixtime(self, r): expire_at = redis_server_time(r) + datetime.timedelta(minutes=1) r['a'] = 'foo' expire_at_seconds = int(time.mktime(expire_at.timetuple())) * 1000 assert r.pexpireat('a', expire_at_seconds) assert 0 < r.pttl('a') <= 61000 @skip_if_server_version_lt('2.6.0') def test_psetex(self, r): assert r.psetex('a', 1000, 'value') assert r['a'] == b'value' assert 0 < r.pttl('a') <= 1000 @skip_if_server_version_lt('2.6.0') def test_psetex_timedelta(self, r): expire_at = datetime.timedelta(milliseconds=1000) assert r.psetex('a', expire_at, 'value') assert r['a'] == b'value' assert 0 < r.pttl('a') <= 1000 @skip_if_server_version_lt('2.6.0') def test_pttl(self, r): assert not r.pexpire('a', 10000) r['a'] = '1' assert r.pexpire('a', 10000) assert 0 < r.pttl('a') <= 10000 assert r.persist('a') assert r.pttl('a') == -1 @skip_if_server_version_lt('2.8.0') def test_pttl_no_key(self, r): "PTTL on servers 2.8 and after return -2 when the key doesn't exist" assert 
r.pttl('a') == -2 def test_randomkey(self, r): assert r.randomkey() is None for key in ('a', 'b', 'c'): r[key] = 1 assert r.randomkey() in (b'a', b'b', b'c') def test_rename(self, r): r['a'] = '1' assert r.rename('a', 'b') assert r.get('a') is None assert r['b'] == b'1' def test_renamenx(self, r): r['a'] = '1' r['b'] = '2' assert not r.renamenx('a', 'b') assert r['a'] == b'1' assert r['b'] == b'2' @skip_if_server_version_lt('2.6.0') def test_set_nx(self, r): assert r.set('a', '1', nx=True) assert not r.set('a', '2', nx=True) assert r['a'] == b'1' @skip_if_server_version_lt('2.6.0') def test_set_xx(self, r): assert not r.set('a', '1', xx=True) assert r.get('a') is None r['a'] = 'bar' assert r.set('a', '2', xx=True) assert r.get('a') == b'2' @skip_if_server_version_lt('2.6.0') def test_set_px(self, r): assert r.set('a', '1', px=10000) assert r['a'] == b'1' assert 0 < r.pttl('a') <= 10000 assert 0 < r.ttl('a') <= 10 @skip_if_server_version_lt('2.6.0') def test_set_px_timedelta(self, r): expire_at = datetime.timedelta(milliseconds=1000) assert r.set('a', '1', px=expire_at) assert 0 < r.pttl('a') <= 1000 assert 0 < r.ttl('a') <= 1 @skip_if_server_version_lt('2.6.0') def test_set_ex(self, r): assert r.set('a', '1', ex=10) assert 0 < r.ttl('a') <= 10 @skip_if_server_version_lt('2.6.0') def test_set_ex_timedelta(self, r): expire_at = datetime.timedelta(seconds=60) assert r.set('a', '1', ex=expire_at) assert 0 < r.ttl('a') <= 60 @skip_if_server_version_lt('2.6.0') def test_set_multipleoptions(self, r): r['a'] = 'val' assert r.set('a', '1', xx=True, px=10000) assert 0 < r.ttl('a') <= 10 @skip_if_server_version_lt(REDIS_6_VERSION) def test_set_keepttl(self, r): r['a'] = 'val' assert r.set('a', '1', xx=True, px=10000) assert 0 < r.ttl('a') <= 10 r.set('a', '2', keepttl=True) assert r.get('a') == b'2' assert 0 < r.ttl('a') <= 10 def test_setex(self, r): assert r.setex('a', 60, '1') assert r['a'] == b'1' assert 0 < r.ttl('a') <= 60 def test_setnx(self, r): assert r.setnx('a', '1') assert r['a'] == b'1' assert not r.setnx('a', '2') assert r['a'] == b'1' def test_setrange(self, r): assert r.setrange('a', 5, 'foo') == 8 assert r['a'] == b'\0\0\0\0\0foo' r['a'] = 'abcdefghijh' assert r.setrange('a', 6, '12345') == 11 assert r['a'] == b'abcdef12345' def test_strlen(self, r): r['a'] = 'foo' assert r.strlen('a') == 3 def test_substr(self, r): r['a'] = '0123456789' assert r.substr('a', 0) == b'0123456789' assert r.substr('a', 2) == b'23456789' assert r.substr('a', 3, 5) == b'345' assert r.substr('a', 3, -2) == b'345678' def test_ttl(self, r): r['a'] = '1' assert r.expire('a', 10) assert 0 < r.ttl('a') <= 10 assert r.persist('a') assert r.ttl('a') == -1 @skip_if_server_version_lt('2.8.0') def test_ttl_nokey(self, r): "TTL on servers 2.8 and after return -2 when the key doesn't exist" assert r.ttl('a') == -2 def test_type(self, r): assert r.type('a') == b'none' r['a'] = '1' assert r.type('a') == b'string' del r['a'] r.lpush('a', '1') assert r.type('a') == b'list' del r['a'] r.sadd('a', '1') assert r.type('a') == b'set' del r['a'] r.zadd('a', {'1': 1}) assert r.type('a') == b'zset' # LIST COMMANDS def test_blpop(self, r): r.rpush('a', '1', '2') r.rpush('b', '3', '4') assert r.blpop(['b', 'a'], timeout=1) == (b'b', b'3') assert r.blpop(['b', 'a'], timeout=1) == (b'b', b'4') assert r.blpop(['b', 'a'], timeout=1) == (b'a', b'1') assert r.blpop(['b', 'a'], timeout=1) == (b'a', b'2') assert r.blpop(['b', 'a'], timeout=1) is None r.rpush('c', '1') assert r.blpop('c', timeout=1) == (b'c', b'1') def test_brpop(self, r): 
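        # BRPOP pops from the tail of the first non-empty list among the
        # given keys, blocking up to ``timeout`` seconds and returning None
        # if nothing arrives in time.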
r.rpush('a', '1', '2') r.rpush('b', '3', '4') assert r.brpop(['b', 'a'], timeout=1) == (b'b', b'4') assert r.brpop(['b', 'a'], timeout=1) == (b'b', b'3') assert r.brpop(['b', 'a'], timeout=1) == (b'a', b'2') assert r.brpop(['b', 'a'], timeout=1) == (b'a', b'1') assert r.brpop(['b', 'a'], timeout=1) is None r.rpush('c', '1') assert r.brpop('c', timeout=1) == (b'c', b'1') def test_brpoplpush(self, r): r.rpush('a', '1', '2') r.rpush('b', '3', '4') assert r.brpoplpush('a', 'b') == b'2' assert r.brpoplpush('a', 'b') == b'1' assert r.brpoplpush('a', 'b', timeout=1) is None assert r.lrange('a', 0, -1) == [] assert r.lrange('b', 0, -1) == [b'1', b'2', b'3', b'4'] def test_brpoplpush_empty_string(self, r): r.rpush('a', '') assert r.brpoplpush('a', 'b') == b'' def test_lindex(self, r): r.rpush('a', '1', '2', '3') assert r.lindex('a', '0') == b'1' assert r.lindex('a', '1') == b'2' assert r.lindex('a', '2') == b'3' def test_linsert(self, r): r.rpush('a', '1', '2', '3') assert r.linsert('a', 'after', '2', '2.5') == 4 assert r.lrange('a', 0, -1) == [b'1', b'2', b'2.5', b'3'] assert r.linsert('a', 'before', '2', '1.5') == 5 assert r.lrange('a', 0, -1) == \ [b'1', b'1.5', b'2', b'2.5', b'3'] def test_llen(self, r): r.rpush('a', '1', '2', '3') assert r.llen('a') == 3 def test_lpop(self, r): r.rpush('a', '1', '2', '3') assert r.lpop('a') == b'1' assert r.lpop('a') == b'2' assert r.lpop('a') == b'3' assert r.lpop('a') is None def test_lpush(self, r): assert r.lpush('a', '1') == 1 assert r.lpush('a', '2') == 2 assert r.lpush('a', '3', '4') == 4 assert r.lrange('a', 0, -1) == [b'4', b'3', b'2', b'1'] def test_lpushx(self, r): assert r.lpushx('a', '1') == 0 assert r.lrange('a', 0, -1) == [] r.rpush('a', '1', '2', '3') assert r.lpushx('a', '4') == 4 assert r.lrange('a', 0, -1) == [b'4', b'1', b'2', b'3'] def test_lrange(self, r): r.rpush('a', '1', '2', '3', '4', '5') assert r.lrange('a', 0, 2) == [b'1', b'2', b'3'] assert r.lrange('a', 2, 10) == [b'3', b'4', b'5'] assert r.lrange('a', 0, -1) == [b'1', b'2', b'3', b'4', b'5'] def test_lrem(self, r): r.rpush('a', 'Z', 'b', 'Z', 'Z', 'c', 'Z', 'Z') # remove the first 'Z' item assert r.lrem('a', 1, 'Z') == 1 assert r.lrange('a', 0, -1) == [b'b', b'Z', b'Z', b'c', b'Z', b'Z'] # remove the last 2 'Z' items assert r.lrem('a', -2, 'Z') == 2 assert r.lrange('a', 0, -1) == [b'b', b'Z', b'Z', b'c'] # remove all 'Z' items assert r.lrem('a', 0, 'Z') == 2 assert r.lrange('a', 0, -1) == [b'b', b'c'] def test_lset(self, r): r.rpush('a', '1', '2', '3') assert r.lrange('a', 0, -1) == [b'1', b'2', b'3'] assert r.lset('a', 1, '4') assert r.lrange('a', 0, 2) == [b'1', b'4', b'3'] def test_ltrim(self, r): r.rpush('a', '1', '2', '3') assert r.ltrim('a', 0, 1) assert r.lrange('a', 0, -1) == [b'1', b'2'] def test_rpop(self, r): r.rpush('a', '1', '2', '3') assert r.rpop('a') == b'3' assert r.rpop('a') == b'2' assert r.rpop('a') == b'1' assert r.rpop('a') is None def test_rpoplpush(self, r): r.rpush('a', 'a1', 'a2', 'a3') r.rpush('b', 'b1', 'b2', 'b3') assert r.rpoplpush('a', 'b') == b'a3' assert r.lrange('a', 0, -1) == [b'a1', b'a2'] assert r.lrange('b', 0, -1) == [b'a3', b'b1', b'b2', b'b3'] def test_rpush(self, r): assert r.rpush('a', '1') == 1 assert r.rpush('a', '2') == 2 assert r.rpush('a', '3', '4') == 4 assert r.lrange('a', 0, -1) == [b'1', b'2', b'3', b'4'] def test_rpushx(self, r): assert r.rpushx('a', 'b') == 0 assert r.lrange('a', 0, -1) == [] r.rpush('a', '1', '2', '3') assert r.rpushx('a', '4') == 4 assert r.lrange('a', 0, -1) == [b'1', b'2', b'3', b'4'] # SCAN COMMANDS 
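    # SCAN and friends are cursor-based: keys present for the whole
    # iteration are returned at least once, but duplicates are possible if
    # the keyspace changes mid-scan. An illustrative consumer therefore
    # collects results into a set:
    #
    #     seen = set()
    #     for key in r.scan_iter(count=100):
    #         seen.add(key)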
@skip_if_server_version_lt('2.8.0') def test_scan(self, r): r.set('a', 1) r.set('b', 2) r.set('c', 3) cursor, keys = r.scan() assert cursor == 0 assert set(keys) == {b'a', b'b', b'c'} _, keys = r.scan(match='a') assert set(keys) == {b'a'} @skip_if_server_version_lt(REDIS_6_VERSION) def test_scan_type(self, r): r.sadd('a-set', 1) r.hset('a-hash', 'foo', 2) r.lpush('a-list', 'aux', 3) _, keys = r.scan(match='a*', _type='SET') assert set(keys) == {b'a-set'} @skip_if_server_version_lt('2.8.0') def test_scan_iter(self, r): r.set('a', 1) r.set('b', 2) r.set('c', 3) keys = list(r.scan_iter()) assert set(keys) == {b'a', b'b', b'c'} keys = list(r.scan_iter(match='a')) assert set(keys) == {b'a'} @skip_if_server_version_lt('2.8.0') def test_sscan(self, r): r.sadd('a', 1, 2, 3) cursor, members = r.sscan('a') assert cursor == 0 assert set(members) == {b'1', b'2', b'3'} _, members = r.sscan('a', match=b'1') assert set(members) == {b'1'} @skip_if_server_version_lt('2.8.0') def test_sscan_iter(self, r): r.sadd('a', 1, 2, 3) members = list(r.sscan_iter('a')) assert set(members) == {b'1', b'2', b'3'} members = list(r.sscan_iter('a', match=b'1')) assert set(members) == {b'1'} @skip_if_server_version_lt('2.8.0') def test_hscan(self, r): r.hset('a', mapping={'a': 1, 'b': 2, 'c': 3}) cursor, dic = r.hscan('a') assert cursor == 0 assert dic == {b'a': b'1', b'b': b'2', b'c': b'3'} _, dic = r.hscan('a', match='a') assert dic == {b'a': b'1'} @skip_if_server_version_lt('2.8.0') def test_hscan_iter(self, r): r.hset('a', mapping={'a': 1, 'b': 2, 'c': 3}) dic = dict(r.hscan_iter('a')) assert dic == {b'a': b'1', b'b': b'2', b'c': b'3'} dic = dict(r.hscan_iter('a', match='a')) assert dic == {b'a': b'1'} @skip_if_server_version_lt('2.8.0') def test_zscan(self, r): r.zadd('a', {'a': 1, 'b': 2, 'c': 3}) cursor, pairs = r.zscan('a') assert cursor == 0 assert set(pairs) == {(b'a', 1), (b'b', 2), (b'c', 3)} _, pairs = r.zscan('a', match='a') assert set(pairs) == {(b'a', 1)} @skip_if_server_version_lt('2.8.0') def test_zscan_iter(self, r): r.zadd('a', {'a': 1, 'b': 2, 'c': 3}) pairs = list(r.zscan_iter('a')) assert set(pairs) == {(b'a', 1), (b'b', 2), (b'c', 3)} pairs = list(r.zscan_iter('a', match='a')) assert set(pairs) == {(b'a', 1)} # SET COMMANDS def test_sadd(self, r): members = {b'1', b'2', b'3'} r.sadd('a', *members) assert r.smembers('a') == members def test_scard(self, r): r.sadd('a', '1', '2', '3') assert r.scard('a') == 3 def test_sdiff(self, r): r.sadd('a', '1', '2', '3') assert r.sdiff('a', 'b') == {b'1', b'2', b'3'} r.sadd('b', '2', '3') assert r.sdiff('a', 'b') == {b'1'} def test_sdiffstore(self, r): r.sadd('a', '1', '2', '3') assert r.sdiffstore('c', 'a', 'b') == 3 assert r.smembers('c') == {b'1', b'2', b'3'} r.sadd('b', '2', '3') assert r.sdiffstore('c', 'a', 'b') == 1 assert r.smembers('c') == {b'1'} def test_sinter(self, r): r.sadd('a', '1', '2', '3') assert r.sinter('a', 'b') == set() r.sadd('b', '2', '3') assert r.sinter('a', 'b') == {b'2', b'3'} def test_sinterstore(self, r): r.sadd('a', '1', '2', '3') assert r.sinterstore('c', 'a', 'b') == 0 assert r.smembers('c') == set() r.sadd('b', '2', '3') assert r.sinterstore('c', 'a', 'b') == 2 assert r.smembers('c') == {b'2', b'3'} def test_sismember(self, r): r.sadd('a', '1', '2', '3') assert r.sismember('a', '1') assert r.sismember('a', '2') assert r.sismember('a', '3') assert not r.sismember('a', '4') def test_smembers(self, r): r.sadd('a', '1', '2', '3') assert r.smembers('a') == {b'1', b'2', b'3'} def test_smove(self, r): r.sadd('a', 'a1', 'a2') r.sadd('b', 
    def test_smove(self, r):
        r.sadd('a', 'a1', 'a2')
        r.sadd('b', 'b1', 'b2')
        assert r.smove('a', 'b', 'a1')
        assert r.smembers('a') == {b'a2'}
        assert r.smembers('b') == {b'b1', b'b2', b'a1'}

    def test_spop(self, r):
        s = [b'1', b'2', b'3']
        r.sadd('a', *s)
        value = r.spop('a')
        assert value in s
        assert r.smembers('a') == set(s) - {value}

    @skip_if_server_version_lt('3.2.0')
    def test_spop_multi_value(self, r):
        s = [b'1', b'2', b'3']
        r.sadd('a', *s)
        values = r.spop('a', 2)
        assert len(values) == 2
        for value in values:
            assert value in s
        assert r.spop('a', 1) == list(set(s) - set(values))

    def test_srandmember(self, r):
        s = [b'1', b'2', b'3']
        r.sadd('a', *s)
        assert r.srandmember('a') in s

    @skip_if_server_version_lt('2.6.0')
    def test_srandmember_multi_value(self, r):
        s = [b'1', b'2', b'3']
        r.sadd('a', *s)
        randoms = r.srandmember('a', number=2)
        assert len(randoms) == 2
        assert set(randoms).intersection(s) == set(randoms)

    def test_srem(self, r):
        r.sadd('a', '1', '2', '3', '4')
        assert r.srem('a', '5') == 0
        assert r.srem('a', '2', '4') == 2
        assert r.smembers('a') == {b'1', b'3'}

    def test_sunion(self, r):
        r.sadd('a', '1', '2')
        r.sadd('b', '2', '3')
        assert r.sunion('a', 'b') == {b'1', b'2', b'3'}

    def test_sunionstore(self, r):
        r.sadd('a', '1', '2')
        r.sadd('b', '2', '3')
        assert r.sunionstore('c', 'a', 'b') == 3
        assert r.smembers('c') == {b'1', b'2', b'3'}

    # SORTED SET COMMANDS
    def test_zadd(self, r):
        mapping = {'a1': 1.0, 'a2': 2.0, 'a3': 3.0}
        r.zadd('a', mapping)
        assert r.zrange('a', 0, -1, withscores=True) == \
            [(b'a1', 1.0), (b'a2', 2.0), (b'a3', 3.0)]

        # error cases
        with pytest.raises(exceptions.DataError):
            r.zadd('a', {})

        # cannot use both nx and xx options
        with pytest.raises(exceptions.DataError):
            r.zadd('a', mapping, nx=True, xx=True)

        # cannot use the incr option with more than one value
        with pytest.raises(exceptions.DataError):
            r.zadd('a', mapping, incr=True)

    def test_zadd_nx(self, r):
        assert r.zadd('a', {'a1': 1}) == 1
        assert r.zadd('a', {'a1': 99, 'a2': 2}, nx=True) == 1
        assert r.zrange('a', 0, -1, withscores=True) == \
            [(b'a1', 1.0), (b'a2', 2.0)]

    def test_zadd_xx(self, r):
        assert r.zadd('a', {'a1': 1}) == 1
        assert r.zadd('a', {'a1': 99, 'a2': 2}, xx=True) == 0
        assert r.zrange('a', 0, -1, withscores=True) == \
            [(b'a1', 99.0)]

    def test_zadd_ch(self, r):
        assert r.zadd('a', {'a1': 1}) == 1
        assert r.zadd('a', {'a1': 99, 'a2': 2}, ch=True) == 2
        assert r.zrange('a', 0, -1, withscores=True) == \
            [(b'a2', 2.0), (b'a1', 99.0)]

    def test_zadd_incr(self, r):
        assert r.zadd('a', {'a1': 1}) == 1
        assert r.zadd('a', {'a1': 4.5}, incr=True) == 5.5
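    # Illustrative sketch (not part of the original suite): the ZADD flag
    # combinations tested above, in plain usage. nx/xx gate creation vs.
    # update, ch changes the return value to "elements changed", and incr
    # turns ZADD into ZINCRBY for a single member. Key names are made up.
    def _example_zadd_flags(self, r):
        r.zadd('scores', {'alice': 10})            # plain add
        r.zadd('scores', {'alice': 99}, nx=True)   # no-op: member exists
        r.zadd('scores', {'bob': 5}, xx=True)      # no-op: member is new
        return r.zadd('scores', {'alice': 1}, incr=True)  # -> 11.0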
    def test_zadd_incr_with_xx(self, r):
        # this asks zadd to incr 'a1' only if it exists, but it clearly
        # doesn't. Redis returns a null value in this case and so should
        # redis-py
        assert r.zadd('a', {'a1': 1}, xx=True, incr=True) is None

    def test_zcard(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2, 'a3': 3})
        assert r.zcard('a') == 3

    def test_zcount(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2, 'a3': 3})
        assert r.zcount('a', '-inf', '+inf') == 3
        assert r.zcount('a', 1, 2) == 2
        assert r.zcount('a', '(' + str(1), 2) == 1
        assert r.zcount('a', 1, '(' + str(2)) == 1
        assert r.zcount('a', 10, 20) == 0

    def test_zincrby(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2, 'a3': 3})
        assert r.zincrby('a', 1, 'a2') == 3.0
        assert r.zincrby('a', 5, 'a3') == 8.0
        assert r.zscore('a', 'a2') == 3.0
        assert r.zscore('a', 'a3') == 8.0

    @skip_if_server_version_lt('2.8.9')
    def test_zlexcount(self, r):
        r.zadd('a', {'a': 0, 'b': 0, 'c': 0, 'd': 0, 'e': 0, 'f': 0, 'g': 0})
        assert r.zlexcount('a', '-', '+') == 7
        assert r.zlexcount('a', '[b', '[f') == 5

    def test_zinterstore_sum(self, r):
        r.zadd('a', {'a1': 1, 'a2': 1, 'a3': 1})
        r.zadd('b', {'a1': 2, 'a2': 2, 'a3': 2})
        r.zadd('c', {'a1': 6, 'a3': 5, 'a4': 4})
        assert r.zinterstore('d', ['a', 'b', 'c']) == 2
        assert r.zrange('d', 0, -1, withscores=True) == \
            [(b'a3', 8), (b'a1', 9)]

    def test_zinterstore_max(self, r):
        r.zadd('a', {'a1': 1, 'a2': 1, 'a3': 1})
        r.zadd('b', {'a1': 2, 'a2': 2, 'a3': 2})
        r.zadd('c', {'a1': 6, 'a3': 5, 'a4': 4})
        assert r.zinterstore('d', ['a', 'b', 'c'], aggregate='MAX') == 2
        assert r.zrange('d', 0, -1, withscores=True) == \
            [(b'a3', 5), (b'a1', 6)]

    def test_zinterstore_min(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2, 'a3': 3})
        r.zadd('b', {'a1': 2, 'a2': 3, 'a3': 5})
        r.zadd('c', {'a1': 6, 'a3': 5, 'a4': 4})
        assert r.zinterstore('d', ['a', 'b', 'c'], aggregate='MIN') == 2
        assert r.zrange('d', 0, -1, withscores=True) == \
            [(b'a1', 1), (b'a3', 3)]

    def test_zinterstore_with_weight(self, r):
        r.zadd('a', {'a1': 1, 'a2': 1, 'a3': 1})
        r.zadd('b', {'a1': 2, 'a2': 2, 'a3': 2})
        r.zadd('c', {'a1': 6, 'a3': 5, 'a4': 4})
        assert r.zinterstore('d', {'a': 1, 'b': 2, 'c': 3}) == 2
        assert r.zrange('d', 0, -1, withscores=True) == \
            [(b'a3', 20), (b'a1', 23)]

    @skip_if_server_version_lt('4.9.0')
    def test_zpopmax(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2, 'a3': 3})
        assert r.zpopmax('a') == [(b'a3', 3)]

        # with count
        assert r.zpopmax('a', count=2) == \
            [(b'a2', 2), (b'a1', 1)]

    @skip_if_server_version_lt('4.9.0')
    def test_zpopmin(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2, 'a3': 3})
        assert r.zpopmin('a') == [(b'a1', 1)]

        # with count
        assert r.zpopmin('a', count=2) == \
            [(b'a2', 2), (b'a3', 3)]

    @skip_if_server_version_lt('4.9.0')
    def test_bzpopmax(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2})
        r.zadd('b', {'b1': 10, 'b2': 20})
        assert r.bzpopmax(['b', 'a'], timeout=1) == (b'b', b'b2', 20)
        assert r.bzpopmax(['b', 'a'], timeout=1) == (b'b', b'b1', 10)
        assert r.bzpopmax(['b', 'a'], timeout=1) == (b'a', b'a2', 2)
        assert r.bzpopmax(['b', 'a'], timeout=1) == (b'a', b'a1', 1)
        assert r.bzpopmax(['b', 'a'], timeout=1) is None
        r.zadd('c', {'c1': 100})
        assert r.bzpopmax('c', timeout=1) == (b'c', b'c1', 100)

    @skip_if_server_version_lt('4.9.0')
    def test_bzpopmin(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2})
        r.zadd('b', {'b1': 10, 'b2': 20})
        assert r.bzpopmin(['b', 'a'], timeout=1) == (b'b', b'b1', 10)
        assert r.bzpopmin(['b', 'a'], timeout=1) == (b'b', b'b2', 20)
        assert r.bzpopmin(['b', 'a'], timeout=1) == (b'a', b'a1', 1)
        assert r.bzpopmin(['b', 'a'], timeout=1) == (b'a', b'a2', 2)
        assert r.bzpopmin(['b', 'a'], timeout=1) is None
        r.zadd('c', {'c1': 100})
        assert r.bzpopmin('c', timeout=1) == (b'c', b'c1', 100)
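    # Illustrative sketch (not part of the original suite): BZPOPMIN as a
    # simple priority-queue consumer. It blocks for up to `timeout` seconds
    # and returns (key, member, score) or None; the queue name is made up.
    def _example_blocking_pop(self, r):
        item = r.bzpopmin('jobs', timeout=1)
        if item is not None:
            queue, member, score = item
            return member
        return None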
    def test_zrange(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2, 'a3': 3})
        assert r.zrange('a', 0, 1) == [b'a1', b'a2']
        assert r.zrange('a', 1, 2) == [b'a2', b'a3']

        # withscores
        assert r.zrange('a', 0, 1, withscores=True) == \
            [(b'a1', 1.0), (b'a2', 2.0)]
        assert r.zrange('a', 1, 2, withscores=True) == \
            [(b'a2', 2.0), (b'a3', 3.0)]

        # custom score function
        assert r.zrange('a', 0, 1, withscores=True, score_cast_func=int) == \
            [(b'a1', 1), (b'a2', 2)]

    @skip_if_server_version_lt('2.8.9')
    def test_zrangebylex(self, r):
        r.zadd('a', {'a': 0, 'b': 0, 'c': 0, 'd': 0, 'e': 0, 'f': 0, 'g': 0})
        assert r.zrangebylex('a', '-', '[c') == [b'a', b'b', b'c']
        assert r.zrangebylex('a', '-', '(c') == [b'a', b'b']
        assert r.zrangebylex('a', '[aaa', '(g') == \
            [b'b', b'c', b'd', b'e', b'f']
        assert r.zrangebylex('a', '[f', '+') == [b'f', b'g']
        assert r.zrangebylex('a', '-', '+', start=3, num=2) == [b'd', b'e']

    @skip_if_server_version_lt('2.9.9')
    def test_zrevrangebylex(self, r):
        r.zadd('a', {'a': 0, 'b': 0, 'c': 0, 'd': 0, 'e': 0, 'f': 0, 'g': 0})
        assert r.zrevrangebylex('a', '[c', '-') == [b'c', b'b', b'a']
        assert r.zrevrangebylex('a', '(c', '-') == [b'b', b'a']
        assert r.zrevrangebylex('a', '(g', '[aaa') == \
            [b'f', b'e', b'd', b'c', b'b']
        assert r.zrevrangebylex('a', '+', '[f') == [b'g', b'f']
        assert r.zrevrangebylex('a', '+', '-', start=3, num=2) == \
            [b'd', b'c']

    def test_zrangebyscore(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2, 'a3': 3, 'a4': 4, 'a5': 5})
        assert r.zrangebyscore('a', 2, 4) == [b'a2', b'a3', b'a4']

        # slicing with start/num
        assert r.zrangebyscore('a', 2, 4, start=1, num=2) == \
            [b'a3', b'a4']

        # withscores
        assert r.zrangebyscore('a', 2, 4, withscores=True) == \
            [(b'a2', 2.0), (b'a3', 3.0), (b'a4', 4.0)]

        # custom score function
        assert r.zrangebyscore('a', 2, 4, withscores=True,
                               score_cast_func=int) == \
            [(b'a2', 2), (b'a3', 3), (b'a4', 4)]

    def test_zrank(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2, 'a3': 3, 'a4': 4, 'a5': 5})
        assert r.zrank('a', 'a1') == 0
        assert r.zrank('a', 'a2') == 1
        assert r.zrank('a', 'a6') is None

    def test_zrem(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2, 'a3': 3})
        assert r.zrem('a', 'a2') == 1
        assert r.zrange('a', 0, -1) == [b'a1', b'a3']
        assert r.zrem('a', 'b') == 0
        assert r.zrange('a', 0, -1) == [b'a1', b'a3']

    def test_zrem_multiple_keys(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2, 'a3': 3})
        assert r.zrem('a', 'a1', 'a2') == 2
        assert r.zrange('a', 0, 5) == [b'a3']

    @skip_if_server_version_lt('2.8.9')
    def test_zremrangebylex(self, r):
        r.zadd('a', {'a': 0, 'b': 0, 'c': 0, 'd': 0, 'e': 0, 'f': 0, 'g': 0})
        assert r.zremrangebylex('a', '-', '[c') == 3
        assert r.zrange('a', 0, -1) == [b'd', b'e', b'f', b'g']
        assert r.zremrangebylex('a', '[f', '+') == 2
        assert r.zrange('a', 0, -1) == [b'd', b'e']
        assert r.zremrangebylex('a', '[h', '+') == 0
        assert r.zrange('a', 0, -1) == [b'd', b'e']

    def test_zremrangebyrank(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2, 'a3': 3, 'a4': 4, 'a5': 5})
        assert r.zremrangebyrank('a', 1, 3) == 3
        assert r.zrange('a', 0, 5) == [b'a1', b'a5']

    def test_zremrangebyscore(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2, 'a3': 3, 'a4': 4, 'a5': 5})
        assert r.zremrangebyscore('a', 2, 4) == 3
        assert r.zrange('a', 0, -1) == [b'a1', b'a5']
        assert r.zremrangebyscore('a', 2, 4) == 0
        assert r.zrange('a', 0, -1) == [b'a1', b'a5']
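    # Illustrative sketch (not part of the original suite): paging through
    # a score window with ZRANGEBYSCORE, mirroring the start/num slicing
    # tested above. The key name and page size are hypothetical.
    def _example_zrangebyscore_paging(self, r, page=0, page_size=10):
        return r.zrangebyscore('scores', '-inf', '+inf',
                               start=page * page_size, num=page_size,
                               withscores=True)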
    def test_zrevrange(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2, 'a3': 3})
        assert r.zrevrange('a', 0, 1) == [b'a3', b'a2']
        assert r.zrevrange('a', 1, 2) == [b'a2', b'a1']

        # withscores
        assert r.zrevrange('a', 0, 1, withscores=True) == \
            [(b'a3', 3.0), (b'a2', 2.0)]
        assert r.zrevrange('a', 1, 2, withscores=True) == \
            [(b'a2', 2.0), (b'a1', 1.0)]

        # custom score function
        assert r.zrevrange('a', 0, 1, withscores=True,
                           score_cast_func=int) == \
            [(b'a3', 3.0), (b'a2', 2.0)]

    def test_zrevrangebyscore(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2, 'a3': 3, 'a4': 4, 'a5': 5})
        assert r.zrevrangebyscore('a', 4, 2) == [b'a4', b'a3', b'a2']

        # slicing with start/num
        assert r.zrevrangebyscore('a', 4, 2, start=1, num=2) == \
            [b'a3', b'a2']

        # withscores
        assert r.zrevrangebyscore('a', 4, 2, withscores=True) == \
            [(b'a4', 4.0), (b'a3', 3.0), (b'a2', 2.0)]

        # custom score function
        assert r.zrevrangebyscore('a', 4, 2, withscores=True,
                                  score_cast_func=int) == \
            [(b'a4', 4), (b'a3', 3), (b'a2', 2)]

    def test_zrevrank(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2, 'a3': 3, 'a4': 4, 'a5': 5})
        assert r.zrevrank('a', 'a1') == 4
        assert r.zrevrank('a', 'a2') == 3
        assert r.zrevrank('a', 'a6') is None

    def test_zscore(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2, 'a3': 3})
        assert r.zscore('a', 'a1') == 1.0
        assert r.zscore('a', 'a2') == 2.0
        assert r.zscore('a', 'a4') is None

    def test_zunionstore_sum(self, r):
        r.zadd('a', {'a1': 1, 'a2': 1, 'a3': 1})
        r.zadd('b', {'a1': 2, 'a2': 2, 'a3': 2})
        r.zadd('c', {'a1': 6, 'a3': 5, 'a4': 4})
        assert r.zunionstore('d', ['a', 'b', 'c']) == 4
        assert r.zrange('d', 0, -1, withscores=True) == \
            [(b'a2', 3), (b'a4', 4), (b'a3', 8), (b'a1', 9)]

    def test_zunionstore_max(self, r):
        r.zadd('a', {'a1': 1, 'a2': 1, 'a3': 1})
        r.zadd('b', {'a1': 2, 'a2': 2, 'a3': 2})
        r.zadd('c', {'a1': 6, 'a3': 5, 'a4': 4})
        assert r.zunionstore('d', ['a', 'b', 'c'], aggregate='MAX') == 4
        assert r.zrange('d', 0, -1, withscores=True) == \
            [(b'a2', 2), (b'a4', 4), (b'a3', 5), (b'a1', 6)]

    def test_zunionstore_min(self, r):
        r.zadd('a', {'a1': 1, 'a2': 2, 'a3': 3})
        r.zadd('b', {'a1': 2, 'a2': 2, 'a3': 4})
        r.zadd('c', {'a1': 6, 'a3': 5, 'a4': 4})
        assert r.zunionstore('d', ['a', 'b', 'c'], aggregate='MIN') == 4
        assert r.zrange('d', 0, -1, withscores=True) == \
            [(b'a1', 1), (b'a2', 2), (b'a3', 3), (b'a4', 4)]

    def test_zunionstore_with_weight(self, r):
        r.zadd('a', {'a1': 1, 'a2': 1, 'a3': 1})
        r.zadd('b', {'a1': 2, 'a2': 2, 'a3': 2})
        r.zadd('c', {'a1': 6, 'a3': 5, 'a4': 4})
        assert r.zunionstore('d', {'a': 1, 'b': 2, 'c': 3}) == 4
        assert r.zrange('d', 0, -1, withscores=True) == \
            [(b'a2', 5), (b'a4', 12), (b'a3', 20), (b'a1', 23)]

    # HYPERLOGLOG TESTS
    @skip_if_server_version_lt('2.8.9')
    def test_pfadd(self, r):
        members = {b'1', b'2', b'3'}
        assert r.pfadd('a', *members) == 1
        assert r.pfadd('a', *members) == 0
        assert r.pfcount('a') == len(members)

    @skip_if_server_version_lt('2.8.9')
    def test_pfcount(self, r):
        members = {b'1', b'2', b'3'}
        r.pfadd('a', *members)
        assert r.pfcount('a') == len(members)
        members_b = {b'2', b'3', b'4'}
        r.pfadd('b', *members_b)
        assert r.pfcount('b') == len(members_b)
        assert r.pfcount('a', 'b') == len(members_b.union(members))

    @skip_if_server_version_lt('2.8.9')
    def test_pfmerge(self, r):
        mema = {b'1', b'2', b'3'}
        memb = {b'2', b'3', b'4'}
        memc = {b'5', b'6', b'7'}
        r.pfadd('a', *mema)
        r.pfadd('b', *memb)
        r.pfadd('c', *memc)
        r.pfmerge('d', 'c', 'a')
        assert r.pfcount('d') == 6
        r.pfmerge('d', 'b')
        assert r.pfcount('d') == 7
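    # Illustrative sketch (not part of the original suite): HyperLogLog as
    # an approximate unique-visitor counter, combining the PFADD/PFCOUNT/
    # PFMERGE commands tested above. Key and member names are made up.
    def _example_hll_unique_visitors(self, r):
        r.pfadd('visitors:mon', 'u1', 'u2', 'u3')
        r.pfadd('visitors:tue', 'u2', 'u4')
        r.pfmerge('visitors:week', 'visitors:mon', 'visitors:tue')
        return r.pfcount('visitors:week')  # ~4, within HLL's small error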
    # HASH COMMANDS
    def test_hget_and_hset(self, r):
        r.hset('a', mapping={'1': 1, '2': 2, '3': 3})
        assert r.hget('a', '1') == b'1'
        assert r.hget('a', '2') == b'2'
        assert r.hget('a', '3') == b'3'

        # field was updated, redis returns 0
        assert r.hset('a', '2', 5) == 0
        assert r.hget('a', '2') == b'5'

        # field is new, redis returns 1
        assert r.hset('a', '4', 4) == 1
        assert r.hget('a', '4') == b'4'

        # a key inside the hash that doesn't exist returns a null value
        assert r.hget('a', 'b') is None

        # keys with bool(key) == False
        assert r.hset('a', 0, 10) == 1
        assert r.hset('a', '', 10) == 1

    def test_hset_with_multi_key_values(self, r):
        r.hset('a', mapping={'1': 1, '2': 2, '3': 3})
        assert r.hget('a', '1') == b'1'
        assert r.hget('a', '2') == b'2'
        assert r.hget('a', '3') == b'3'

        r.hset('b', "foo", "bar", mapping={'1': 1, '2': 2})
        assert r.hget('b', '1') == b'1'
        assert r.hget('b', '2') == b'2'
        assert r.hget('b', 'foo') == b'bar'

    def test_hset_without_data(self, r):
        with pytest.raises(exceptions.DataError):
            r.hset("x")

    def test_hdel(self, r):
        r.hset('a', mapping={'1': 1, '2': 2, '3': 3})
        assert r.hdel('a', '2') == 1
        assert r.hget('a', '2') is None
        assert r.hdel('a', '1', '3') == 2
        assert r.hlen('a') == 0

    def test_hexists(self, r):
        r.hset('a', mapping={'1': 1, '2': 2, '3': 3})
        assert r.hexists('a', '1')
        assert not r.hexists('a', '4')

    def test_hgetall(self, r):
        h = {b'a1': b'1', b'a2': b'2', b'a3': b'3'}
        r.hset('a', mapping=h)
        assert r.hgetall('a') == h

    def test_hincrby(self, r):
        assert r.hincrby('a', '1') == 1
        assert r.hincrby('a', '1', amount=2) == 3
        assert r.hincrby('a', '1', amount=-2) == 1

    @skip_if_server_version_lt('2.6.0')
    def test_hincrbyfloat(self, r):
        assert r.hincrbyfloat('a', '1') == 1.0
        assert r.hincrbyfloat('a', '1') == 2.0
        assert r.hincrbyfloat('a', '1', 1.2) == 3.2

    def test_hkeys(self, r):
        h = {b'a1': b'1', b'a2': b'2', b'a3': b'3'}
        r.hset('a', mapping=h)
        local_keys = list(iterkeys(h))
        remote_keys = r.hkeys('a')
        assert (sorted(local_keys) == sorted(remote_keys))

    def test_hlen(self, r):
        r.hset('a', mapping={'1': 1, '2': 2, '3': 3})
        assert r.hlen('a') == 3

    def test_hmget(self, r):
        assert r.hset('a', mapping={'a': 1, 'b': 2, 'c': 3})
        assert r.hmget('a', 'a', 'b', 'c') == [b'1', b'2', b'3']
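    # Illustrative sketch (not part of the original suite): since HSET
    # accepts a mapping (and optional leading field/value pairs), a whole
    # object can be written in one round trip. Names are made up.
    def _example_hset_mapping(self, r):
        created = r.hset('user:1', mapping={'name': 'ada', 'age': 36})
        # the return value counts *new* fields, so rewriting returns 0
        assert created == 2
        return r.hgetall('user:1')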
    def test_hmset(self, r):
        warning_message = (r'^Redis\.hmset\(\) is deprecated\. '
                           r'Use Redis\.hset\(\) instead\.$')
        h = {b'a': b'1', b'b': b'2', b'c': b'3'}
        with pytest.warns(DeprecationWarning, match=warning_message):
            assert r.hmset('a', h)
        assert r.hgetall('a') == h

    def test_hsetnx(self, r):
        # Initially set the hash field
        assert r.hsetnx('a', '1', 1)
        assert r.hget('a', '1') == b'1'
        assert not r.hsetnx('a', '1', 2)
        assert r.hget('a', '1') == b'1'

    def test_hvals(self, r):
        h = {b'a1': b'1', b'a2': b'2', b'a3': b'3'}
        r.hset('a', mapping=h)
        local_vals = list(itervalues(h))
        remote_vals = r.hvals('a')
        assert sorted(local_vals) == sorted(remote_vals)

    @skip_if_server_version_lt('3.2.0')
    def test_hstrlen(self, r):
        r.hset('a', mapping={'1': '22', '2': '333'})
        assert r.hstrlen('a', '1') == 2
        assert r.hstrlen('a', '2') == 3

    # SORT
    def test_sort_basic(self, r):
        r.rpush('a', '3', '2', '1', '4')
        assert r.sort('a') == [b'1', b'2', b'3', b'4']

    def test_sort_limited(self, r):
        r.rpush('a', '3', '2', '1', '4')
        assert r.sort('a', start=1, num=2) == [b'2', b'3']

    def test_sort_by(self, r):
        r['score:1'] = 8
        r['score:2'] = 3
        r['score:3'] = 5
        r.rpush('a', '3', '2', '1')
        assert r.sort('a', by='score:*') == [b'2', b'3', b'1']

    def test_sort_get(self, r):
        r['user:1'] = 'u1'
        r['user:2'] = 'u2'
        r['user:3'] = 'u3'
        r.rpush('a', '2', '3', '1')
        assert r.sort('a', get='user:*') == [b'u1', b'u2', b'u3']

    def test_sort_get_multi(self, r):
        r['user:1'] = 'u1'
        r['user:2'] = 'u2'
        r['user:3'] = 'u3'
        r.rpush('a', '2', '3', '1')
        assert r.sort('a', get=('user:*', '#')) == \
            [b'u1', b'1', b'u2', b'2', b'u3', b'3']

    def test_sort_get_groups_two(self, r):
        r['user:1'] = 'u1'
        r['user:2'] = 'u2'
        r['user:3'] = 'u3'
        r.rpush('a', '2', '3', '1')
        assert r.sort('a', get=('user:*', '#'), groups=True) == \
            [(b'u1', b'1'), (b'u2', b'2'), (b'u3', b'3')]

    def test_sort_groups_string_get(self, r):
        r['user:1'] = 'u1'
        r['user:2'] = 'u2'
        r['user:3'] = 'u3'
        r.rpush('a', '2', '3', '1')
        with pytest.raises(exceptions.DataError):
            r.sort('a', get='user:*', groups=True)

    def test_sort_groups_just_one_get(self, r):
        r['user:1'] = 'u1'
        r['user:2'] = 'u2'
        r['user:3'] = 'u3'
        r.rpush('a', '2', '3', '1')
        with pytest.raises(exceptions.DataError):
            r.sort('a', get=['user:*'], groups=True)

    def test_sort_groups_no_get(self, r):
        r['user:1'] = 'u1'
        r['user:2'] = 'u2'
        r['user:3'] = 'u3'
        r.rpush('a', '2', '3', '1')
        with pytest.raises(exceptions.DataError):
            r.sort('a', groups=True)

    def test_sort_groups_three_gets(self, r):
        r['user:1'] = 'u1'
        r['user:2'] = 'u2'
        r['user:3'] = 'u3'
        r['door:1'] = 'd1'
        r['door:2'] = 'd2'
        r['door:3'] = 'd3'
        r.rpush('a', '2', '3', '1')
        assert r.sort('a', get=('user:*', 'door:*', '#'), groups=True) == \
            [
                (b'u1', b'd1', b'1'),
                (b'u2', b'd2', b'2'),
                (b'u3', b'd3', b'3')
            ]

    def test_sort_desc(self, r):
        r.rpush('a', '2', '3', '1')
        assert r.sort('a', desc=True) == [b'3', b'2', b'1']

    def test_sort_alpha(self, r):
        r.rpush('a', 'e', 'c', 'b', 'd', 'a')
        assert r.sort('a', alpha=True) == \
            [b'a', b'b', b'c', b'd', b'e']

    def test_sort_store(self, r):
        r.rpush('a', '2', '3', '1')
        assert r.sort('a', store='sorted_values') == 3
        assert r.lrange('sorted_values', 0, -1) == [b'1', b'2', b'3']
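    # Illustrative sketch (not part of the original suite): the BY/GET
    # external-key pattern the SORT tests above pin down. SORT orders list
    # elements by a pattern-derived key and can project other keys into
    # the result. The weight_* / name_* key scheme is hypothetical.
    def _example_sort_by_external_keys(self, r):
        r.rpush('items', '1', '2')
        r.mset({'weight_1': 20, 'weight_2': 10,
                'name_1': 'heavy', 'name_2': 'light'})
        return r.sort('items', by='weight_*', get='name_*')  # light first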
    def test_sort_all_options(self, r):
        r['user:1:username'] = 'zeus'
        r['user:2:username'] = 'titan'
        r['user:3:username'] = 'hermes'
        r['user:4:username'] = 'hercules'
        r['user:5:username'] = 'apollo'
        r['user:6:username'] = 'athena'
        r['user:7:username'] = 'hades'
        r['user:8:username'] = 'dionysus'

        r['user:1:favorite_drink'] = 'yuengling'
        r['user:2:favorite_drink'] = 'rum'
        r['user:3:favorite_drink'] = 'vodka'
        r['user:4:favorite_drink'] = 'milk'
        r['user:5:favorite_drink'] = 'pinot noir'
        r['user:6:favorite_drink'] = 'water'
        r['user:7:favorite_drink'] = 'gin'
        r['user:8:favorite_drink'] = 'apple juice'

        r.rpush('gods', '5', '8', '3', '1', '2', '7', '6', '4')
        num = r.sort('gods', start=2, num=4, by='user:*:username',
                     get='user:*:favorite_drink', desc=True, alpha=True,
                     store='sorted')
        assert num == 4
        assert r.lrange('sorted', 0, 10) == \
            [b'vodka', b'milk', b'gin', b'apple juice']

    def test_sort_issue_924(self, r):
        # Tests for issue https://github.com/andymccurdy/redis-py/issues/924
        r.execute_command('SADD', 'issue#924', 1)
        r.execute_command('SORT', 'issue#924')

    def test_cluster_addslots(self, mock_cluster_resp_ok):
        assert mock_cluster_resp_ok.cluster('ADDSLOTS', 1) is True

    def test_cluster_count_failure_reports(self, mock_cluster_resp_int):
        assert isinstance(mock_cluster_resp_int.cluster(
            'COUNT-FAILURE-REPORTS', 'node'), int)

    def test_cluster_countkeysinslot(self, mock_cluster_resp_int):
        assert isinstance(mock_cluster_resp_int.cluster(
            'COUNTKEYSINSLOT', 2), int)

    def test_cluster_delslots(self, mock_cluster_resp_ok):
        assert mock_cluster_resp_ok.cluster('DELSLOTS', 1) is True

    def test_cluster_failover(self, mock_cluster_resp_ok):
        assert mock_cluster_resp_ok.cluster('FAILOVER', 1) is True

    def test_cluster_forget(self, mock_cluster_resp_ok):
        assert mock_cluster_resp_ok.cluster('FORGET', 1) is True

    def test_cluster_info(self, mock_cluster_resp_info):
        assert isinstance(mock_cluster_resp_info.cluster('info'), dict)

    def test_cluster_keyslot(self, mock_cluster_resp_int):
        assert isinstance(mock_cluster_resp_int.cluster(
            'keyslot', 'asdf'), int)

    def test_cluster_meet(self, mock_cluster_resp_ok):
        assert mock_cluster_resp_ok.cluster('meet', 'ip', 'port', 1) is True

    def test_cluster_nodes(self, mock_cluster_resp_nodes):
        assert isinstance(mock_cluster_resp_nodes.cluster('nodes'), dict)

    def test_cluster_replicate(self, mock_cluster_resp_ok):
        assert mock_cluster_resp_ok.cluster('replicate', 'nodeid') is True

    def test_cluster_reset(self, mock_cluster_resp_ok):
        assert mock_cluster_resp_ok.cluster('reset', 'hard') is True

    def test_cluster_saveconfig(self, mock_cluster_resp_ok):
        assert mock_cluster_resp_ok.cluster('saveconfig') is True

    def test_cluster_setslot(self, mock_cluster_resp_ok):
        assert mock_cluster_resp_ok.cluster('setslot', 1,
                                            'IMPORTING', 'nodeid') is True

    def test_cluster_slaves(self, mock_cluster_resp_slaves):
        assert isinstance(mock_cluster_resp_slaves.cluster(
            'slaves', 'nodeid'), dict)

    @skip_if_server_version_lt('3.0.0')
    def test_readwrite(self, r):
        assert r.readwrite()

    @skip_if_server_version_lt('3.0.0')
    def test_readonly_invalid_cluster_state(self, r):
        with pytest.raises(exceptions.RedisError):
            r.readonly()

    @skip_if_server_version_lt('3.0.0')
    def test_readonly(self, mock_cluster_resp_ok):
        assert mock_cluster_resp_ok.readonly() is True

    # GEO COMMANDS
    @skip_if_server_version_lt('3.2.0')
    def test_geoadd(self, r):
        values = (2.1909389952632, 41.433791470673, 'place1') +\
                 (2.1873744593677, 41.406342043777, 'place2')

        assert r.geoadd('barcelona', *values) == 2
        assert r.zcard('barcelona') == 2

    @skip_if_server_version_lt('3.2.0')
    def test_geoadd_invalid_params(self, r):
        with pytest.raises(exceptions.RedisError):
            r.geoadd('barcelona', *(1, 2))

    @skip_if_server_version_lt('3.2.0')
    def test_geodist(self, r):
        values = (2.1909389952632, 41.433791470673, 'place1') +\
                 (2.1873744593677, 41.406342043777, 'place2')

        assert r.geoadd('barcelona', *values) == 2
        assert r.geodist('barcelona', 'place1', 'place2') == 3067.4157
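    # Illustrative sketch (not part of the original suite): the GEOADD /
    # GEODIST pair tested above in plain usage. GEOADD takes flat
    # (longitude, latitude, name) triples; the place data is made up.
    def _example_geo_distance(self, r):
        r.geoadd('cities', 13.361389, 38.115556, 'Palermo',
                 15.087269, 37.502669, 'Catania')
        return r.geodist('cities', 'Palermo', 'Catania', unit='km')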
    @skip_if_server_version_lt('3.2.0')
    def test_geodist_units(self, r):
        values = (2.1909389952632, 41.433791470673, 'place1') +\
                 (2.1873744593677, 41.406342043777, 'place2')

        r.geoadd('barcelona', *values)
        assert r.geodist('barcelona', 'place1', 'place2', 'km') == 3.0674

    @skip_if_server_version_lt('3.2.0')
    def test_geodist_missing_one_member(self, r):
        values = (2.1909389952632, 41.433791470673, 'place1')
        r.geoadd('barcelona', *values)
        assert r.geodist('barcelona', 'place1', 'missing_member', 'km') is None

    @skip_if_server_version_lt('3.2.0')
    def test_geodist_invalid_units(self, r):
        with pytest.raises(exceptions.RedisError):
            assert r.geodist('x', 'y', 'z', 'inches')

    @skip_if_server_version_lt('3.2.0')
    def test_geohash(self, r):
        values = (2.1909389952632, 41.433791470673, 'place1') +\
                 (2.1873744593677, 41.406342043777, 'place2')

        r.geoadd('barcelona', *values)
        assert r.geohash('barcelona', 'place1', 'place2', 'place3') ==\
            ['sp3e9yg3kd0', 'sp3e9cbc3t0', None]

    @skip_unless_arch_bits(64)
    @skip_if_server_version_lt('3.2.0')
    def test_geopos(self, r):
        values = (2.1909389952632, 41.433791470673, 'place1') +\
                 (2.1873744593677, 41.406342043777, 'place2')

        r.geoadd('barcelona', *values)
        # redis uses 52 bits precision, so small errors may be introduced
        assert r.geopos('barcelona', 'place1', 'place2') ==\
            [(2.19093829393386841, 41.43379028184083523),
             (2.18737632036209106, 41.40634178640635099)]

    @skip_if_server_version_lt('4.0.0')
    def test_geopos_no_value(self, r):
        assert r.geopos('barcelona', 'place1', 'place2') == [None, None]

    @skip_if_server_version_lt('3.2.0')
    @skip_if_server_version_gte('4.0.0')
    def test_old_geopos_no_value(self, r):
        assert r.geopos('barcelona', 'place1', 'place2') == []

    @skip_if_server_version_lt('3.2.0')
    def test_georadius(self, r):
        values = (2.1909389952632, 41.433791470673, 'place1') +\
                 (2.1873744593677, 41.406342043777, b'\x80place2')

        r.geoadd('barcelona', *values)
        assert r.georadius('barcelona', 2.191, 41.433, 1000) == [b'place1']
        assert r.georadius('barcelona', 2.187, 41.406, 1000) == [b'\x80place2']

    @skip_if_server_version_lt('3.2.0')
    def test_georadius_no_values(self, r):
        values = (2.1909389952632, 41.433791470673, 'place1') +\
                 (2.1873744593677, 41.406342043777, 'place2')

        r.geoadd('barcelona', *values)
        assert r.georadius('barcelona', 1, 2, 1000) == []

    @skip_if_server_version_lt('3.2.0')
    def test_georadius_units(self, r):
        values = (2.1909389952632, 41.433791470673, 'place1') +\
                 (2.1873744593677, 41.406342043777, 'place2')

        r.geoadd('barcelona', *values)
        assert r.georadius('barcelona', 2.191, 41.433, 1, unit='km') ==\
            [b'place1']

    @skip_unless_arch_bits(64)
    @skip_if_server_version_lt('3.2.0')
    def test_georadius_with(self, r):
        values = (2.1909389952632, 41.433791470673, 'place1') +\
                 (2.1873744593677, 41.406342043777, 'place2')

        r.geoadd('barcelona', *values)

        # test a bunch of combinations to test the parse response function
        assert r.georadius('barcelona', 2.191, 41.433, 1, unit='km',
                           withdist=True, withcoord=True, withhash=True) ==\
            [[b'place1', 0.0881, 3471609698139488,
              (2.19093829393386841, 41.43379028184083523)]]

        assert r.georadius('barcelona', 2.191, 41.433, 1, unit='km',
                           withdist=True, withcoord=True) ==\
            [[b'place1', 0.0881,
              (2.19093829393386841, 41.43379028184083523)]]

        assert r.georadius('barcelona', 2.191, 41.433, 1, unit='km',
                           withhash=True, withcoord=True) ==\
            [[b'place1', 3471609698139488,
              (2.19093829393386841, 41.43379028184083523)]]
        # test no values
        assert r.georadius('barcelona', 2, 1, 1, unit='km',
                           withdist=True, withcoord=True, withhash=True) == []

    @skip_if_server_version_lt('3.2.0')
    def test_georadius_count(self, r):
        values = (2.1909389952632, 41.433791470673, 'place1') +\
                 (2.1873744593677, 41.406342043777, 'place2')

        r.geoadd('barcelona', *values)
        assert r.georadius('barcelona', 2.191, 41.433, 3000, count=1) ==\
            [b'place1']

    @skip_if_server_version_lt('3.2.0')
    def test_georadius_sort(self, r):
        values = (2.1909389952632, 41.433791470673, 'place1') +\
                 (2.1873744593677, 41.406342043777, 'place2')

        r.geoadd('barcelona', *values)
        assert r.georadius('barcelona', 2.191, 41.433, 3000, sort='ASC') ==\
            [b'place1', b'place2']
        assert r.georadius('barcelona', 2.191, 41.433, 3000, sort='DESC') ==\
            [b'place2', b'place1']

    @skip_if_server_version_lt('3.2.0')
    def test_georadius_store(self, r):
        values = (2.1909389952632, 41.433791470673, 'place1') +\
                 (2.1873744593677, 41.406342043777, 'place2')

        r.geoadd('barcelona', *values)
        r.georadius('barcelona', 2.191, 41.433, 1000,
                    store='places_barcelona')
        assert r.zrange('places_barcelona', 0, -1) == [b'place1']

    @skip_unless_arch_bits(64)
    @skip_if_server_version_lt('3.2.0')
    def test_georadius_store_dist(self, r):
        values = (2.1909389952632, 41.433791470673, 'place1') +\
                 (2.1873744593677, 41.406342043777, 'place2')

        r.geoadd('barcelona', *values)
        r.georadius('barcelona', 2.191, 41.433, 1000,
                    store_dist='places_barcelona')
        # instead of saving the geo score, the distance is saved
        assert r.zscore('places_barcelona', 'place1') == 88.05060698409301

    @skip_unless_arch_bits(64)
    @skip_if_server_version_lt('3.2.0')
    def test_georadiusmember(self, r):
        values = (2.1909389952632, 41.433791470673, 'place1') +\
                 (2.1873744593677, 41.406342043777, b'\x80place2')

        r.geoadd('barcelona', *values)
        assert r.georadiusbymember('barcelona', 'place1', 4000) ==\
            [b'\x80place2', b'place1']
        assert r.georadiusbymember('barcelona', 'place1', 10) == [b'place1']

        assert r.georadiusbymember('barcelona', 'place1', 4000,
                                   withdist=True, withcoord=True,
                                   withhash=True) ==\
            [[b'\x80place2', 3067.4157, 3471609625421029,
              (2.187376320362091, 41.40634178640635)],
             [b'place1', 0.0, 3471609698139488,
              (2.1909382939338684, 41.433790281840835)]]

    @skip_if_server_version_lt('5.0.0')
    def test_xack(self, r):
        stream = 'stream'
        group = 'group'
        consumer = 'consumer'

        # xack on a stream that doesn't exist
        assert r.xack(stream, group, '0-0') == 0

        m1 = r.xadd(stream, {'one': 'one'})
        m2 = r.xadd(stream, {'two': 'two'})
        m3 = r.xadd(stream, {'three': 'three'})

        # xack on a group that doesn't exist
        assert r.xack(stream, group, m1) == 0

        r.xgroup_create(stream, group, 0)
        r.xreadgroup(group, consumer, streams={stream: '>'})
        # xack returns the number of ack'd elements
        assert r.xack(stream, group, m1) == 1
        assert r.xack(stream, group, m2, m3) == 2

    @skip_if_server_version_lt('5.0.0')
    def test_xadd(self, r):
        stream = 'stream'
        message_id = r.xadd(stream, {'foo': 'bar'})
        assert re.match(br'[0-9]+\-[0-9]+', message_id)

        # explicit message id
        message_id = b'9999999999999999999-0'
        assert message_id == r.xadd(stream, {'foo': 'bar'}, id=message_id)

        # with maxlen, the list evicts the first message
        r.xadd(stream, {'foo': 'bar'}, maxlen=2, approximate=False)
        assert r.xlen(stream) == 2
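    # Illustrative sketch (not part of the original suite): the basic
    # produce/consume loop over a stream, combining the XADD and XREAD
    # commands exercised in this suite. Stream name and payload are made up.
    def _example_stream_tail(self, r):
        last_id = 0
        r.xadd('events', {'kind': 'click'})
        # entries is [[stream_name, [(message_id, {field: value}), ...]]]
        entries = r.xread(streams={'events': last_id}, count=10)
        return entries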
    @skip_if_server_version_lt('5.0.0')
    def test_xclaim(self, r):
        stream = 'stream'
        group = 'group'
        consumer1 = 'consumer1'
        consumer2 = 'consumer2'

        message_id = r.xadd(stream, {'john': 'wick'})
        message = get_stream_message(r, stream, message_id)
        r.xgroup_create(stream, group, 0)

        # trying to claim a message that isn't already pending doesn't
        # do anything
        response = r.xclaim(stream, group, consumer2,
                            min_idle_time=0, message_ids=(message_id,))
        assert response == []

        # read the group as consumer1 to initially claim the messages
        r.xreadgroup(group, consumer1, streams={stream: '>'})

        # claim the message as consumer2
        response = r.xclaim(stream, group, consumer2,
                            min_idle_time=0, message_ids=(message_id,))
        assert response[0] == message

        # reclaim the message as consumer1, but use the justid argument
        # which only returns message ids
        assert r.xclaim(stream, group, consumer1,
                        min_idle_time=0, message_ids=(message_id,),
                        justid=True) == [message_id]

    @skip_if_server_version_lt('5.0.0')
    def test_xclaim_trimmed(self, r):
        # xclaim should not raise an exception if the item is not there
        stream = 'stream'
        group = 'group'

        r.xgroup_create(stream, group, id="$", mkstream=True)

        # add a couple of new items
        sid1 = r.xadd(stream, {"item": 0})
        sid2 = r.xadd(stream, {"item": 0})

        # read them from consumer1
        r.xreadgroup(group, 'consumer1', {stream: ">"})

        # add a 3rd and trim the stream down to 2 items
        r.xadd(stream, {"item": 3}, maxlen=2, approximate=False)

        # xclaim them from consumer2
        # the item that is still in the stream should be returned
        item = r.xclaim(stream, group, 'consumer2', 0, [sid1, sid2])
        assert len(item) == 2
        assert item[0] == (None, None)
        assert item[1][0] == sid2

    @skip_if_server_version_lt('5.0.0')
    def test_xdel(self, r):
        stream = 'stream'

        # deleting from an empty stream doesn't do anything
        assert r.xdel(stream, 1) == 0

        m1 = r.xadd(stream, {'foo': 'bar'})
        m2 = r.xadd(stream, {'foo': 'bar'})
        m3 = r.xadd(stream, {'foo': 'bar'})

        # xdel returns the number of deleted elements
        assert r.xdel(stream, m1) == 1
        assert r.xdel(stream, m2, m3) == 2

    @skip_if_server_version_lt('5.0.0')
    def test_xgroup_create(self, r):
        # tests xgroup_create and xinfo_groups
        stream = 'stream'
        group = 'group'
        r.xadd(stream, {'foo': 'bar'})

        # no group is setup yet, no info to obtain
        assert r.xinfo_groups(stream) == []

        assert r.xgroup_create(stream, group, 0)
        expected = [{
            'name': group.encode(),
            'consumers': 0,
            'pending': 0,
            'last-delivered-id': b'0-0'
        }]
        assert r.xinfo_groups(stream) == expected

    @skip_if_server_version_lt('5.0.0')
    def test_xgroup_create_mkstream(self, r):
        # tests xgroup_create and xinfo_groups
        stream = 'stream'
        group = 'group'

        # an error is raised if a group is created on a stream that
        # doesn't already exist
        with pytest.raises(exceptions.ResponseError):
            r.xgroup_create(stream, group, 0)

        # however, with mkstream=True, the underlying stream is created
        # automatically
        assert r.xgroup_create(stream, group, 0, mkstream=True)
        expected = [{
            'name': group.encode(),
            'consumers': 0,
            'pending': 0,
            'last-delivered-id': b'0-0'
        }]
        assert r.xinfo_groups(stream) == expected

    @skip_if_server_version_lt('5.0.0')
    def test_xgroup_delconsumer(self, r):
        stream = 'stream'
        group = 'group'
        consumer = 'consumer'
        r.xadd(stream, {'foo': 'bar'})
        r.xadd(stream, {'foo': 'bar'})
        r.xgroup_create(stream, group, 0)

        # a consumer that hasn't yet read any messages doesn't do anything
        assert r.xgroup_delconsumer(stream, group, consumer) == 0

        # read all messages from the group
        r.xreadgroup(group, consumer, streams={stream: '>'})

        # deleting the consumer should return 2 pending messages
        assert r.xgroup_delconsumer(stream, group, consumer) == 2

    @skip_if_server_version_lt('5.0.0')
    def test_xgroup_destroy(self, r):
        stream = 'stream'
        group = 'group'
        r.xadd(stream, {'foo': 'bar'})

        # destroying a nonexistent group returns False
        assert not r.xgroup_destroy(stream, group)
        r.xgroup_create(stream, group, 0)
        assert r.xgroup_destroy(stream, group)

    @skip_if_server_version_lt('5.0.0')
    def test_xgroup_setid(self, r):
        stream = 'stream'
        group = 'group'
        message_id = r.xadd(stream, {'foo': 'bar'})

        r.xgroup_create(stream, group, 0)
        # advance the last_delivered_id to the message_id
        r.xgroup_setid(stream, group, message_id)
        expected = [{
            'name': group.encode(),
            'consumers': 0,
            'pending': 0,
            'last-delivered-id': message_id
        }]
        assert r.xinfo_groups(stream) == expected

    @skip_if_server_version_lt('5.0.0')
    def test_xinfo_consumers(self, r):
        stream = 'stream'
        group = 'group'
        consumer1 = 'consumer1'
        consumer2 = 'consumer2'
        r.xadd(stream, {'foo': 'bar'})
        r.xadd(stream, {'foo': 'bar'})
        r.xadd(stream, {'foo': 'bar'})

        r.xgroup_create(stream, group, 0)
        r.xreadgroup(group, consumer1, streams={stream: '>'}, count=1)
        r.xreadgroup(group, consumer2, streams={stream: '>'})
        info = r.xinfo_consumers(stream, group)
        assert len(info) == 2
        expected = [
            {'name': consumer1.encode(), 'pending': 1},
            {'name': consumer2.encode(), 'pending': 2},
        ]

        # we can't determine the idle time, so just make sure it's an int
        assert isinstance(info[0].pop('idle'), (int, long))
        assert isinstance(info[1].pop('idle'), (int, long))
        assert info == expected

    @skip_if_server_version_lt('5.0.0')
    def test_xinfo_stream(self, r):
        stream = 'stream'
        m1 = r.xadd(stream, {'foo': 'bar'})
        m2 = r.xadd(stream, {'foo': 'bar'})
        info = r.xinfo_stream(stream)

        assert info['length'] == 2
        assert info['first-entry'] == get_stream_message(r, stream, m1)
        assert info['last-entry'] == get_stream_message(r, stream, m2)

    @skip_if_server_version_lt('5.0.0')
    def test_xlen(self, r):
        stream = 'stream'
        assert r.xlen(stream) == 0
        r.xadd(stream, {'foo': 'bar'})
        r.xadd(stream, {'foo': 'bar'})
        assert r.xlen(stream) == 2

    @skip_if_server_version_lt('5.0.0')
    def test_xpending(self, r):
        stream = 'stream'
        group = 'group'
        consumer1 = 'consumer1'
        consumer2 = 'consumer2'
        m1 = r.xadd(stream, {'foo': 'bar'})
        m2 = r.xadd(stream, {'foo': 'bar'})
        r.xgroup_create(stream, group, 0)

        # xpending on a group that has no consumers yet
        expected = {
            'pending': 0,
            'min': None,
            'max': None,
            'consumers': []
        }
        assert r.xpending(stream, group) == expected

        # read 1 message from the group with each consumer
        r.xreadgroup(group, consumer1, streams={stream: '>'}, count=1)
        r.xreadgroup(group, consumer2, streams={stream: '>'}, count=1)

        expected = {
            'pending': 2,
            'min': m1,
            'max': m2,
            'consumers': [
                {'name': consumer1.encode(), 'pending': 1},
                {'name': consumer2.encode(), 'pending': 1},
            ]
        }
        assert r.xpending(stream, group) == expected

    @skip_if_server_version_lt('5.0.0')
    def test_xpending_range(self, r):
        stream = 'stream'
        group = 'group'
        consumer1 = 'consumer1'
        consumer2 = 'consumer2'
        m1 = r.xadd(stream, {'foo': 'bar'})
        m2 = r.xadd(stream, {'foo': 'bar'})
        r.xgroup_create(stream, group, 0)

        # xpending range on a group that has no consumers yet
        assert r.xpending_range(stream, group, min='-', max='+', count=5) == []

        # read 1 message from the group with each consumer
        r.xreadgroup(group, consumer1, streams={stream: '>'}, count=1)
        r.xreadgroup(group, consumer2, streams={stream: '>'}, count=1)

        response = r.xpending_range(stream, group,
                                    min='-', max='+', count=5)
        assert len(response) == 2
        assert response[0]['message_id'] == m1
        assert response[0]['consumer'] == consumer1.encode()
        assert response[1]['message_id'] == m2
        assert response[1]['consumer'] == consumer2.encode()
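    # Illustrative sketch (not part of the original suite): the consumer
    # group lifecycle the XGROUP/XREADGROUP/XACK tests above walk through,
    # condensed into one worker loop. Names are hypothetical.
    def _example_consumer_group_worker(self, r):
        r.xadd('tasks', {'op': 'resize'})
        r.xgroup_create('tasks', 'workers', id='0', mkstream=True)
        for stream_name, messages in r.xreadgroup('workers', 'worker-1',
                                                  streams={'tasks': '>'}):
            for message_id, fields in messages:
                # ... process fields, then acknowledge the message so it
                # leaves the pending entries list (PEL) ...
                r.xack('tasks', 'workers', message_id)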
    @skip_if_server_version_lt('5.0.0')
    def test_xrange(self, r):
        stream = 'stream'
        m1 = r.xadd(stream, {'foo': 'bar'})
        m2 = r.xadd(stream, {'foo': 'bar'})
        m3 = r.xadd(stream, {'foo': 'bar'})
        m4 = r.xadd(stream, {'foo': 'bar'})

        def get_ids(results):
            return [result[0] for result in results]

        results = r.xrange(stream, min=m1)
        assert get_ids(results) == [m1, m2, m3, m4]

        results = r.xrange(stream, min=m2, max=m3)
        assert get_ids(results) == [m2, m3]

        results = r.xrange(stream, max=m3)
        assert get_ids(results) == [m1, m2, m3]

        results = r.xrange(stream, max=m2, count=1)
        assert get_ids(results) == [m1]

    @skip_if_server_version_lt('5.0.0')
    def test_xread(self, r):
        stream = 'stream'
        m1 = r.xadd(stream, {'foo': 'bar'})
        m2 = r.xadd(stream, {'bing': 'baz'})

        expected = [
            [
                stream.encode(),
                [
                    get_stream_message(r, stream, m1),
                    get_stream_message(r, stream, m2),
                ]
            ]
        ]
        # xread starting at 0 returns both messages
        assert r.xread(streams={stream: 0}) == expected

        expected = [
            [
                stream.encode(),
                [
                    get_stream_message(r, stream, m1),
                ]
            ]
        ]
        # xread starting at 0 and count=1 returns only the first message
        assert r.xread(streams={stream: 0}, count=1) == expected

        expected = [
            [
                stream.encode(),
                [
                    get_stream_message(r, stream, m2),
                ]
            ]
        ]
        # xread starting at m1 returns only the second message
        assert r.xread(streams={stream: m1}) == expected

        # xread starting at the last message returns an empty list
        assert r.xread(streams={stream: m2}) == []

    @skip_if_server_version_lt('5.0.0')
    def test_xreadgroup(self, r):
        stream = 'stream'
        group = 'group'
        consumer = 'consumer'
        m1 = r.xadd(stream, {'foo': 'bar'})
        m2 = r.xadd(stream, {'bing': 'baz'})
        r.xgroup_create(stream, group, 0)

        expected = [
            [
                stream.encode(),
                [
                    get_stream_message(r, stream, m1),
                    get_stream_message(r, stream, m2),
                ]
            ]
        ]
        # xread starting at 0 returns both messages
        assert r.xreadgroup(group, consumer, streams={stream: '>'}) == \
            expected

        r.xgroup_destroy(stream, group)
        r.xgroup_create(stream, group, 0)

        expected = [
            [
                stream.encode(),
                [
                    get_stream_message(r, stream, m1),
                ]
            ]
        ]
        # xread with count=1 returns only the first message
        assert r.xreadgroup(group, consumer,
                            streams={stream: '>'}, count=1) == expected

        r.xgroup_destroy(stream, group)

        # create the group using $ as the last id meaning subsequent reads
        # will only find messages added after this
        r.xgroup_create(stream, group, '$')

        expected = []
        # xread starting after the last message returns an empty message list
        assert r.xreadgroup(group, consumer, streams={stream: '>'}) == \
            expected

        # xreadgroup with noack does not have any items in the PEL
        r.xgroup_destroy(stream, group)
        r.xgroup_create(stream, group, '0')
        assert len(r.xreadgroup(group, consumer, streams={stream: '>'},
                                noack=True)[0][1]) == 2
        # now there should be nothing pending
        assert len(r.xreadgroup(group, consumer,
                                streams={stream: '0'})[0][1]) == 0

        r.xgroup_destroy(stream, group)
        r.xgroup_create(stream, group, '0')
        # delete all the messages in the stream
        expected = [
            [
                stream.encode(),
                [
                    (m1, {}),
                    (m2, {}),
                ]
            ]
        ]
        r.xreadgroup(group, consumer, streams={stream: '>'})
        r.xtrim(stream, 0)
        assert r.xreadgroup(group, consumer, streams={stream: '0'}) == \
            expected
    @skip_if_server_version_lt('5.0.0')
    def test_xrevrange(self, r):
        stream = 'stream'
        m1 = r.xadd(stream, {'foo': 'bar'})
        m2 = r.xadd(stream, {'foo': 'bar'})
        m3 = r.xadd(stream, {'foo': 'bar'})
        m4 = r.xadd(stream, {'foo': 'bar'})

        def get_ids(results):
            return [result[0] for result in results]

        results = r.xrevrange(stream, max=m4)
        assert get_ids(results) == [m4, m3, m2, m1]

        results = r.xrevrange(stream, max=m3, min=m2)
        assert get_ids(results) == [m3, m2]

        results = r.xrevrange(stream, min=m3)
        assert get_ids(results) == [m4, m3]

        results = r.xrevrange(stream, min=m2, count=1)
        assert get_ids(results) == [m4]

    @skip_if_server_version_lt('5.0.0')
    def test_xtrim(self, r):
        stream = 'stream'

        # trimming an empty key doesn't do anything
        assert r.xtrim(stream, 1000) == 0

        r.xadd(stream, {'foo': 'bar'})
        r.xadd(stream, {'foo': 'bar'})
        r.xadd(stream, {'foo': 'bar'})
        r.xadd(stream, {'foo': 'bar'})

        # trimming an amount larger than the number of messages
        # doesn't do anything
        assert r.xtrim(stream, 5, approximate=False) == 0

        # 1 message is trimmed
        assert r.xtrim(stream, 3, approximate=False) == 1

    def test_bitfield_operations(self, r):
        # comments show affected bits
        bf = r.bitfield('a')
        resp = (bf
                .set('u8', 8, 255)    # 00000000 11111111
                .get('u8', 0)         # 00000000
                .get('u4', 8)         # 1111
                .get('u4', 12)        # 1111
                .get('u4', 13)        # 111 0
                .execute())
        assert resp == [0, 0, 15, 15, 14]

        # .set() returns the previous value...
        resp = (bf
                .set('u8', 4, 1)      # 0000 0001
                .get('u16', 0)        # 00000000 00011111
                .set('u16', 0, 0)     # 00000000 00000000
                .execute())
        assert resp == [15, 31, 31]

        # incrby adds to the value
        resp = (bf
                .incrby('u8', 8, 254)  # 00000000 11111110
                .incrby('u8', 8, 1)    # 00000000 11111111
                .get('u16', 0)         # 00000000 11111111
                .execute())
        assert resp == [254, 255, 255]

        # Verify overflow protection works as a method:
        r.delete('a')
        resp = (bf
                .set('u8', 8, 254)    # 00000000 11111110
                .overflow('fail')
                .incrby('u8', 8, 2)   # incrby 2 would overflow, None returned
                .incrby('u8', 8, 1)   # 00000000 11111111
                .incrby('u8', 8, 1)   # incrby 1 would overflow, None returned
                .get('u16', 0)        # 00000000 11111111
                .execute())
        assert resp == [0, None, 255, None, 255]

        # Verify overflow protection works as arg to incrby:
        r.delete('a')
        resp = (bf
                .set('u8', 8, 255)           # 00000000 11111111
                .incrby('u8', 8, 1)          # 00000000 00000000 wrap default
                .set('u8', 8, 255)           # 00000000 11111111
                .incrby('u8', 8, 1, 'FAIL')  # 00000000 11111111 fail
                .incrby('u8', 8, 1)          # 00000000 11111111 still fail
                .get('u16', 0)               # 00000000 11111111
                .execute())
        assert resp == [0, 0, 0, None, None, 255]

        # test the default_overflow default
        r.delete('a')
        bf = r.bitfield('a', default_overflow='FAIL')
        resp = (bf
                .set('u8', 8, 255)    # 00000000 11111111
                .incrby('u8', 8, 1)   # 00000000 11111111 fail default
                .get('u16', 0)        # 00000000 11111111
                .execute())
        assert resp == [0, None, 255]
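    # Illustrative sketch (not part of the original suite): a BITFIELD
    # pipeline packing two unsigned counters into one key, echoing the
    # operations tested above. Key, offsets and widths are made up.
    def _example_bitfield_counters(self, r):
        bf = r.bitfield('stats', default_overflow='SAT')
        return (bf
                .incrby('u8', 0, 1)   # counter #1 in bits 0-7
                .incrby('u8', 8, 1)   # counter #2 in bits 8-15
                .get('u16', 0)        # both counters as one word
                .execute())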
    @skip_if_server_version_lt('4.0.0')
    def test_memory_stats(self, r):
        # put a key into the current db to make sure that "db.<current-db>"
        # has data
        r.set('foo', 'bar')
        stats = r.memory_stats()
        assert isinstance(stats, dict)
        for key, value in iteritems(stats):
            if key.startswith('db.'):
                assert isinstance(value, dict)

    @skip_if_server_version_lt('4.0.0')
    def test_memory_usage(self, r):
        r.set('foo', 'bar')
        assert isinstance(r.memory_usage('foo'), int)


class TestBinarySave(object):
    def test_binary_get_set(self, r):
        assert r.set(' foo bar ', '123')
        assert r.get(' foo bar ') == b'123'

        assert r.set(' foo\r\nbar\r\n ', '456')
        assert r.get(' foo\r\nbar\r\n ') == b'456'

        assert r.set(' \r\n\t\x07\x13 ', '789')
        assert r.get(' \r\n\t\x07\x13 ') == b'789'

        assert sorted(r.keys('*')) == \
            [b' \r\n\t\x07\x13 ', b' foo\r\nbar\r\n ', b' foo bar ']

        assert r.delete(' foo bar ')
        assert r.delete(' foo\r\nbar\r\n ')
        assert r.delete(' \r\n\t\x07\x13 ')

    def test_binary_lists(self, r):
        mapping = {
            b'foo bar': [b'1', b'2', b'3'],
            b'foo\r\nbar\r\n': [b'4', b'5', b'6'],
            b'foo\tbar\x07': [b'7', b'8', b'9'],
        }
        # fill in lists
        for key, value in iteritems(mapping):
            r.rpush(key, *value)

        # check that KEYS returns all the keys as they are
        assert sorted(r.keys('*')) == sorted(iterkeys(mapping))

        # check that it is possible to get list content by key name
        for key, value in iteritems(mapping):
            assert r.lrange(key, 0, -1) == value

    def test_22_info(self, r):
        """
        Older Redis versions contained 'allocation_stats' in INFO that
        was the cause of a number of bugs when parsing.
        """
        info = "allocation_stats:6=1,7=1,8=7141,9=180,10=92,11=116,12=5330," \
               "13=123,14=3091,15=11048,16=225842,17=1784,18=814,19=12020," \
               "20=2530,21=645,22=15113,23=8695,24=142860,25=318,26=3303," \
               "27=20561,28=54042,29=37390,30=1884,31=18071,32=31367,33=160," \
               "34=169,35=201,36=10155,37=1045,38=15078,39=22985,40=12523," \
               "41=15588,42=265,43=1287,44=142,45=382,46=945,47=426,48=171," \
               "49=56,50=516,51=43,52=41,53=46,54=54,55=75,56=647,57=332," \
               "58=32,59=39,60=48,61=35,62=62,63=32,64=221,65=26,66=30," \
               "67=36,68=41,69=44,70=26,71=144,72=169,73=24,74=37,75=25," \
               "76=42,77=21,78=126,79=374,80=27,81=40,82=43,83=47,84=46," \
               "85=114,86=34,87=37,88=7240,89=34,90=38,91=18,92=99,93=20," \
               "94=18,95=17,96=15,97=22,98=18,99=69,100=17,101=22,102=15," \
               "103=29,104=39,105=30,106=70,107=22,108=21,109=26,110=52," \
               "111=45,112=33,113=67,114=41,115=44,116=48,117=53,118=54," \
               "119=51,120=75,121=44,122=57,123=44,124=66,125=56,126=52," \
               "127=81,128=108,129=70,130=50,131=51,132=53,133=45,134=62," \
               "135=12,136=13,137=7,138=15,139=21,140=11,141=20,142=6,143=7," \
               "144=11,145=6,146=16,147=19,148=1112,149=1,151=83,154=1," \
               "155=1,156=1,157=1,160=1,161=1,162=2,166=1,169=1,170=1,171=2," \
               "172=1,174=1,176=2,177=9,178=34,179=73,180=30,181=1,185=3," \
               "187=1,188=1,189=1,192=1,196=1,198=1,200=1,201=1,204=1,205=1," \
               "207=1,208=1,209=1,214=2,215=31,216=78,217=28,218=5,219=2," \
               "220=1,222=1,225=1,227=1,234=1,242=1,250=1,252=1,253=1," \
               ">=256=203"
        parsed = parse_info(info)
        assert 'allocation_stats' in parsed
        assert '6' in parsed['allocation_stats']
        assert '>=256' in parsed['allocation_stats']

    def test_large_responses(self, r):
        "The PythonParser has some special cases for return values > 1MB"
        # load up 5MB of data into a key
        data = ''.join([ascii_letters] * (5000000 // len(ascii_letters)))
        r['a'] = data
        assert r['a'] == data.encode()
""" timestamp = 1349673917.939762 r.zadd('a', {'a1': timestamp}) assert r.zscore('a', 'a1') == timestamp redis-py-3.5.3/tests/test_connection.py000066400000000000000000000007351366526254200202100ustar00rootroot00000000000000import mock import pytest from redis.exceptions import InvalidResponse from redis.utils import HIREDIS_AVAILABLE @pytest.mark.skipif(HIREDIS_AVAILABLE, reason='PythonParser only') def test_invalid_response(r): raw = b'x' parser = r.connection._parser with mock.patch.object(parser._buffer, 'readline', return_value=raw): with pytest.raises(InvalidResponse) as cm: parser.read_response() assert str(cm.value) == 'Protocol Error: %r' % raw redis-py-3.5.3/tests/test_connection_pool.py000066400000000000000000000753611366526254200212500ustar00rootroot00000000000000import os import mock import pytest import re import redis import time from threading import Thread from redis.connection import ssl_available, to_bool from .conftest import skip_if_server_version_lt, _get_client, REDIS_6_VERSION from .test_pubsub import wait_for_message class DummyConnection(object): description_format = "DummyConnection<>" def __init__(self, **kwargs): self.kwargs = kwargs self.pid = os.getpid() def connect(self): pass def can_read(self): return False class TestConnectionPool(object): def get_pool(self, connection_kwargs=None, max_connections=None, connection_class=redis.Connection): connection_kwargs = connection_kwargs or {} pool = redis.ConnectionPool( connection_class=connection_class, max_connections=max_connections, **connection_kwargs) return pool def test_connection_creation(self): connection_kwargs = {'foo': 'bar', 'biz': 'baz'} pool = self.get_pool(connection_kwargs=connection_kwargs, connection_class=DummyConnection) connection = pool.get_connection('_') assert isinstance(connection, DummyConnection) assert connection.kwargs == connection_kwargs def test_multiple_connections(self): pool = self.get_pool() c1 = pool.get_connection('_') c2 = pool.get_connection('_') assert c1 != c2 def test_max_connections(self): pool = self.get_pool(max_connections=2) pool.get_connection('_') pool.get_connection('_') with pytest.raises(redis.ConnectionError): pool.get_connection('_') def test_reuse_previously_released_connection(self): pool = self.get_pool() c1 = pool.get_connection('_') pool.release(c1) c2 = pool.get_connection('_') assert c1 == c2 def test_repr_contains_db_info_tcp(self): connection_kwargs = { 'host': 'localhost', 'port': 6379, 'db': 1, 'client_name': 'test-client' } pool = self.get_pool(connection_kwargs=connection_kwargs, connection_class=redis.Connection) expected = ('ConnectionPool>') assert repr(pool) == expected def test_repr_contains_db_info_unix(self): connection_kwargs = { 'path': '/abc', 'db': 1, 'client_name': 'test-client' } pool = self.get_pool(connection_kwargs=connection_kwargs, connection_class=redis.UnixDomainSocketConnection) expected = ('ConnectionPool>') assert repr(pool) == expected class TestBlockingConnectionPool(object): def get_pool(self, connection_kwargs=None, max_connections=10, timeout=20): connection_kwargs = connection_kwargs or {} pool = redis.BlockingConnectionPool(connection_class=DummyConnection, max_connections=max_connections, timeout=timeout, **connection_kwargs) return pool def test_connection_creation(self): connection_kwargs = {'foo': 'bar', 'biz': 'baz'} pool = self.get_pool(connection_kwargs=connection_kwargs) connection = pool.get_connection('_') assert isinstance(connection, DummyConnection) assert connection.kwargs == connection_kwargs def 
    def test_multiple_connections(self):
        pool = self.get_pool()
        c1 = pool.get_connection('_')
        c2 = pool.get_connection('_')
        assert c1 != c2

    def test_connection_pool_blocks_until_timeout(self):
        "When out of connections, block for timeout seconds, then raise"
        pool = self.get_pool(max_connections=1, timeout=0.1)
        pool.get_connection('_')

        start = time.time()
        with pytest.raises(redis.ConnectionError):
            pool.get_connection('_')
        # we should have waited at least 0.1 seconds
        assert time.time() - start >= 0.1

    def test_connection_pool_blocks_until_another_connection_released(self):
        """
        When out of connections, block until another connection is released
        to the pool
        """
        pool = self.get_pool(max_connections=1, timeout=2)
        c1 = pool.get_connection('_')

        def target():
            time.sleep(0.1)
            pool.release(c1)

        Thread(target=target).start()
        start = time.time()
        pool.get_connection('_')
        assert time.time() - start >= 0.1

    def test_reuse_previously_released_connection(self):
        pool = self.get_pool()
        c1 = pool.get_connection('_')
        pool.release(c1)
        c2 = pool.get_connection('_')
        assert c1 == c2

    def test_repr_contains_db_info_tcp(self):
        pool = redis.ConnectionPool(
            host='localhost', port=6379, db=0, client_name='test-client'
        )
        expected = ('ConnectionPool<Connection<'
                    'host=localhost,port=6379,db=0,client_name=test-client>>')
        assert repr(pool) == expected

    def test_repr_contains_db_info_unix(self):
        pool = redis.ConnectionPool(
            connection_class=redis.UnixDomainSocketConnection,
            path='abc', db=0, client_name='test-client'
        )
        expected = ('ConnectionPool<UnixDomainSocketConnection<'
                    'path=abc,db=0,client_name=test-client>>')
        assert repr(pool) == expected


class TestConnectionPoolURLParsing(object):
    def test_defaults(self):
        pool = redis.ConnectionPool.from_url('redis://localhost')
        assert pool.connection_class == redis.Connection
        assert pool.connection_kwargs == {
            'host': 'localhost',
            'port': 6379,
            'db': 0,
            'username': None,
            'password': None,
        }

    def test_hostname(self):
        pool = redis.ConnectionPool.from_url('redis://myhost')
        assert pool.connection_class == redis.Connection
        assert pool.connection_kwargs == {
            'host': 'myhost',
            'port': 6379,
            'db': 0,
            'username': None,
            'password': None,
        }

    def test_quoted_hostname(self):
        pool = redis.ConnectionPool.from_url('redis://my %2F host %2B%3D+',
                                             decode_components=True)
        assert pool.connection_class == redis.Connection
        assert pool.connection_kwargs == {
            'host': 'my / host +=+',
            'port': 6379,
            'db': 0,
            'username': None,
            'password': None,
        }

    def test_port(self):
        pool = redis.ConnectionPool.from_url('redis://localhost:6380')
        assert pool.connection_class == redis.Connection
        assert pool.connection_kwargs == {
            'host': 'localhost',
            'port': 6380,
            'db': 0,
            'username': None,
            'password': None,
        }

    @skip_if_server_version_lt(REDIS_6_VERSION)
    def test_username(self):
        pool = redis.ConnectionPool.from_url('redis://myuser:@localhost')
        assert pool.connection_class == redis.Connection
        assert pool.connection_kwargs == {
            'host': 'localhost',
            'port': 6379,
            'db': 0,
            'username': 'myuser',
            'password': None,
        }

    @skip_if_server_version_lt(REDIS_6_VERSION)
    def test_quoted_username(self):
        pool = redis.ConnectionPool.from_url(
            'redis://%2Fmyuser%2F%2B name%3D%24+:@localhost',
            decode_components=True)
        assert pool.connection_class == redis.Connection
        assert pool.connection_kwargs == {
            'host': 'localhost',
            'port': 6379,
            'db': 0,
            'username': '/myuser/+ name=$+',
            'password': None,
        }

    def test_password(self):
        pool = redis.ConnectionPool.from_url('redis://:mypassword@localhost')
        assert pool.connection_class == redis.Connection
        assert pool.connection_kwargs == {
            'host': 'localhost',
            'port': 6379,
            'db': 0,
            'username': None,
            'password': 'mypassword',
        }
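    # Illustrative sketch (not part of the original suite): the URL grammar
    # these tests pin down, as it is used in application code. The password
    # and db index shown are hypothetical.
    def _example_from_url(self):
        pool = redis.ConnectionPool.from_url(
            'redis://:s3cret@localhost:6379/2?socket_timeout=5')
        return redis.Redis(connection_pool=pool)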
    def test_quoted_password(self):
        pool = redis.ConnectionPool.from_url(
            'redis://:%2Fmypass%2F%2B word%3D%24+@localhost',
            decode_components=True)
        assert pool.connection_class == redis.Connection
        assert pool.connection_kwargs == {
            'host': 'localhost',
            'port': 6379,
            'db': 0,
            'username': None,
            'password': '/mypass/+ word=$+',
        }

    @skip_if_server_version_lt(REDIS_6_VERSION)
    def test_username_and_password(self):
        pool = redis.ConnectionPool.from_url('redis://myuser:mypass@localhost')
        assert pool.connection_class == redis.Connection
        assert pool.connection_kwargs == {
            'host': 'localhost',
            'port': 6379,
            'db': 0,
            'username': 'myuser',
            'password': 'mypass',
        }

    def test_db_as_argument(self):
        pool = redis.ConnectionPool.from_url('redis://localhost', db='1')
        assert pool.connection_class == redis.Connection
        assert pool.connection_kwargs == {
            'host': 'localhost',
            'port': 6379,
            'db': 1,
            'username': None,
            'password': None,
        }

    def test_db_in_path(self):
        pool = redis.ConnectionPool.from_url('redis://localhost/2', db='1')
        assert pool.connection_class == redis.Connection
        assert pool.connection_kwargs == {
            'host': 'localhost',
            'port': 6379,
            'db': 2,
            'username': None,
            'password': None,
        }

    def test_db_in_querystring(self):
        pool = redis.ConnectionPool.from_url('redis://localhost/2?db=3',
                                             db='1')
        assert pool.connection_class == redis.Connection
        assert pool.connection_kwargs == {
            'host': 'localhost',
            'port': 6379,
            'db': 3,
            'username': None,
            'password': None,
        }

    def test_extra_typed_querystring_options(self):
        pool = redis.ConnectionPool.from_url(
            'redis://localhost/2?socket_timeout=20&socket_connect_timeout=10'
            '&socket_keepalive=&retry_on_timeout=Yes&max_connections=10'
        )

        assert pool.connection_class == redis.Connection
        assert pool.connection_kwargs == {
            'host': 'localhost',
            'port': 6379,
            'db': 2,
            'socket_timeout': 20.0,
            'socket_connect_timeout': 10.0,
            'retry_on_timeout': True,
            'username': None,
            'password': None,
        }
        assert pool.max_connections == 10

    def test_boolean_parsing(self):
        for expected, value in (
                (None, None),
                (None, ''),
                (False, 0), (False, '0'),
                (False, 'f'), (False, 'F'), (False, 'False'),
                (False, 'n'), (False, 'N'), (False, 'No'),
                (True, 1), (True, '1'),
                (True, 'y'), (True, 'Y'), (True, 'Yes'),
        ):
            assert expected is to_bool(value)

    def test_client_name_in_querystring(self):
        pool = redis.ConnectionPool.from_url(
            'redis://location?client_name=test-client'
        )
        assert pool.connection_kwargs['client_name'] == 'test-client'

    def test_invalid_extra_typed_querystring_options(self):
        import warnings
        with warnings.catch_warnings(record=True) as warning_log:
            redis.ConnectionPool.from_url(
                'redis://localhost/2?socket_timeout=_&'
                'socket_connect_timeout=abc'
            )
        # Compare the message values
        assert [
            str(m.message) for m in
            sorted(warning_log, key=lambda l: str(l.message))
        ] == [
            'Invalid value for `socket_connect_timeout` in connection URL.',
            'Invalid value for `socket_timeout` in connection URL.',
        ]

    def test_extra_querystring_options(self):
        pool = redis.ConnectionPool.from_url('redis://localhost?a=1&b=2')
        assert pool.connection_class == redis.Connection
        assert pool.connection_kwargs == {
            'host': 'localhost',
            'port': 6379,
            'db': 0,
            'username': None,
            'password': None,
            'a': '1',
            'b': '2'
        }

    def test_calling_from_subclass_returns_correct_instance(self):
        pool = redis.BlockingConnectionPool.from_url('redis://localhost')
        assert isinstance(pool, redis.BlockingConnectionPool)
    def test_client_creates_connection_pool(self):
        r = redis.Redis.from_url('redis://myhost')
        assert r.connection_pool.connection_class == redis.Connection
        assert r.connection_pool.connection_kwargs == {
            'host': 'myhost',
            'port': 6379,
            'db': 0,
            'username': None,
            'password': None,
        }

    def test_invalid_scheme_raises_error(self):
        with pytest.raises(ValueError) as cm:
            redis.ConnectionPool.from_url('localhost')
        assert str(cm.value) == (
            'Redis URL must specify one of the following schemes '
            '(redis://, rediss://, unix://)'
        )


class TestConnectionPoolUnixSocketURLParsing(object):
    def test_defaults(self):
        pool = redis.ConnectionPool.from_url('unix:///socket')
        assert pool.connection_class == redis.UnixDomainSocketConnection
        assert pool.connection_kwargs == {
            'path': '/socket',
            'db': 0,
            'username': None,
            'password': None,
        }

    @skip_if_server_version_lt(REDIS_6_VERSION)
    def test_username(self):
        pool = redis.ConnectionPool.from_url('unix://myuser:@/socket')
        assert pool.connection_class == redis.UnixDomainSocketConnection
        assert pool.connection_kwargs == {
            'path': '/socket',
            'db': 0,
            'username': 'myuser',
            'password': None,
        }

    @skip_if_server_version_lt(REDIS_6_VERSION)
    def test_quoted_username(self):
        pool = redis.ConnectionPool.from_url(
            'unix://%2Fmyuser%2F%2B name%3D%24+:@/socket',
            decode_components=True)
        assert pool.connection_class == redis.UnixDomainSocketConnection
        assert pool.connection_kwargs == {
            'path': '/socket',
            'db': 0,
            'username': '/myuser/+ name=$+',
            'password': None,
        }

    def test_password(self):
        pool = redis.ConnectionPool.from_url('unix://:mypassword@/socket')
        assert pool.connection_class == redis.UnixDomainSocketConnection
        assert pool.connection_kwargs == {
            'path': '/socket',
            'db': 0,
            'username': None,
            'password': 'mypassword',
        }

    def test_quoted_password(self):
        pool = redis.ConnectionPool.from_url(
            'unix://:%2Fmypass%2F%2B word%3D%24+@/socket',
            decode_components=True)
        assert pool.connection_class == redis.UnixDomainSocketConnection
        assert pool.connection_kwargs == {
            'path': '/socket',
            'db': 0,
            'username': None,
            'password': '/mypass/+ word=$+',
        }

    def test_quoted_path(self):
        pool = redis.ConnectionPool.from_url(
            'unix://:mypassword@/my%2Fpath%2Fto%2F..%2F+_%2B%3D%24ocket',
            decode_components=True)
        assert pool.connection_class == redis.UnixDomainSocketConnection
        assert pool.connection_kwargs == {
            'path': '/my/path/to/../+_+=$ocket',
            'db': 0,
            'username': None,
            'password': 'mypassword',
        }

    def test_db_as_argument(self):
        pool = redis.ConnectionPool.from_url('unix:///socket', db=1)
        assert pool.connection_class == redis.UnixDomainSocketConnection
        assert pool.connection_kwargs == {
            'path': '/socket',
            'db': 1,
            'username': None,
            'password': None,
        }

    def test_db_in_querystring(self):
        pool = redis.ConnectionPool.from_url('unix:///socket?db=2', db=1)
        assert pool.connection_class == redis.UnixDomainSocketConnection
        assert pool.connection_kwargs == {
            'path': '/socket',
            'db': 2,
            'username': None,
            'password': None,
        }

    def test_client_name_in_querystring(self):
        pool = redis.ConnectionPool.from_url(
            'redis://location?client_name=test-client'
        )
        assert pool.connection_kwargs['client_name'] == 'test-client'

    def test_extra_querystring_options(self):
        pool = redis.ConnectionPool.from_url('unix:///socket?a=1&b=2')
        assert pool.connection_class == redis.UnixDomainSocketConnection
        assert pool.connection_kwargs == {
            'path': '/socket',
            'db': 0,
            'username': None,
            'password': None,
            'a': '1',
            'b': '2'
        }
None, 'password': None, } @pytest.mark.skipif(not ssl_available, reason="SSL not installed") def test_cert_reqs_options(self): import ssl class DummyConnectionPool(redis.ConnectionPool): def get_connection(self, *args, **kwargs): return self.make_connection() pool = DummyConnectionPool.from_url( 'rediss://?ssl_cert_reqs=none') assert pool.get_connection('_').cert_reqs == ssl.CERT_NONE pool = DummyConnectionPool.from_url( 'rediss://?ssl_cert_reqs=optional') assert pool.get_connection('_').cert_reqs == ssl.CERT_OPTIONAL pool = DummyConnectionPool.from_url( 'rediss://?ssl_cert_reqs=required') assert pool.get_connection('_').cert_reqs == ssl.CERT_REQUIRED pool = DummyConnectionPool.from_url( 'rediss://?ssl_check_hostname=False') assert pool.get_connection('_').check_hostname is False pool = DummyConnectionPool.from_url( 'rediss://?ssl_check_hostname=True') assert pool.get_connection('_').check_hostname is True class TestConnection(object): def test_on_connect_error(self): """ An error in Connection.on_connect should disconnect from the server. See https://github.com/andymccurdy/redis-py/issues/368 for details. """ # this assumes the Redis server being tested against doesn't have # 9999 databases ;) bad_connection = redis.Redis(db=9999) # an error should be raised on connect with pytest.raises(redis.RedisError): bad_connection.info() pool = bad_connection.connection_pool assert len(pool._available_connections) == 1 assert not pool._available_connections[0]._sock @skip_if_server_version_lt('2.8.8') def test_busy_loading_disconnects_socket(self, r): """ If Redis raises a LOADING error, the connection should be disconnected and a BusyLoadingError raised """ with pytest.raises(redis.BusyLoadingError): r.execute_command('DEBUG', 'ERROR', 'LOADING fake message') assert not r.connection._sock @skip_if_server_version_lt('2.8.8') def test_busy_loading_from_pipeline_immediate_command(self, r): """ BusyLoadingErrors should raise from Pipelines that execute a command immediately, like WATCH does. """ pipe = r.pipeline() with pytest.raises(redis.BusyLoadingError): pipe.immediate_execute_command('DEBUG', 'ERROR', 'LOADING fake message') pool = r.connection_pool assert not pipe.connection assert len(pool._available_connections) == 1 assert not pool._available_connections[0]._sock @skip_if_server_version_lt('2.8.8') def test_busy_loading_from_pipeline(self, r): """ BusyLoadingErrors should be raised from a pipeline execution regardless of the raise_on_error flag. 
""" pipe = r.pipeline() pipe.execute_command('DEBUG', 'ERROR', 'LOADING fake message') with pytest.raises(redis.BusyLoadingError): pipe.execute() pool = r.connection_pool assert not pipe.connection assert len(pool._available_connections) == 1 assert not pool._available_connections[0]._sock @skip_if_server_version_lt('2.8.8') def test_read_only_error(self, r): "READONLY errors get turned in ReadOnlyError exceptions" with pytest.raises(redis.ReadOnlyError): r.execute_command('DEBUG', 'ERROR', 'READONLY blah blah') def test_connect_from_url_tcp(self): connection = redis.Redis.from_url('redis://localhost') pool = connection.connection_pool assert re.match('(.*)<(.*)<(.*)>>', repr(pool)).groups() == ( 'ConnectionPool', 'Connection', 'host=localhost,port=6379,db=0', ) def test_connect_from_url_unix(self): connection = redis.Redis.from_url('unix:///path/to/socket') pool = connection.connection_pool assert re.match('(.*)<(.*)<(.*)>>', repr(pool)).groups() == ( 'ConnectionPool', 'UnixDomainSocketConnection', 'path=/path/to/socket,db=0', ) def test_connect_no_auth_supplied_when_required(self, r): """ AuthenticationError should be raised when the server requires a password but one isn't supplied. """ with pytest.raises(redis.AuthenticationError): r.execute_command('DEBUG', 'ERROR', 'ERR Client sent AUTH, but no password is set') def test_connect_invalid_password_supplied(self, r): "AuthenticationError should be raised when sending the wrong password" with pytest.raises(redis.AuthenticationError): r.execute_command('DEBUG', 'ERROR', 'ERR invalid password') class TestMultiConnectionClient(object): @pytest.fixture() def r(self, request): return _get_client(redis.Redis, request, single_connection_client=False) def test_multi_connection_command(self, r): assert not r.connection assert r.set('a', '123') assert r.get('a') == b'123' class TestHealthCheck(object): interval = 60 @pytest.fixture() def r(self, request): return _get_client(redis.Redis, request, health_check_interval=self.interval) def assert_interval_advanced(self, connection): diff = connection.next_health_check - time.time() assert self.interval > diff > (self.interval - 1) def test_health_check_runs(self, r): r.connection.next_health_check = time.time() - 1 r.connection.check_health() self.assert_interval_advanced(r.connection) def test_arbitrary_command_invokes_health_check(self, r): # invoke a command to make sure the connection is entirely setup r.get('foo') r.connection.next_health_check = time.time() with mock.patch.object(r.connection, 'send_command', wraps=r.connection.send_command) as m: r.get('foo') m.assert_called_with('PING', check_health=False) self.assert_interval_advanced(r.connection) def test_arbitrary_command_advances_next_health_check(self, r): r.get('foo') next_health_check = r.connection.next_health_check r.get('foo') assert next_health_check < r.connection.next_health_check def test_health_check_not_invoked_within_interval(self, r): r.get('foo') with mock.patch.object(r.connection, 'send_command', wraps=r.connection.send_command) as m: r.get('foo') ping_call_spec = (('PING',), {'check_health': False}) assert ping_call_spec not in m.call_args_list def test_health_check_in_pipeline(self, r): with r.pipeline(transaction=False) as pipe: pipe.connection = pipe.connection_pool.get_connection('_') pipe.connection.next_health_check = 0 with mock.patch.object(pipe.connection, 'send_command', wraps=pipe.connection.send_command) as m: responses = pipe.set('foo', 'bar').get('foo').execute() m.assert_any_call('PING', 
check_health=False) assert responses == [True, b'bar'] def test_health_check_in_transaction(self, r): with r.pipeline(transaction=True) as pipe: pipe.connection = pipe.connection_pool.get_connection('_') pipe.connection.next_health_check = 0 with mock.patch.object(pipe.connection, 'send_command', wraps=pipe.connection.send_command) as m: responses = pipe.set('foo', 'bar').get('foo').execute() m.assert_any_call('PING', check_health=False) assert responses == [True, b'bar'] def test_health_check_in_watched_pipeline(self, r): r.set('foo', 'bar') with r.pipeline(transaction=False) as pipe: pipe.connection = pipe.connection_pool.get_connection('_') pipe.connection.next_health_check = 0 with mock.patch.object(pipe.connection, 'send_command', wraps=pipe.connection.send_command) as m: pipe.watch('foo') # the health check should be called when watching m.assert_called_with('PING', check_health=False) self.assert_interval_advanced(pipe.connection) assert pipe.get('foo') == b'bar' # reset the mock to clear the call list and schedule another # health check m.reset_mock() pipe.connection.next_health_check = 0 pipe.multi() responses = pipe.set('foo', 'not-bar').get('foo').execute() assert responses == [True, b'not-bar'] m.assert_any_call('PING', check_health=False) def test_health_check_in_pubsub_before_subscribe(self, r): "A health check happens before the first [p]subscribe" p = r.pubsub() p.connection = p.connection_pool.get_connection('_') p.connection.next_health_check = 0 with mock.patch.object(p.connection, 'send_command', wraps=p.connection.send_command) as m: assert not p.subscribed p.subscribe('foo') # the connection is not yet in pubsub mode, so the normal # ping/pong within connection.send_command should check # the health of the connection m.assert_any_call('PING', check_health=False) self.assert_interval_advanced(p.connection) subscribe_message = wait_for_message(p) assert subscribe_message['type'] == 'subscribe' def test_health_check_in_pubsub_after_subscribed(self, r): """ Pubsub can handle a new subscribe when it's time to check the connection health """ p = r.pubsub() p.connection = p.connection_pool.get_connection('_') p.connection.next_health_check = 0 with mock.patch.object(p.connection, 'send_command', wraps=p.connection.send_command) as m: p.subscribe('foo') subscribe_message = wait_for_message(p) assert subscribe_message['type'] == 'subscribe' self.assert_interval_advanced(p.connection) # because we weren't subscribed when sending the subscribe # message to 'foo', the connection's standard check_health ran # prior to subscribing. m.assert_any_call('PING', check_health=False) p.connection.next_health_check = 0 m.reset_mock() p.subscribe('bar') # the second subscribe issues exactly one command (the subscribe) # and the health check is not invoked m.assert_called_once_with('SUBSCRIBE', 'bar', check_health=False) # since no message has been read since the health check was # reset, it should still be 0 assert p.connection.next_health_check == 0 subscribe_message = wait_for_message(p) assert subscribe_message['type'] == 'subscribe' assert wait_for_message(p) is None # now that the connection is subscribed, the pubsub health # check should have taken over and include the HEALTH_CHECK_MESSAGE m.assert_any_call('PING', p.HEALTH_CHECK_MESSAGE, check_health=False) self.assert_interval_advanced(p.connection) def test_health_check_in_pubsub_poll(self, r): """ Polling a pubsub connection that's subscribed will regularly check the connection's health. 
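Each poll past the interval sends a PING carrying HEALTH_CHECK_MESSAGE, so the resulting pong is recognized as a health check rather than delivered as a pubsub message. 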
""" p = r.pubsub() p.connection = p.connection_pool.get_connection('_') with mock.patch.object(p.connection, 'send_command', wraps=p.connection.send_command) as m: p.subscribe('foo') subscribe_message = wait_for_message(p) assert subscribe_message['type'] == 'subscribe' self.assert_interval_advanced(p.connection) # polling the connection before the health check interval # doesn't result in another health check m.reset_mock() next_health_check = p.connection.next_health_check assert wait_for_message(p) is None assert p.connection.next_health_check == next_health_check m.assert_not_called() # reset the health check and poll again # we should not receive a pong message, but the next_health_check # should be advanced p.connection.next_health_check = 0 assert wait_for_message(p) is None m.assert_called_with('PING', p.HEALTH_CHECK_MESSAGE, check_health=False) self.assert_interval_advanced(p.connection) redis-py-3.5.3/tests/test_encoding.py000066400000000000000000000075551366526254200176460ustar00rootroot00000000000000from __future__ import unicode_literals import pytest import redis from redis._compat import unichr, unicode from redis.connection import Connection from .conftest import _get_client class TestEncoding(object): @pytest.fixture() def r(self, request): return _get_client(redis.Redis, request=request, decode_responses=True) @pytest.fixture() def r_no_decode(self, request): return _get_client( redis.Redis, request=request, decode_responses=False, ) def test_simple_encoding(self, r_no_decode): unicode_string = unichr(3456) + 'abcd' + unichr(3421) r_no_decode['unicode-string'] = unicode_string.encode('utf-8') cached_val = r_no_decode['unicode-string'] assert isinstance(cached_val, bytes) assert unicode_string == cached_val.decode('utf-8') def test_simple_encoding_and_decoding(self, r): unicode_string = unichr(3456) + 'abcd' + unichr(3421) r['unicode-string'] = unicode_string cached_val = r['unicode-string'] assert isinstance(cached_val, unicode) assert unicode_string == cached_val def test_memoryview_encoding(self, r_no_decode): unicode_string = unichr(3456) + 'abcd' + unichr(3421) unicode_string_view = memoryview(unicode_string.encode('utf-8')) r_no_decode['unicode-string-memoryview'] = unicode_string_view cached_val = r_no_decode['unicode-string-memoryview'] # The cached value won't be a memoryview because it's a copy from Redis assert isinstance(cached_val, bytes) assert unicode_string == cached_val.decode('utf-8') def test_memoryview_encoding_and_decoding(self, r): unicode_string = unichr(3456) + 'abcd' + unichr(3421) unicode_string_view = memoryview(unicode_string.encode('utf-8')) r['unicode-string-memoryview'] = unicode_string_view cached_val = r['unicode-string-memoryview'] assert isinstance(cached_val, unicode) assert unicode_string == cached_val def test_list_encoding(self, r): unicode_string = unichr(3456) + 'abcd' + unichr(3421) result = [unicode_string, unicode_string, unicode_string] r.rpush('a', *result) assert r.lrange('a', 0, -1) == result class TestEncodingErrors(object): def test_ignore(self, request): r = _get_client(redis.Redis, request=request, decode_responses=True, encoding_errors='ignore') r.set('a', b'foo\xff') assert r.get('a') == 'foo' def test_replace(self, request): r = _get_client(redis.Redis, request=request, decode_responses=True, encoding_errors='replace') r.set('a', b'foo\xff') assert r.get('a') == 'foo\ufffd' class TestMemoryviewsAreNotPacked(object): def test_memoryviews_are_not_packed(self): c = Connection() arg = memoryview(b'some_arg') arg_list = 
['SOME_COMMAND', arg] cmd = c.pack_command(*arg_list) assert cmd[1] is arg cmds = c.pack_commands([arg_list, arg_list]) assert cmds[1] is arg assert cmds[3] is arg class TestCommandsAreNotEncoded(object): @pytest.fixture() def r(self, request): return _get_client(redis.Redis, request=request, encoding='utf-16') def test_basic_command(self, r): r.set('hello', 'world') class TestInvalidUserInput(object): def test_boolean_fails(self, r): with pytest.raises(redis.DataError): r.set('a', True) def test_none_fails(self, r): with pytest.raises(redis.DataError): r.set('a', None) def test_user_type_fails(self, r): class Foo(object): def __str__(self): return 'Foo' def __unicode__(self): return 'Foo' with pytest.raises(redis.DataError): r.set('a', Foo()) redis-py-3.5.3/tests/test_lock.py000066400000000000000000000173251366526254200170040ustar00rootroot00000000000000import pytest import time from redis.exceptions import LockError, LockNotOwnedError from redis.client import Redis from redis.lock import Lock from .conftest import _get_client class TestLock(object): @pytest.fixture() def r_decoded(self, request): return _get_client(Redis, request=request, decode_responses=True) def get_lock(self, redis, *args, **kwargs): kwargs['lock_class'] = Lock return redis.lock(*args, **kwargs) def test_lock(self, r): lock = self.get_lock(r, 'foo') assert lock.acquire(blocking=False) assert r.get('foo') == lock.local.token assert r.ttl('foo') == -1 lock.release() assert r.get('foo') is None def test_lock_token(self, r): lock = self.get_lock(r, 'foo') self._test_lock_token(r, lock) def test_lock_token_thread_local_false(self, r): lock = self.get_lock(r, 'foo', thread_local=False) self._test_lock_token(r, lock) def _test_lock_token(self, r, lock): assert lock.acquire(blocking=False, token='test') assert r.get('foo') == b'test' assert lock.local.token == b'test' assert r.ttl('foo') == -1 lock.release() assert r.get('foo') is None assert lock.local.token is None def test_locked(self, r): lock = self.get_lock(r, 'foo') assert lock.locked() is False lock.acquire(blocking=False) assert lock.locked() is True lock.release() assert lock.locked() is False def _test_owned(self, client): lock = self.get_lock(client, 'foo') assert lock.owned() is False lock.acquire(blocking=False) assert lock.owned() is True lock.release() assert lock.owned() is False lock2 = self.get_lock(client, 'foo') assert lock.owned() is False assert lock2.owned() is False lock2.acquire(blocking=False) assert lock.owned() is False assert lock2.owned() is True lock2.release() assert lock.owned() is False assert lock2.owned() is False def test_owned(self, r): self._test_owned(r) def test_owned_with_decoded_responses(self, r_decoded): self._test_owned(r_decoded) def test_competing_locks(self, r): lock1 = self.get_lock(r, 'foo') lock2 = self.get_lock(r, 'foo') assert lock1.acquire(blocking=False) assert not lock2.acquire(blocking=False) lock1.release() assert lock2.acquire(blocking=False) assert not lock1.acquire(blocking=False) lock2.release() def test_timeout(self, r): lock = self.get_lock(r, 'foo', timeout=10) assert lock.acquire(blocking=False) assert 8 < r.ttl('foo') <= 10 lock.release() def test_float_timeout(self, r): lock = self.get_lock(r, 'foo', timeout=9.5) assert lock.acquire(blocking=False) assert 8 < r.pttl('foo') <= 9500 lock.release() def test_blocking_timeout(self, r): lock1 = self.get_lock(r, 'foo') assert lock1.acquire(blocking=False) bt = 0.2 sleep = 0.05 lock2 = self.get_lock(r, 'foo', sleep=sleep, blocking_timeout=bt) start = time.time() 
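# a second acquirer retries every `sleep` seconds; acquire() is expected # to give up early once another retry could no longer finish within # blocking_timeout, which the timing assertion below relies on 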
assert not lock2.acquire() # The elapsed duration should be less than the total blocking_timeout assert bt > (time.time() - start) > bt - sleep lock1.release() def test_context_manager(self, r): # blocking_timeout prevents a deadlock if the lock can't be acquired # for some reason with self.get_lock(r, 'foo', blocking_timeout=0.2) as lock: assert r.get('foo') == lock.local.token assert r.get('foo') is None def test_context_manager_raises_when_locked_not_acquired(self, r): r.set('foo', 'bar') with pytest.raises(LockError): with self.get_lock(r, 'foo', blocking_timeout=0.1): pass def test_high_sleep_small_blocking_timeout(self, r): lock1 = self.get_lock(r, 'foo') assert lock1.acquire(blocking=False) sleep = 60 bt = 1 lock2 = self.get_lock(r, 'foo', sleep=sleep, blocking_timeout=bt) start = time.time() assert not lock2.acquire() # the elapsed time is less than the blocking_timeout as the lock is # unattainable given the sleep/blocking_timeout configuration assert bt > (time.time() - start) lock1.release() def test_releasing_unlocked_lock_raises_error(self, r): lock = self.get_lock(r, 'foo') with pytest.raises(LockError): lock.release() def test_releasing_lock_no_longer_owned_raises_error(self, r): lock = self.get_lock(r, 'foo') lock.acquire(blocking=False) # manually change the token r.set('foo', 'a') with pytest.raises(LockNotOwnedError): lock.release() # even though we errored, the token is still cleared assert lock.local.token is None def test_extend_lock(self, r): lock = self.get_lock(r, 'foo', timeout=10) assert lock.acquire(blocking=False) assert 8000 < r.pttl('foo') <= 10000 assert lock.extend(10) assert 16000 < r.pttl('foo') <= 20000 lock.release() def test_extend_lock_replace_ttl(self, r): lock = self.get_lock(r, 'foo', timeout=10) assert lock.acquire(blocking=False) assert 8000 < r.pttl('foo') <= 10000 assert lock.extend(10, replace_ttl=True) assert 8000 < r.pttl('foo') <= 10000 lock.release() def test_extend_lock_float(self, r): lock = self.get_lock(r, 'foo', timeout=10.0) assert lock.acquire(blocking=False) assert 8000 < r.pttl('foo') <= 10000 assert lock.extend(10.0) assert 16000 < r.pttl('foo') <= 20000 lock.release() def test_extending_unlocked_lock_raises_error(self, r): lock = self.get_lock(r, 'foo', timeout=10) with pytest.raises(LockError): lock.extend(10) def test_extending_lock_with_no_timeout_raises_error(self, r): lock = self.get_lock(r, 'foo') assert lock.acquire(blocking=False) with pytest.raises(LockError): lock.extend(10) lock.release() def test_extending_lock_no_longer_owned_raises_error(self, r): lock = self.get_lock(r, 'foo', timeout=10) assert lock.acquire(blocking=False) r.set('foo', 'a') with pytest.raises(LockNotOwnedError): lock.extend(10) def test_reacquire_lock(self, r): lock = self.get_lock(r, 'foo', timeout=10) assert lock.acquire(blocking=False) assert r.pexpire('foo', 5000) assert r.pttl('foo') <= 5000 assert lock.reacquire() assert 8000 < r.pttl('foo') <= 10000 lock.release() def test_reacquiring_unlocked_lock_raises_error(self, r): lock = self.get_lock(r, 'foo', timeout=10) with pytest.raises(LockError): lock.reacquire() def test_reacquiring_lock_with_no_timeout_raises_error(self, r): lock = self.get_lock(r, 'foo') assert lock.acquire(blocking=False) with pytest.raises(LockError): lock.reacquire() lock.release() def test_reacquiring_lock_no_longer_owned_raises_error(self, r): lock = self.get_lock(r, 'foo', timeout=10) assert lock.acquire(blocking=False) r.set('foo', 'a') with pytest.raises(LockNotOwnedError): lock.reacquire() class 
TestLockClassSelection(object): def test_lock_class_argument(self, r): class MyLock(object): def __init__(self, *args, **kwargs): pass lock = r.lock('foo', lock_class=MyLock) assert type(lock) == MyLock redis-py-3.5.3/tests/test_monitor.py000066400000000000000000000034031366526254200175330ustar00rootroot00000000000000from __future__ import unicode_literals from redis._compat import unicode from .conftest import wait_for_command class TestMonitor(object): def test_wait_command_not_found(self, r): "Make sure the wait_for_command func works when command is not found" with r.monitor() as m: response = wait_for_command(r, m, 'nothing') assert response is None def test_response_values(self, r): with r.monitor() as m: r.ping() response = wait_for_command(r, m, 'PING') assert isinstance(response['time'], float) assert response['db'] == 9 assert response['client_type'] in ('tcp', 'unix') assert isinstance(response['client_address'], unicode) assert isinstance(response['client_port'], unicode) assert response['command'] == 'PING' def test_command_with_quoted_key(self, r): with r.monitor() as m: r.get('foo"bar') response = wait_for_command(r, m, 'GET foo"bar') assert response['command'] == 'GET foo"bar' def test_command_with_binary_data(self, r): with r.monitor() as m: byte_string = b'foo\x92' r.get(byte_string) response = wait_for_command(r, m, 'GET foo\\x92') assert response['command'] == 'GET foo\\x92' def test_lua_script(self, r): with r.monitor() as m: script = 'return redis.call("GET", "foo")' assert r.eval(script, 0) is None response = wait_for_command(r, m, 'GET foo') assert response['command'] == 'GET foo' assert response['client_type'] == 'lua' assert response['client_address'] == 'lua' assert response['client_port'] == '' redis-py-3.5.3/tests/test_multiprocessing.py000066400000000000000000000126461366526254200213040ustar00rootroot00000000000000import pytest import multiprocessing import contextlib import redis from redis.connection import Connection, ConnectionPool from redis.exceptions import ConnectionError from .conftest import _get_client @contextlib.contextmanager def exit_callback(callback, *args): try: yield finally: callback(*args) class TestMultiprocessing(object): # Test connection sharing between forks. # See issue #1085 for details. # use a multi-connection client as that's the only type that is # actually fork/process-safe @pytest.fixture() def r(self, request): return _get_client( redis.Redis, request=request, single_connection_client=False) def test_close_connection_in_child(self): """ A connection owned by a parent and closed by a child doesn't destroy the file descriptors so a parent can still use it. """ conn = Connection() conn.send_command('ping') assert conn.read_response() == b'PONG' def target(conn): conn.send_command('ping') assert conn.read_response() == b'PONG' conn.disconnect() proc = multiprocessing.Process(target=target, args=(conn,)) proc.start() proc.join(3) assert proc.exitcode == 0 # The connection was created in the parent but disconnected in the # child. The child called socket.close() but did not call # socket.shutdown() because it wasn't the "owning" process. # Therefore the connection still works in the parent. conn.send_command('ping') assert conn.read_response() == b'PONG' def test_close_connection_in_parent(self): """ A connection owned by a parent is unusable by a child if the parent (the owning process) closes the connection. 
""" conn = Connection() conn.send_command('ping') assert conn.read_response() == b'PONG' def target(conn, ev): ev.wait() # the parent closed the connection. because it also created the # connection, the connection is shutdown and the child # cannot use it. with pytest.raises(ConnectionError): conn.send_command('ping') ev = multiprocessing.Event() proc = multiprocessing.Process(target=target, args=(conn, ev)) proc.start() conn.disconnect() ev.set() proc.join(3) assert proc.exitcode == 0 @pytest.mark.parametrize('max_connections', [1, 2, None]) def test_pool(self, max_connections): """ A child will create its own connections when using a pool created by a parent. """ pool = ConnectionPool.from_url('redis://localhost', max_connections=max_connections) conn = pool.get_connection('ping') main_conn_pid = conn.pid with exit_callback(pool.release, conn): conn.send_command('ping') assert conn.read_response() == b'PONG' def target(pool): with exit_callback(pool.disconnect): conn = pool.get_connection('ping') assert conn.pid != main_conn_pid with exit_callback(pool.release, conn): assert conn.send_command('ping') is None assert conn.read_response() == b'PONG' proc = multiprocessing.Process(target=target, args=(pool,)) proc.start() proc.join(3) assert proc.exitcode == 0 # Check that connection is still alive after fork process has exited # and disconnected the connections in its pool conn = pool.get_connection('ping') with exit_callback(pool.release, conn): assert conn.send_command('ping') is None assert conn.read_response() == b'PONG' @pytest.mark.parametrize('max_connections', [1, 2, None]) def test_close_pool_in_main(self, max_connections): """ A child process that uses the same pool as its parent isn't affected when the parent disconnects all connections within the pool. 
""" pool = ConnectionPool.from_url('redis://localhost', max_connections=max_connections) conn = pool.get_connection('ping') assert conn.send_command('ping') is None assert conn.read_response() == b'PONG' def target(pool, disconnect_event): conn = pool.get_connection('ping') with exit_callback(pool.release, conn): assert conn.send_command('ping') is None assert conn.read_response() == b'PONG' disconnect_event.wait() assert conn.send_command('ping') is None assert conn.read_response() == b'PONG' ev = multiprocessing.Event() proc = multiprocessing.Process(target=target, args=(pool, ev)) proc.start() pool.disconnect() ev.set() proc.join(3) assert proc.exitcode == 0 def test_redis_client(self, r): "A redis client created in a parent can also be used in a child" assert r.ping() is True def target(client): assert client.ping() is True del client proc = multiprocessing.Process(target=target, args=(r,)) proc.start() proc.join(3) assert proc.exitcode == 0 assert r.ping() is True redis-py-3.5.3/tests/test_pipeline.py000066400000000000000000000267061366526254200176640ustar00rootroot00000000000000from __future__ import unicode_literals import pytest import redis from redis._compat import unichr, unicode from .conftest import wait_for_command class TestPipeline(object): def test_pipeline_is_true(self, r): "Ensure pipeline instances are not false-y" with r.pipeline() as pipe: assert pipe def test_pipeline(self, r): with r.pipeline() as pipe: (pipe.set('a', 'a1') .get('a') .zadd('z', {'z1': 1}) .zadd('z', {'z2': 4}) .zincrby('z', 1, 'z1') .zrange('z', 0, 5, withscores=True)) assert pipe.execute() == \ [ True, b'a1', True, True, 2.0, [(b'z1', 2.0), (b'z2', 4)], ] def test_pipeline_memoryview(self, r): with r.pipeline() as pipe: (pipe.set('a', memoryview(b'a1')) .get('a')) assert pipe.execute() == \ [ True, b'a1', ] def test_pipeline_length(self, r): with r.pipeline() as pipe: # Initially empty. assert len(pipe) == 0 # Fill 'er up! pipe.set('a', 'a1').set('b', 'b1').set('c', 'c1') assert len(pipe) == 3 # Execute calls reset(), so empty once again. 
pipe.execute() assert len(pipe) == 0 def test_pipeline_no_transaction(self, r): with r.pipeline(transaction=False) as pipe: pipe.set('a', 'a1').set('b', 'b1').set('c', 'c1') assert pipe.execute() == [True, True, True] assert r['a'] == b'a1' assert r['b'] == b'b1' assert r['c'] == b'c1' def test_pipeline_no_transaction_watch(self, r): r['a'] = 0 with r.pipeline(transaction=False) as pipe: pipe.watch('a') a = pipe.get('a') pipe.multi() pipe.set('a', int(a) + 1) assert pipe.execute() == [True] def test_pipeline_no_transaction_watch_failure(self, r): r['a'] = 0 with r.pipeline(transaction=False) as pipe: pipe.watch('a') a = pipe.get('a') r['a'] = 'bad' pipe.multi() pipe.set('a', int(a) + 1) with pytest.raises(redis.WatchError): pipe.execute() assert r['a'] == b'bad' def test_exec_error_in_response(self, r): """ an invalid pipeline command at exec time adds the exception instance to the list of returned values """ r['c'] = 'a' with r.pipeline() as pipe: pipe.set('a', 1).set('b', 2).lpush('c', 3).set('d', 4) result = pipe.execute(raise_on_error=False) assert result[0] assert r['a'] == b'1' assert result[1] assert r['b'] == b'2' # we can't lpush to a key that's a string value, so this should # be a ResponseError exception assert isinstance(result[2], redis.ResponseError) assert r['c'] == b'a' # since this isn't a transaction, the other commands after the # error are still executed assert result[3] assert r['d'] == b'4' # make sure the pipe was restored to a working state assert pipe.set('z', 'zzz').execute() == [True] assert r['z'] == b'zzz' def test_exec_error_raised(self, r): r['c'] = 'a' with r.pipeline() as pipe: pipe.set('a', 1).set('b', 2).lpush('c', 3).set('d', 4) with pytest.raises(redis.ResponseError) as ex: pipe.execute() assert unicode(ex.value).startswith('Command # 3 (LPUSH c 3) of ' 'pipeline caused error: ') # make sure the pipe was restored to a working state assert pipe.set('z', 'zzz').execute() == [True] assert r['z'] == b'zzz' def test_transaction_with_empty_error_command(self, r): """ Commands with custom EMPTY_ERROR functionality return their default values in the pipeline no matter the raise_on_error preference """ for error_switch in (True, False): with r.pipeline() as pipe: pipe.set('a', 1).mget([]).set('c', 3) result = pipe.execute(raise_on_error=error_switch) assert result[0] assert result[1] == [] assert result[2] def test_pipeline_with_empty_error_command(self, r): """ Commands with custom EMPTY_ERROR functionality return their default values in the pipeline no matter the raise_on_error preference """ for error_switch in (True, False): with r.pipeline(transaction=False) as pipe: pipe.set('a', 1).mget([]).set('c', 3) result = pipe.execute(raise_on_error=error_switch) assert result[0] assert result[1] == [] assert result[2] def test_parse_error_raised(self, r): with r.pipeline() as pipe: # the zrem is invalid because we don't pass any keys to it pipe.set('a', 1).zrem('b').set('b', 2) with pytest.raises(redis.ResponseError) as ex: pipe.execute() assert unicode(ex.value).startswith('Command # 2 (ZREM b) of ' 'pipeline caused error: ') # make sure the pipe was restored to a working state assert pipe.set('z', 'zzz').execute() == [True] assert r['z'] == b'zzz' def test_parse_error_raised_transaction(self, r): with r.pipeline() as pipe: pipe.multi() # the zrem is invalid because we don't pass any keys to it pipe.set('a', 1).zrem('b').set('b', 2) with pytest.raises(redis.ResponseError) as ex: pipe.execute() assert unicode(ex.value).startswith('Command # 2 (ZREM b) of ' 
'pipeline caused error: ') # make sure the pipe was restored to a working state assert pipe.set('z', 'zzz').execute() == [True] assert r['z'] == b'zzz' def test_watch_succeed(self, r): r['a'] = 1 r['b'] = 2 with r.pipeline() as pipe: pipe.watch('a', 'b') assert pipe.watching a_value = pipe.get('a') b_value = pipe.get('b') assert a_value == b'1' assert b_value == b'2' pipe.multi() pipe.set('c', 3) assert pipe.execute() == [True] assert not pipe.watching def test_watch_failure(self, r): r['a'] = 1 r['b'] = 2 with r.pipeline() as pipe: pipe.watch('a', 'b') r['b'] = 3 pipe.multi() pipe.get('a') with pytest.raises(redis.WatchError): pipe.execute() assert not pipe.watching def test_watch_failure_in_empty_transaction(self, r): r['a'] = 1 r['b'] = 2 with r.pipeline() as pipe: pipe.watch('a', 'b') r['b'] = 3 pipe.multi() with pytest.raises(redis.WatchError): pipe.execute() assert not pipe.watching def test_unwatch(self, r): r['a'] = 1 r['b'] = 2 with r.pipeline() as pipe: pipe.watch('a', 'b') r['b'] = 3 pipe.unwatch() assert not pipe.watching pipe.get('a') assert pipe.execute() == [b'1'] def test_watch_exec_no_unwatch(self, r): r['a'] = 1 r['b'] = 2 with r.monitor() as m: with r.pipeline() as pipe: pipe.watch('a', 'b') assert pipe.watching a_value = pipe.get('a') b_value = pipe.get('b') assert a_value == b'1' assert b_value == b'2' pipe.multi() pipe.set('c', 3) assert pipe.execute() == [True] assert not pipe.watching unwatch_command = wait_for_command(r, m, 'UNWATCH') assert unwatch_command is None, "should not send UNWATCH" def test_watch_reset_unwatch(self, r): r['a'] = 1 with r.monitor() as m: with r.pipeline() as pipe: pipe.watch('a') assert pipe.watching pipe.reset() assert not pipe.watching unwatch_command = wait_for_command(r, m, 'UNWATCH') assert unwatch_command is not None assert unwatch_command['command'] == 'UNWATCH' def test_transaction_callable(self, r): r['a'] = 1 r['b'] = 2 has_run = [] def my_transaction(pipe): a_value = pipe.get('a') assert a_value in (b'1', b'2') b_value = pipe.get('b') assert b_value == b'2' # silly run-once code... incr's "a" so WatchError should be raised # forcing this all to run again. 
this should incr "a" once to "2" if not has_run: r.incr('a') has_run.append('it has') pipe.multi() pipe.set('c', int(a_value) + int(b_value)) result = r.transaction(my_transaction, 'a', 'b') assert result == [True] assert r['c'] == b'4' def test_transaction_callable_returns_value_from_callable(self, r): def callback(pipe): # No need to do anything here since we only want the return value return 'a' res = r.transaction(callback, 'my-key', value_from_callable=True) assert res == 'a' def test_exec_error_in_no_transaction_pipeline(self, r): r['a'] = 1 with r.pipeline(transaction=False) as pipe: pipe.llen('a') pipe.expire('a', 100) with pytest.raises(redis.ResponseError) as ex: pipe.execute() assert unicode(ex.value).startswith('Command # 1 (LLEN a) of ' 'pipeline caused error: ') assert r['a'] == b'1' def test_exec_error_in_no_transaction_pipeline_unicode_command(self, r): key = unichr(3456) + 'abcd' + unichr(3421) r[key] = 1 with r.pipeline(transaction=False) as pipe: pipe.llen(key) pipe.expire(key, 100) with pytest.raises(redis.ResponseError) as ex: pipe.execute() expected = unicode('Command # 1 (LLEN %s) of pipeline caused ' 'error: ') % key assert unicode(ex.value).startswith(expected) assert r[key] == b'1' def test_pipeline_with_bitfield(self, r): with r.pipeline() as pipe: pipe.set('a', '1') bf = pipe.bitfield('b') pipe2 = (bf .set('u8', 8, 255) .get('u8', 0) .get('u4', 8) # 1111 .get('u4', 12) # 1111 .get('u4', 13) # 1110 .execute()) pipe.get('a') response = pipe.execute() assert pipe == pipe2 assert response == [True, [0, 0, 15, 15, 14], b'1'] redis-py-3.5.3/tests/test_pubsub.py000066400000000000000000000510221366526254200173440ustar00rootroot00000000000000from __future__ import unicode_literals import pytest import time import redis from redis.exceptions import ConnectionError from redis._compat import basestring, unichr from .conftest import _get_client from .conftest import skip_if_server_version_lt def wait_for_message(pubsub, timeout=0.1, ignore_subscribe_messages=False): now = time.time() timeout = now + timeout while now < timeout: message = pubsub.get_message( ignore_subscribe_messages=ignore_subscribe_messages) if message is not None: return message time.sleep(0.01) now = time.time() return None def make_message(type, channel, data, pattern=None): return { 'type': type, 'pattern': pattern and pattern.encode('utf-8') or None, 'channel': channel and channel.encode('utf-8') or None, 'data': data.encode('utf-8') if isinstance(data, basestring) else data } def make_subscribe_test_data(pubsub, type): if type == 'channel': return { 'p': pubsub, 'sub_type': 'subscribe', 'unsub_type': 'unsubscribe', 'sub_func': pubsub.subscribe, 'unsub_func': pubsub.unsubscribe, 'keys': ['foo', 'bar', 'uni' + unichr(4456) + 'code'] } elif type == 'pattern': return { 'p': pubsub, 'sub_type': 'psubscribe', 'unsub_type': 'punsubscribe', 'sub_func': pubsub.psubscribe, 'unsub_func': pubsub.punsubscribe, 'keys': ['f*', 'b*', 'uni' + unichr(4456) + '*'] } assert False, 'invalid subscribe type: %s' % type class TestPubSubSubscribeUnsubscribe(object): def _test_subscribe_unsubscribe(self, p, sub_type, unsub_type, sub_func, unsub_func, keys): for key in keys: assert sub_func(key) is None # should be a message for each channel/pattern we just subscribed to for i, key in enumerate(keys): assert wait_for_message(p) == make_message(sub_type, key, i + 1) for key in keys: assert unsub_func(key) is None # should be a message for each channel/pattern we just unsubscribed # from for i, key in enumerate(keys): i = 
len(keys) - 1 - i assert wait_for_message(p) == make_message(unsub_type, key, i) def test_channel_subscribe_unsubscribe(self, r): kwargs = make_subscribe_test_data(r.pubsub(), 'channel') self._test_subscribe_unsubscribe(**kwargs) def test_pattern_subscribe_unsubscribe(self, r): kwargs = make_subscribe_test_data(r.pubsub(), 'pattern') self._test_subscribe_unsubscribe(**kwargs) def _test_resubscribe_on_reconnection(self, p, sub_type, unsub_type, sub_func, unsub_func, keys): for key in keys: assert sub_func(key) is None # should be a message for each channel/pattern we just subscribed to for i, key in enumerate(keys): assert wait_for_message(p) == make_message(sub_type, key, i + 1) # manually disconnect p.connection.disconnect() # calling get_message again reconnects and resubscribes # note, we may not re-subscribe to channels in exactly the same order # so we have to do some extra checks to make sure we got them all messages = [] for i in range(len(keys)): messages.append(wait_for_message(p)) unique_channels = set() assert len(messages) == len(keys) for i, message in enumerate(messages): assert message['type'] == sub_type assert message['data'] == i + 1 assert isinstance(message['channel'], bytes) channel = message['channel'].decode('utf-8') unique_channels.add(channel) assert len(unique_channels) == len(keys) for channel in unique_channels: assert channel in keys def test_resubscribe_to_channels_on_reconnection(self, r): kwargs = make_subscribe_test_data(r.pubsub(), 'channel') self._test_resubscribe_on_reconnection(**kwargs) def test_resubscribe_to_patterns_on_reconnection(self, r): kwargs = make_subscribe_test_data(r.pubsub(), 'pattern') self._test_resubscribe_on_reconnection(**kwargs) def _test_subscribed_property(self, p, sub_type, unsub_type, sub_func, unsub_func, keys): assert p.subscribed is False sub_func(keys[0]) # we're now subscribed even though we haven't processed the # reply from the server just yet assert p.subscribed is True assert wait_for_message(p) == make_message(sub_type, keys[0], 1) # we're still subscribed assert p.subscribed is True # unsubscribe from all channels unsub_func() # we're still technically subscribed until we process the # response messages from the server assert p.subscribed is True assert wait_for_message(p) == make_message(unsub_type, keys[0], 0) # now we're no longer subscribed as no more messages can be delivered # to any channels we were listening to assert p.subscribed is False # subscribing again flips the flag back sub_func(keys[0]) assert p.subscribed is True assert wait_for_message(p) == make_message(sub_type, keys[0], 1) # unsubscribe again unsub_func() assert p.subscribed is True # subscribe to another channel before reading the unsubscribe response sub_func(keys[1]) assert p.subscribed is True # read the unsubscribe for key1 assert wait_for_message(p) == make_message(unsub_type, keys[0], 0) # we're still subscribed to key2, so subscribed should still be True assert p.subscribed is True # read the key2 subscribe message assert wait_for_message(p) == make_message(sub_type, keys[1], 1) unsub_func() # haven't read the message yet, so we're still subscribed assert p.subscribed is True assert wait_for_message(p) == make_message(unsub_type, keys[1], 0) # now we're finally unsubscribed assert p.subscribed is False def test_subscribe_property_with_channels(self, r): kwargs = make_subscribe_test_data(r.pubsub(), 'channel') self._test_subscribed_property(**kwargs) def test_subscribe_property_with_patterns(self, r): kwargs = 
make_subscribe_test_data(r.pubsub(), 'pattern') self._test_subscribed_property(**kwargs) def test_ignore_all_subscribe_messages(self, r): p = r.pubsub(ignore_subscribe_messages=True) checks = ( (p.subscribe, 'foo'), (p.unsubscribe, 'foo'), (p.psubscribe, 'f*'), (p.punsubscribe, 'f*'), ) assert p.subscribed is False for func, channel in checks: assert func(channel) is None assert p.subscribed is True assert wait_for_message(p) is None assert p.subscribed is False def test_ignore_individual_subscribe_messages(self, r): p = r.pubsub() checks = ( (p.subscribe, 'foo'), (p.unsubscribe, 'foo'), (p.psubscribe, 'f*'), (p.punsubscribe, 'f*'), ) assert p.subscribed is False for func, channel in checks: assert func(channel) is None assert p.subscribed is True message = wait_for_message(p, ignore_subscribe_messages=True) assert message is None assert p.subscribed is False def test_sub_unsub_resub_channels(self, r): kwargs = make_subscribe_test_data(r.pubsub(), 'channel') self._test_sub_unsub_resub(**kwargs) def test_sub_unsub_resub_patterns(self, r): kwargs = make_subscribe_test_data(r.pubsub(), 'pattern') self._test_sub_unsub_resub(**kwargs) def _test_sub_unsub_resub(self, p, sub_type, unsub_type, sub_func, unsub_func, keys): # https://github.com/andymccurdy/redis-py/issues/764 key = keys[0] sub_func(key) unsub_func(key) sub_func(key) assert p.subscribed is True assert wait_for_message(p) == make_message(sub_type, key, 1) assert wait_for_message(p) == make_message(unsub_type, key, 0) assert wait_for_message(p) == make_message(sub_type, key, 1) assert p.subscribed is True def test_sub_unsub_all_resub_channels(self, r): kwargs = make_subscribe_test_data(r.pubsub(), 'channel') self._test_sub_unsub_all_resub(**kwargs) def test_sub_unsub_all_resub_patterns(self, r): kwargs = make_subscribe_test_data(r.pubsub(), 'pattern') self._test_sub_unsub_all_resub(**kwargs) def _test_sub_unsub_all_resub(self, p, sub_type, unsub_type, sub_func, unsub_func, keys): # https://github.com/andymccurdy/redis-py/issues/764 key = keys[0] sub_func(key) unsub_func() sub_func(key) assert p.subscribed is True assert wait_for_message(p) == make_message(sub_type, key, 1) assert wait_for_message(p) == make_message(unsub_type, key, 0) assert wait_for_message(p) == make_message(sub_type, key, 1) assert p.subscribed is True class TestPubSubMessages(object): def setup_method(self, method): self.message = None def message_handler(self, message): self.message = message def test_published_message_to_channel(self, r): p = r.pubsub() p.subscribe('foo') assert wait_for_message(p) == make_message('subscribe', 'foo', 1) assert r.publish('foo', 'test message') == 1 message = wait_for_message(p) assert isinstance(message, dict) assert message == make_message('message', 'foo', 'test message') def test_published_message_to_pattern(self, r): p = r.pubsub() p.subscribe('foo') p.psubscribe('f*') assert wait_for_message(p) == make_message('subscribe', 'foo', 1) assert wait_for_message(p) == make_message('psubscribe', 'f*', 2) # 1 to pattern, 1 to channel assert r.publish('foo', 'test message') == 2 message1 = wait_for_message(p) message2 = wait_for_message(p) assert isinstance(message1, dict) assert isinstance(message2, dict) expected = [ make_message('message', 'foo', 'test message'), make_message('pmessage', 'foo', 'test message', pattern='f*') ] assert message1 in expected assert message2 in expected assert message1 != message2 def test_channel_message_handler(self, r): p = r.pubsub(ignore_subscribe_messages=True) p.subscribe(foo=self.message_handler) 
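# messages routed to a registered handler are consumed by the handler; # get_message() returns None for them, so the payload is only observable # through self.message 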
assert wait_for_message(p) is None assert r.publish('foo', 'test message') == 1 assert wait_for_message(p) is None assert self.message == make_message('message', 'foo', 'test message') def test_pattern_message_handler(self, r): p = r.pubsub(ignore_subscribe_messages=True) p.psubscribe(**{'f*': self.message_handler}) assert wait_for_message(p) is None assert r.publish('foo', 'test message') == 1 assert wait_for_message(p) is None assert self.message == make_message('pmessage', 'foo', 'test message', pattern='f*') def test_unicode_channel_message_handler(self, r): p = r.pubsub(ignore_subscribe_messages=True) channel = 'uni' + unichr(4456) + 'code' channels = {channel: self.message_handler} p.subscribe(**channels) assert wait_for_message(p) is None assert r.publish(channel, 'test message') == 1 assert wait_for_message(p) is None assert self.message == make_message('message', channel, 'test message') def test_unicode_pattern_message_handler(self, r): p = r.pubsub(ignore_subscribe_messages=True) pattern = 'uni' + unichr(4456) + '*' channel = 'uni' + unichr(4456) + 'code' p.psubscribe(**{pattern: self.message_handler}) assert wait_for_message(p) is None assert r.publish(channel, 'test message') == 1 assert wait_for_message(p) is None assert self.message == make_message('pmessage', channel, 'test message', pattern=pattern) def test_get_message_without_subscribe(self, r): p = r.pubsub() with pytest.raises(RuntimeError) as info: p.get_message() expect = ('connection not set: ' 'did you forget to call subscribe() or psubscribe()?') assert expect in info.exconly() class TestPubSubAutoDecoding(object): "These tests only validate that we get unicode values back" channel = 'uni' + unichr(4456) + 'code' pattern = 'uni' + unichr(4456) + '*' data = 'abc' + unichr(4458) + '123' def make_message(self, type, channel, data, pattern=None): return { 'type': type, 'channel': channel, 'pattern': pattern, 'data': data } def setup_method(self, method): self.message = None def message_handler(self, message): self.message = message @pytest.fixture() def r(self, request): return _get_client(redis.Redis, request=request, decode_responses=True) def test_channel_subscribe_unsubscribe(self, r): p = r.pubsub() p.subscribe(self.channel) assert wait_for_message(p) == self.make_message('subscribe', self.channel, 1) p.unsubscribe(self.channel) assert wait_for_message(p) == self.make_message('unsubscribe', self.channel, 0) def test_pattern_subscribe_unsubscribe(self, r): p = r.pubsub() p.psubscribe(self.pattern) assert wait_for_message(p) == self.make_message('psubscribe', self.pattern, 1) p.punsubscribe(self.pattern) assert wait_for_message(p) == self.make_message('punsubscribe', self.pattern, 0) def test_channel_publish(self, r): p = r.pubsub() p.subscribe(self.channel) assert wait_for_message(p) == self.make_message('subscribe', self.channel, 1) r.publish(self.channel, self.data) assert wait_for_message(p) == self.make_message('message', self.channel, self.data) def test_pattern_publish(self, r): p = r.pubsub() p.psubscribe(self.pattern) assert wait_for_message(p) == self.make_message('psubscribe', self.pattern, 1) r.publish(self.channel, self.data) assert wait_for_message(p) == self.make_message('pmessage', self.channel, self.data, pattern=self.pattern) def test_channel_message_handler(self, r): p = r.pubsub(ignore_subscribe_messages=True) p.subscribe(**{self.channel: self.message_handler}) assert wait_for_message(p) is None r.publish(self.channel, self.data) assert wait_for_message(p) is None assert self.message == 
self.make_message('message', self.channel, self.data) # test that we reconnected to the correct channel self.message = None p.connection.disconnect() assert wait_for_message(p) is None # should reconnect new_data = self.data + 'new data' r.publish(self.channel, new_data) assert wait_for_message(p) is None assert self.message == self.make_message('message', self.channel, new_data) def test_pattern_message_handler(self, r): p = r.pubsub(ignore_subscribe_messages=True) p.psubscribe(**{self.pattern: self.message_handler}) assert wait_for_message(p) is None r.publish(self.channel, self.data) assert wait_for_message(p) is None assert self.message == self.make_message('pmessage', self.channel, self.data, pattern=self.pattern) # test that we reconnected to the correct pattern self.message = None p.connection.disconnect() assert wait_for_message(p) is None # should reconnect new_data = self.data + 'new data' r.publish(self.channel, new_data) assert wait_for_message(p) is None assert self.message == self.make_message('pmessage', self.channel, new_data, pattern=self.pattern) def test_context_manager(self, r): with r.pubsub() as pubsub: pubsub.subscribe('foo') assert pubsub.connection is not None assert pubsub.connection is None assert pubsub.channels == {} assert pubsub.patterns == {} class TestPubSubRedisDown(object): def test_channel_subscribe(self, r): r = redis.Redis(host='localhost', port=6390) p = r.pubsub() with pytest.raises(ConnectionError): p.subscribe('foo') class TestPubSubSubcommands(object): @skip_if_server_version_lt('2.8.0') def test_pubsub_channels(self, r): p = r.pubsub() p.subscribe('foo', 'bar', 'baz', 'quux') for i in range(4): assert wait_for_message(p)['type'] == 'subscribe' channels = sorted(r.pubsub_channels()) assert channels == [b'bar', b'baz', b'foo', b'quux'] @skip_if_server_version_lt('2.8.0') def test_pubsub_numsub(self, r): p1 = r.pubsub() p1.subscribe('foo', 'bar', 'baz') for i in range(3): assert wait_for_message(p1)['type'] == 'subscribe' p2 = r.pubsub() p2.subscribe('bar', 'baz') for i in range(2): assert wait_for_message(p2)['type'] == 'subscribe' p3 = r.pubsub() p3.subscribe('baz') assert wait_for_message(p3)['type'] == 'subscribe' channels = [(b'foo', 1), (b'bar', 2), (b'baz', 3)] assert channels == r.pubsub_numsub('foo', 'bar', 'baz') @skip_if_server_version_lt('2.8.0') def test_pubsub_numpat(self, r): p = r.pubsub() p.psubscribe('*oo', '*ar', 'b*z') for i in range(3): assert wait_for_message(p)['type'] == 'psubscribe' assert r.pubsub_numpat() == 3 class TestPubSubPings(object): @skip_if_server_version_lt('3.0.0') def test_send_pubsub_ping(self, r): p = r.pubsub(ignore_subscribe_messages=True) p.subscribe('foo') p.ping() assert wait_for_message(p) == make_message(type='pong', channel=None, data='', pattern=None) @skip_if_server_version_lt('3.0.0') def test_send_pubsub_ping_message(self, r): p = r.pubsub(ignore_subscribe_messages=True) p.subscribe('foo') p.ping(message='hello world') assert wait_for_message(p) == make_message(type='pong', channel=None, data='hello world', pattern=None) class TestPubSubConnectionKilled(object): @skip_if_server_version_lt('3.0.0') def test_connection_error_raised_when_connection_dies(self, r): p = r.pubsub() p.subscribe('foo') assert wait_for_message(p) == make_message('subscribe', 'foo', 1) for client in r.client_list(): if client['cmd'] == 'subscribe': r.client_kill_filter(_id=client['id']) with pytest.raises(ConnectionError): wait_for_message(p) 
class TestPubSubTimeouts(object): def test_get_message_with_timeout_returns_none(self, r): p = r.pubsub() p.subscribe('foo') assert wait_for_message(p) == make_message('subscribe', 'foo', 1) assert p.get_message(timeout=0.01) is None redis-py-3.5.3/tests/test_scripting.py000066400000000000000000000101331366526254200200450ustar00rootroot00000000000000from __future__ import unicode_literals import pytest from redis import exceptions multiply_script = """ local value = redis.call('GET', KEYS[1]) value = tonumber(value) return value * ARGV[1]""" msgpack_hello_script = """ local message = cmsgpack.unpack(ARGV[1]) local name = message['name'] return "hello " .. name """ msgpack_hello_script_broken = """ local message = cmsgpack.unpack(ARGV[1]) local names = message['name'] return "hello " .. name """ class TestScripting(object): @pytest.fixture(autouse=True) def reset_scripts(self, r): r.script_flush() def test_eval(self, r): r.set('a', 2) # 2 * 3 == 6 assert r.eval(multiply_script, 1, 'a', 3) == 6 def test_evalsha(self, r): r.set('a', 2) sha = r.script_load(multiply_script) # 2 * 3 == 6 assert r.evalsha(sha, 1, 'a', 3) == 6 def test_evalsha_script_not_loaded(self, r): r.set('a', 2) sha = r.script_load(multiply_script) # remove the script from Redis's cache r.script_flush() with pytest.raises(exceptions.NoScriptError): r.evalsha(sha, 1, 'a', 3) def test_script_loading(self, r): # get the sha, then clear the cache sha = r.script_load(multiply_script) r.script_flush() assert r.script_exists(sha) == [False] r.script_load(multiply_script) assert r.script_exists(sha) == [True] def test_script_object(self, r): r.set('a', 2) multiply = r.register_script(multiply_script) precalculated_sha = multiply.sha assert precalculated_sha assert r.script_exists(multiply.sha) == [False] # Test second evalsha block (after NoScriptError) assert multiply(keys=['a'], args=[3]) == 6 # At this point, the script should be loaded assert r.script_exists(multiply.sha) == [True] # Test that the precalculated sha matches the one from redis assert multiply.sha == precalculated_sha # Test first evalsha block assert multiply(keys=['a'], args=[3]) == 6 def test_script_object_in_pipeline(self, r): multiply = r.register_script(multiply_script) precalculated_sha = multiply.sha assert precalculated_sha pipe = r.pipeline() pipe.set('a', 2) pipe.get('a') multiply(keys=['a'], args=[3], client=pipe) assert r.script_exists(multiply.sha) == [False] # [SET worked, GET 'a', result of multiply script] assert pipe.execute() == [True, b'2', 6] # The script should have been loaded by pipe.execute() assert r.script_exists(multiply.sha) == [True] # The precalculated sha should have been the correct one assert multiply.sha == precalculated_sha # purge the script from redis's cache and re-run the pipeline # the multiply script should be reloaded by pipe.execute() r.script_flush() pipe = r.pipeline() pipe.set('a', 2) pipe.get('a') multiply(keys=['a'], args=[3], client=pipe) assert r.script_exists(multiply.sha) == [False] # [SET worked, GET 'a', result of multiply script] assert pipe.execute() == [True, b'2', 6] assert r.script_exists(multiply.sha) == [True] def test_eval_msgpack_pipeline_error_in_lua(self, r): msgpack_hello = r.register_script(msgpack_hello_script) assert msgpack_hello.sha pipe = r.pipeline() # avoiding a dependency on msgpack, this is the output of # msgpack.dumps({"name": "Joe"}) msgpack_message_1 = b'\x81\xa4name\xa3Joe' msgpack_hello(args=[msgpack_message_1], client=pipe) assert r.script_exists(msgpack_hello.sha) == [False] assert 
pipe.execute()[0] == b'hello Joe' assert r.script_exists(msgpack_hello.sha) == [True] msgpack_hello_broken = r.register_script(msgpack_hello_script_broken) msgpack_hello_broken(args=[msgpack_message_1], client=pipe) with pytest.raises(exceptions.ResponseError) as excinfo: pipe.execute() assert excinfo.type == exceptions.ResponseError redis-py-3.5.3/tests/test_sentinel.py000066400000000000000000000137131366526254200176720ustar00rootroot00000000000000import pytest from redis import exceptions from redis.sentinel import (Sentinel, SentinelConnectionPool, MasterNotFoundError, SlaveNotFoundError) from redis._compat import next import redis.sentinel class SentinelTestClient(object): def __init__(self, cluster, id): self.cluster = cluster self.id = id def sentinel_masters(self): self.cluster.connection_error_if_down(self) self.cluster.timeout_if_down(self) return {self.cluster.service_name: self.cluster.master} def sentinel_slaves(self, master_name): self.cluster.connection_error_if_down(self) self.cluster.timeout_if_down(self) if master_name != self.cluster.service_name: return [] return self.cluster.slaves class SentinelTestCluster(object): def __init__(self, service_name='mymaster', ip='127.0.0.1', port=6379): self.clients = {} self.master = { 'ip': ip, 'port': port, 'is_master': True, 'is_sdown': False, 'is_odown': False, 'num-other-sentinels': 0, } self.service_name = service_name self.slaves = [] self.nodes_down = set() self.nodes_timeout = set() def connection_error_if_down(self, node): if node.id in self.nodes_down: raise exceptions.ConnectionError def timeout_if_down(self, node): if node.id in self.nodes_timeout: raise exceptions.TimeoutError def client(self, host, port, **kwargs): return SentinelTestClient(self, (host, port)) @pytest.fixture() def cluster(request): def teardown(): redis.sentinel.Redis = saved_Redis cluster = SentinelTestCluster() saved_Redis = redis.sentinel.Redis redis.sentinel.Redis = cluster.client request.addfinalizer(teardown) return cluster @pytest.fixture() def sentinel(request, cluster): return Sentinel([('foo', 26379), ('bar', 26379)]) def test_discover_master(sentinel): address = sentinel.discover_master('mymaster') assert address == ('127.0.0.1', 6379) def test_discover_master_error(sentinel): with pytest.raises(MasterNotFoundError): sentinel.discover_master('xxx') def test_discover_master_sentinel_down(cluster, sentinel): # Put first sentinel 'foo' down cluster.nodes_down.add(('foo', 26379)) address = sentinel.discover_master('mymaster') assert address == ('127.0.0.1', 6379) # 'bar' is now first sentinel assert sentinel.sentinels[0].id == ('bar', 26379) def test_discover_master_sentinel_timeout(cluster, sentinel): # Put first sentinel 'foo' down cluster.nodes_timeout.add(('foo', 26379)) address = sentinel.discover_master('mymaster') assert address == ('127.0.0.1', 6379) # 'bar' is now first sentinel assert sentinel.sentinels[0].id == ('bar', 26379) def test_master_min_other_sentinels(cluster): sentinel = Sentinel([('foo', 26379)], min_other_sentinels=1) # min_other_sentinels with pytest.raises(MasterNotFoundError): sentinel.discover_master('mymaster') cluster.master['num-other-sentinels'] = 2 address = sentinel.discover_master('mymaster') assert address == ('127.0.0.1', 6379) def test_master_odown(cluster, sentinel): cluster.master['is_odown'] = True with pytest.raises(MasterNotFoundError): sentinel.discover_master('mymaster') def test_master_sdown(cluster, sentinel): cluster.master['is_sdown'] = True with pytest.raises(MasterNotFoundError): 
def test_discover_slaves(cluster, sentinel):
    assert sentinel.discover_slaves('mymaster') == []

    cluster.slaves = [
        {'ip': 'slave0', 'port': 1234, 'is_odown': False, 'is_sdown': False},
        {'ip': 'slave1', 'port': 1234, 'is_odown': False, 'is_sdown': False},
    ]
    assert sentinel.discover_slaves('mymaster') == [
        ('slave0', 1234), ('slave1', 1234)]

    # slave0 -> ODOWN
    cluster.slaves[0]['is_odown'] = True
    assert sentinel.discover_slaves('mymaster') == [
        ('slave1', 1234)]

    # slave1 -> SDOWN
    cluster.slaves[1]['is_sdown'] = True
    assert sentinel.discover_slaves('mymaster') == []

    cluster.slaves[0]['is_odown'] = False
    cluster.slaves[1]['is_sdown'] = False

    # node0 -> DOWN
    cluster.nodes_down.add(('foo', 26379))
    assert sentinel.discover_slaves('mymaster') == [
        ('slave0', 1234), ('slave1', 1234)]
    cluster.nodes_down.clear()

    # node0 -> TIMEOUT
    cluster.nodes_timeout.add(('foo', 26379))
    assert sentinel.discover_slaves('mymaster') == [
        ('slave0', 1234), ('slave1', 1234)]


def test_master_for(cluster, sentinel):
    master = sentinel.master_for('mymaster', db=9)
    assert master.ping()
    assert master.connection_pool.master_address == ('127.0.0.1', 6379)

    # Use internal connection check
    master = sentinel.master_for('mymaster', db=9, check_connection=True)
    assert master.ping()


def test_slave_for(cluster, sentinel):
    cluster.slaves = [
        {'ip': '127.0.0.1', 'port': 6379,
         'is_odown': False, 'is_sdown': False},
    ]
    slave = sentinel.slave_for('mymaster', db=9)
    assert slave.ping()


def test_slave_for_slave_not_found_error(cluster, sentinel):
    cluster.master['is_odown'] = True
    slave = sentinel.slave_for('mymaster', db=9)
    with pytest.raises(SlaveNotFoundError):
        slave.ping()


def test_slave_round_robin(cluster, sentinel):
    cluster.slaves = [
        {'ip': 'slave0', 'port': 6379, 'is_odown': False, 'is_sdown': False},
        {'ip': 'slave1', 'port': 6379, 'is_odown': False, 'is_sdown': False},
    ]
    pool = SentinelConnectionPool('mymaster', sentinel)
    rotator = pool.rotate_slaves()

    assert next(rotator) in (('slave0', 6379), ('slave1', 6379))
    assert next(rotator) in (('slave0', 6379), ('slave1', 6379))

    # Fall back to the master once all slaves have been rotated through
    assert next(rotator) == ('127.0.0.1', 6379)
    with pytest.raises(SlaveNotFoundError):
        next(rotator)


redis-py-3.5.3/tox.ini
[tox]
minversion = 2.4
envlist = {py27,py35,py36,py37,py38,pypy,pypy3}-{plain,hiredis}, flake8

[testenv]
deps =
    coverage
    mock
    pytest >= 2.7.0
extras =
    hiredis: hiredis
commands = {envpython} -b -m coverage run -m pytest -W always {posargs}

[testenv:flake8]
basepython = python3.6
deps = flake8
commands = flake8
skipsdist = true
skip_install = true
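
# Example invocations (a sketch; assumes tox >= 2.4 is installed, per the
# minversion above):
#   tox                                # run every environment in envlist
#   tox -e py36-hiredis                # one interpreter/parser combination
#   tox -e py36-plain -- tests/test_scripting.py
#                                      # extra args reach pytest via {posargs}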
redis-py-3.5.3/vagrant/Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # Ubuntu 12.04 (Precise) 64-bit image
  config.vm.box = "hashicorp/precise64"

  # map the root of redis-py to /home/vagrant/redis-py
  config.vm.synced_folder "../", "/home/vagrant/redis-py"

  # install the redis server and sentinels
  config.vm.provision :shell, :path => "../build_tools/bootstrap.sh"
  config.vm.provision :shell, :path => "../build_tools/build_redis.sh"
  config.vm.provision :shell, :path => "../build_tools/install_redis.sh"
  config.vm.provision :shell, :path => "../build_tools/install_sentinel.sh"
  config.vm.provision :file,
    :source => "../build_tools/.bash_profile",
    :destination => "/home/vagrant/.bash_profile"

  # set up forwarded ports
  config.vm.network "forwarded_port", guest: 6379, host: 6379
  config.vm.network "forwarded_port", guest: 6380, host: 6380
  config.vm.network "forwarded_port", guest: 26379, host: 26379
  config.vm.network "forwarded_port", guest: 26380, host: 26380
  config.vm.network "forwarded_port", guest: 26381, host: 26381
end
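
# ---------------------------------------------------------------------------
# A hedged usage sketch, not a file from the repository. With the VM above
# running (`vagrant up`), the forwarded ports expose the guest's Redis
# instances (6379/6380) and Sentinels (26379-26381) on the host. The
# 'mymaster' service name is an assumption about the sentinel configuration
# installed by build_tools/install_sentinel.sh.
# ---------------------------------------------------------------------------
import redis
from redis.sentinel import Sentinel

r = redis.Redis(host='localhost', port=6379)
print(r.ping())                                   # True if the VM is up

sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.5)
print(sentinel.discover_master('mymaster'))       # (ip, port) of the master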