redis-2.7.2/0000755000076500000240000000000012051512742012705 5ustar andystaff00000000000000redis-2.7.2/CHANGES0000644000076500000240000003264112051512231013677 0ustar andystaff00000000000000
* 2.7.2
    * Parse errors are now *always* raised on multi/exec pipelines, regardless of the `raise_on_error` flag. See https://groups.google.com/forum/?hl=en&fromgroups=#!topic/redis-db/VUiEFT8U8U0 for more info.
* 2.7.1
    * Packaged tests with source code
* 2.7.0
    * Added BITOP and BITCOUNT commands. Thanks Mark Tozzi.
    * Added the TIME command. Thanks Jason Knight.
    * Added support for LUA scripting. Thanks to Angus Peart, Drew Smathers, Issac Kelly, Louis-Philippe Perron, Sean Bleier, Jeffrey Kaditz, and Dvir Volk for various patches and contributions to this feature.
    * Changed the default error handling in pipelines. By default, the first error in a pipeline will now be raised. A new parameter to the pipeline's execute, `raise_on_error`, can be set to False to keep the old behavior of embedding the exception instances in the result.
    * Fixed a bug with pipelines so that parse errors no longer corrupt the socket.
    * Added the optional `number` argument to SRANDMEMBER for use with Redis 2.6+ servers.
    * Added PEXPIRE/PEXPIREAT/PTTL commands. Thanks Luper Rouch.
    * Added INCRBYFLOAT/HINCRBYFLOAT commands. Thanks Nikita Uvarov.
    * High precision floating point values won't lose their precision when being sent to the Redis server. Thanks Jason Oster and Oleg Pudeyev.
    * Added CLIENT LIST/CLIENT KILL commands
* 2.6.2
    * `from_url` is now available as a classmethod on client classes. Thanks Jon Parise for the patch.
    * Fixed several encoding errors resulting from the Python 3.x support.
* 2.6.1
    * Python 3.x support! Big thanks to Alex Grönholm.
    * Fixed a bug in the PythonParser's read_response that could hide an error from the client (#251).
* 2.6.0
    * Changed (p)subscribe and (p)unsubscribe to no longer return messages indicating the channel was subscribed/unsubscribed to. These messages are available in the listen() loop instead. This is to prevent the following scenario:
        * Client A is subscribed to "foo"
        * Client B publishes a message to "foo"
        * Client A subscribes to channel "bar" at the same time.
      Prior to this change, the subscribe() call would return the published messages on "foo" rather than the subscription confirmation to "bar".
    * Added support for GETRANGE, thanks Jean-Philippe Caruana
    * A new setting "decode_responses" specifies whether return values from Redis commands get decoded automatically using the client's charset value. Thanks to Frankie Dintino for the patch.
* 2.4.13
    * redis.from_url() can take a URL representing a Redis connection string and return a client object. Thanks Kenneth Reitz for the patch.
* 2.4.12
    * ConnectionPool is now fork-safe. Thanks Josiah Carson for the patch.
* 2.4.11
    * AuthenticationError will now be correctly raised if an invalid password is supplied.
    * If Hiredis is unavailable, the HiredisParser will raise a RedisError if selected manually.
    * Made the INFO command more tolerant of changes to Redis's output formatting. Fix for #217.
* 2.4.10
    * Buffer reads from socket in the PythonParser. Fix for a Windows-specific bug (#205).
    * Added the OBJECT and DEBUG OBJECT commands.
    * Added __del__ methods for classes that hold on to resources that need to be cleaned up. This should prevent resource leakage when these objects leave scope due to misuse or unhandled exceptions. Thanks David Wolever for the suggestion.
    * Added the ECHO command for completeness.
    * Fixed a bug where attempting to subscribe to a PubSub channel of a Redis server that's down would blow out the stack. Fixes #179 and #195. Thanks Ovidiu Predescu for the test case.
    * StrictRedis's TTL command now returns a -1 when querying a key with no expiration. The Redis class continues to return None.
    * ZADD and SADD now return integer values indicating the number of items added. Thanks Homer Strong.
    * Renamed the base client class to StrictRedis, changing ZADD and LREM to follow their official argument order. The Redis class is now a subclass of StrictRedis, implementing the legacy redis-py implementations of ZADD and LREM. Docs have been updated to suggest the use of StrictRedis.
    * SETEX in StrictRedis is now compliant with the official Redis SETEX command. The legacy name, value, time implementation moved to "Redis" for backwards compatibility.
* 2.4.9
    * Removed socket retry logic in Connection. It is the responsibility of the caller to determine if the command is safe and can be retried. Thanks David Wolever.
    * Added some extra guards around various types of exceptions being raised when sending or parsing data. Thanks David Wolever and Denis Bilenko.
* 2.4.8
    * Imported with_statement from __future__ for Python 2.5 compatibility.
* 2.4.7
    * Fixed a bug where some connections were not getting released back to the connection pool after pipeline execution.
    * Pipelines can now be used as context managers. This is the preferred way to use them and ensures that connections get cleaned up properly. Thanks David Wolever.
    * Added a convenience method called transaction() on the base Redis class. This method eliminates much of the boilerplate used when using pipelines to watch Redis keys. See the documentation for details on usage.
* 2.4.6
    * Variadic arguments for SADD, SREM, ZREM, HDEL, LPUSH, and RPUSH. Thanks Raphaël Vinot.
    * (CRITICAL) Fixed an error in the Hiredis parser that occasionally caused the socket connection to become corrupted and unusable. This became noticeable once connection pools started to be used.
    * ZRANGE, ZREVRANGE, ZRANGEBYSCORE, and ZREVRANGEBYSCORE now take an additional optional argument, score_cast_func, which is a callable used to cast the score value in the return type. The default is float.
    * Removed the PUBLISH method from the PubSub class. Connections that are [P]SUBSCRIBEd cannot issue PUBLISH commands, so it doesn't make sense to have it here.
    * Pipelines now contain WATCH and UNWATCH. Calling WATCH or UNWATCH from the base client class will result in a deprecation warning. After WATCHing one or more keys, the pipeline will be placed in immediate execution mode until UNWATCH or MULTI is called. Refer to the new pipeline docs in the README for more information. Thanks to David Wolever and Randall Leeds for greatly helping with this.
* 2.4.5
    * The PythonParser now works better when reading zero length strings.
* 2.4.4
    * Fixed a typo introduced in 2.4.3
* 2.4.3
    * Fixed a bug in the UnixDomainSocketConnection caused when trying to form an error message after a socket error.
* 2.4.2
    * Fixed a bug in pipeline that caused an exception while trying to reconnect after a connection timeout.
* 2.4.1
    * Fixed a bug in the PythonParser if disconnect is called before connect.
* 2.4.0
    * WARNING: 2.4 contains several backwards incompatible changes.
    * Completely refactored Connection objects. Moved much of the Redis protocol packing for requests here, and eliminated the nasty dependencies it had on the client to do AUTH and SELECT commands on connect.
    * Connection objects now have a parser attribute. Parsers are responsible for reading data Redis sends. Two parsers ship with redis-py: a PythonParser and the HiRedis parser. redis-py will automatically use the HiRedis parser if you have the Python hiredis module installed, otherwise it will fall back to the PythonParser. You can force one or the other, or even an external one, by passing the `parser_class` argument to ConnectionPool.
    * Added a UnixDomainSocketConnection for users wanting to talk to the Redis instance running on a local machine only. You can use this connection by passing it to the `connection_class` argument of the ConnectionPool.
    * Connections no longer derive from threading.local. See threading.local note below.
    * ConnectionPool has been completely refactored. The ConnectionPool now maintains a list of connections. The redis-py client only hangs on to a ConnectionPool instance, calling get_connection() anytime it needs to send a command. When get_connection() is called, the command name and any keys involved in the command are passed as arguments. Subclasses of ConnectionPool could use this information to identify the shard the keys belong to and return a connection to it. ConnectionPool also implements disconnect() to force all connections in the pool to disconnect from the Redis server.
    * redis-py no longer supports the SELECT command. You can still connect to a specific database by specifying it when instantiating a client instance or by creating a connection pool. If you need to talk to multiple databases within your application, you should use a separate client instance for each database you want to talk to.
    * Completely refactored Publish/Subscribe support. The subscribe and listen commands are no longer available on the redis-py Client class. Instead, the `pubsub` method returns an instance of the PubSub class which contains all publish/subscribe support. Note, you can still PUBLISH from the redis-py client class if you desire.
    * Removed support for all previously deprecated commands or options.
    * redis-py no longer uses threading.local in any way. Since the Client class no longer holds on to a connection, it's no longer needed. You can now pass client instances between threads, and commands run on those threads will retrieve an available connection from the pool, use it and release it. It should now be trivial to use redis-py with eventlet or greenlet.
    * ZADD now accepts pairs of value=score keyword arguments. This should help resolve the long-standing #72. The older value and score arguments have been deprecated in favor of the keyword argument style.
    * Client instances now get their own copy of RESPONSE_CALLBACKS. The new set_response_callback method adds a user-defined callback to the instance.
    * Support Jython, fixing #97. Thanks to Adam Vandenberg for the patch.
    * Using __getitem__ now properly raises a KeyError when the key is not found. Thanks Ionuț Arțăriși for the patch.
    * Newer Redis versions return a LOADING message for some commands while the database is loading from disk during server start. This could cause problems with SELECT. We now force a socket disconnection prior to raising a ResponseError so subsequent connections have to reconnect and re-select the appropriate database. Thanks to Benjamin Anderson for finding and fixing this.
* 2.2.4
    * WARNING: Potential backwards incompatible change - Changed order of parameters of ZREVRANGEBYSCORE to match those of the actual Redis command.
      This is only backwards-incompatible if you were passing max and min via keyword args. If passing by normal args, nothing in user code should have to change. Thanks Stéphane Angel for the fix.
    * Fixed INFO to parse the Redis data correctly for both 2.2.x and 2.3+. Thanks Stéphane Angel for the fix.
    * Lock objects now store their timeout value as a float. This allows floats to be used as timeout values. No changes to existing code required.
    * WATCH now supports multiple keys. Thanks Rich Schumacher.
    * Broke out some code that was Python 2.4 incompatible. redis-py should now be usable on 2.4, but this hasn't actually been tested. Thanks Dan Colish for the patch.
    * Optimized some code using izip and islice. Should have a pretty good speed up on larger data sets. Thanks Dan Colish.
    * Better error handling when submitting an empty mapping to HMSET. Thanks Dan Colish.
    * Subscription status is now reset after every (re)connection.
* 2.2.3
    * Added support for Hiredis. To use, simply "pip install hiredis" or "easy_install hiredis". Thanks to Pieter Noordhuis for the hiredis-py bindings and the patch to redis-py.
    * The connection class is chosen based on whether hiredis is installed or not. To force the use of the PythonConnection, simply create your own ConnectionPool instance with the connection_class argument assigned to the PythonConnection class.
    * Added missing command ZREVRANGEBYSCORE. Thanks Jay Baird for the patch.
    * The INFO command should be parsed correctly on 2.2.x server versions and is backwards compatible with older versions. Thanks Brett Hoerner.
* 2.2.2
    * Fixed a bug in ZREVRANK where retrieving the rank of a value not in the zset would raise an error.
    * Fixed a bug in Connection.send where the errno import was getting overwritten by a local variable.
    * Fixed a bug in SLAVEOF when promoting an existing slave to a master.
    * Reverted change of download URL back to redis-VERSION.tar.gz. 2.2.1's change of this actually broke Pypi for Pip installs. Sorry!
* 2.2.1
    * Changed archive name to redis-py-VERSION.tar.gz to not conflict with the Redis server archive.
* 2.2.0
    * Implemented SLAVEOF
    * Implemented CONFIG as config_get and config_set
    * Implemented GETBIT/SETBIT
    * Implemented BRPOPLPUSH
    * Implemented STRLEN
    * Implemented PERSIST
    * Implemented SETRANGE
redis-2.7.2/INSTALL0000644000076500000240000000013412010053446013730 0ustar andystaff00000000000000 Please use python setup.py install and report errors to Andy McCurdy (sedrik@gmail.com)
redis-2.7.2/LICENSE0000644000076500000240000000206212034566033013715 0ustar andystaff00000000000000Copyright (c) 2012 Andy McCurdy Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. redis-2.7.2/MANIFEST.in0000644000076500000240000000013412034567623014452 0ustar andystaff00000000000000include CHANGES include INSTALL include LICENSE include README.md recursive-include tests * redis-2.7.2/PKG-INFO0000644000076500000240000004375712051512742014022 0ustar andystaff00000000000000Metadata-Version: 1.1 Name: redis Version: 2.7.2 Summary: Python client for Redis key-value store Home-page: http://github.com/andymccurdy/redis-py Author: Andy McCurdy Author-email: sedrik@gmail.com License: MIT Description: # redis-py The Python interface to the Redis key-value store. [![Build Status](https://secure.travis-ci.org/andymccurdy/redis-py.png?branch=master)](http://travis-ci.org/andymccurdy/redis-py) ## Installation $ sudo pip install redis or alternatively (you really should be using pip though): $ sudo easy_install redis From source: $ sudo python setup.py install ## Getting Started >>> import redis >>> r = redis.StrictRedis(host='localhost', port=6379, db=0) >>> r.set('foo', 'bar') True >>> r.get('foo') 'bar' ## API Reference The official Redis documentation does a great job of explaining each command in detail (http://redis.io/commands). redis-py exposes two client classes that implement these commands. The StrictRedis class attempts to adhere to the official official command syntax. There are a few exceptions: * SELECT: Not implemented. See the explanation in the Thread Safety section below. * DEL: 'del' is a reserved keyword in the Python syntax. Therefore redis-py uses 'delete' instead. * CONFIG GET|SET: These are implemented separately as config_get or config_set. * MULTI/EXEC: These are implemented as part of the Pipeline class. Calling the pipeline method and specifying use_transaction=True will cause the pipeline to be wrapped with the MULTI and EXEC statements when it is executed. See more about Pipelines below. * SUBSCRIBE/LISTEN: Similar to pipelines, PubSub is implemented as a separate class as it places the underlying connection in a state where it can't execute non-pubsub commands. Calling the pubsub method from the Redis client will return a PubSub instance where you can subscribe to channels and listen for messages. You can call PUBLISH from both classes. In addition to the changes above, the Redis class, a subclass of StrictRedis, overrides several other commands to provide backwards compatibility with older versions of redis-py: * LREM: Order of 'num' and 'value' arguments reversed such that 'num' can provide a default value of zero. * ZADD: Redis specifies the 'score' argument before 'value'. These were swapped accidentally when being implemented and not discovered until after people were already using it. The Redis class expects *args in the form of: name1, score1, name2, score2, ... * SETEX: Order of 'time' and 'value' arguments reversed. ## More Detail ### Connection Pools Behind the scenes, redis-py uses a connection pool to manage connections to a Redis server. By default, each Redis instance you create will in turn create its own connection pool. You can override this behavior and use an existing connection pool by passing an already created connection pool instance to the connection_pool argument of the Redis class. 
You may choose to do this in order to implement client side sharding or have finer grain control of how connections are managed.

    >>> pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
    >>> r = redis.Redis(connection_pool=pool)

### Connections

ConnectionPools manage a set of Connection instances. redis-py ships with two types of Connections. The default, Connection, is a normal TCP socket based connection. The UnixDomainSocketConnection allows for clients running on the same device as the server to connect via a unix domain socket. To use a UnixDomainSocketConnection connection, simply pass the unix_socket_path argument, which is a string pointing to the unix domain socket file. Additionally, make sure the unixsocket parameter is defined in your redis.conf file. It's commented out by default.

    >>> r = redis.Redis(unix_socket_path='/tmp/redis.sock')

You can create your own Connection subclasses as well. This may be useful if you want to control the socket behavior within an async framework. To instantiate a client class using your own connection, you need to create a connection pool, passing your class to the connection_class argument. Other keyword parameters you pass to the pool will be passed to the class specified during initialization.

    >>> pool = redis.ConnectionPool(connection_class=YourConnectionClass, your_arg='...', ...)

### Parsers

Parser classes provide a way to control how responses from the Redis server are parsed. redis-py ships with two parser classes, the PythonParser and the HiredisParser. By default, redis-py will attempt to use the HiredisParser if you have the hiredis module installed and will fall back to the PythonParser otherwise.

Hiredis is a C library maintained by the core Redis team. Pieter Noordhuis was kind enough to create Python bindings. Using Hiredis can provide up to a 10x speed improvement in parsing responses from the Redis server. The performance increase is most noticeable when retrieving many pieces of data, such as from LRANGE or SMEMBERS operations.

Hiredis is available on Pypi, and can be installed via pip or easy_install just like redis-py.

    $ pip install hiredis

or

    $ easy_install hiredis

### Response Callbacks

The client class uses a set of callbacks to cast Redis responses to the appropriate Python type. There are a number of these callbacks defined on the Redis client class in a dictionary called RESPONSE_CALLBACKS.

Custom callbacks can be added on a per-instance basis using the set_response_callback method. This method accepts two arguments: a command name and the callback. Callbacks added in this manner are only valid on the instance the callback is added to. If you want to define or override a callback globally, you should make a subclass of the Redis client and add your callback to its RESPONSE_CALLBACKS class dictionary.

Response callbacks take at least one parameter: the response from the Redis server. Keyword arguments may also be accepted in order to further control how to interpret the response. These keyword arguments are specified during the command's call to execute_command. The ZRANGE implementation demonstrates the use of response callback keyword arguments with its "withscores" argument.

## Thread Safety

Redis client instances can safely be shared between threads. Internally, connection instances are only retrieved from the connection pool during command execution, and returned to the pool directly after. Command execution never modifies state on the client instance.

However, there is one caveat: the Redis SELECT command.
The SELECT command allows you to switch the database currently in use by the connection. That database remains selected until another is selected or until the connection is closed. This creates an issue in that connections could be returned to the pool that are connected to a different database. As a result, redis-py does not implement the SELECT command on client instances. If you use multiple Redis databases within the same application, you should create a separate client instance (and possibly a separate connection pool) for each database. It is not safe to pass PubSub or Pipeline objects between threads. ## Pipelines Pipelines are a subclass of the base Redis class that provide support for buffering multiple commands to the server in a single request. They can be used to dramatically increase the performance of groups of commands by reducing the number of back-and-forth TCP packets between the client and server. Pipelines are quite simple to use: >>> r = redis.Redis(...) >>> r.set('bing', 'baz') >>> # Use the pipeline() method to create a pipeline instance >>> pipe = r.pipeline() >>> # The following SET commands are buffered >>> pipe.set('foo', 'bar') >>> pipe.get('bing') >>> # the EXECUTE call sends all buffered commands to the server, returning >>> # a list of responses, one for each command. >>> pipe.execute() [True, 'baz'] For ease of use, all commands being buffered into the pipeline return the pipeline object itself. Therefore calls can be chained like: >>> pipe.set('foo', 'bar').sadd('faz', 'baz').incr('auto_number').execute() [True, True, 6] In addition, pipelines can also ensure the buffered commands are executed atomically as a group. This happens by default. If you want to disable the atomic nature of a pipeline but still want to buffer commands, you can turn off transactions. >>> pipe = r.pipeline(transaction=False) A common issue occurs when requiring atomic transactions but needing to retrieve values in Redis prior for use within the transaction. For instance, let's assume that the INCR command didn't exist and we need to build an atomic version of INCR in Python. The completely naive implementation could GET the value, increment it in Python, and SET the new value back. However, this is not atomic because multiple clients could be doing this at the same time, each getting the same value from GET. Enter the WATCH command. WATCH provides the ability to monitor one or more keys prior to starting a transaction. If any of those keys change prior the execution of that transaction, the entire transaction will be canceled and a WatchError will be raised. To implement our own client-side INCR command, we could do something like this: >>> with r.pipeline() as pipe: ... while 1: ... try: ... # put a WATCH on the key that holds our sequence value ... pipe.watch('OUR-SEQUENCE-KEY') ... # after WATCHing, the pipeline is put into immediate execution ... # mode until we tell it to start buffering commands again. ... # this allows us to get the current value of our sequence ... current_value = pipe.get('OUR-SEQUENCE-KEY') ... next_value = int(current_value) + 1 ... # now we can put the pipeline back into buffered mode with MULTI ... pipe.multi() ... pipe.set('OUR-SEQUENCE-KEY', next_value) ... # and finally, execute the pipeline (the set command) ... pipe.execute() ... # if a WatchError wasn't raised during execution, everything ... # we just did happened atomically. ... break ... except WatchError: ... # another client must have changed 'OUR-SEQUENCE-KEY' between ... 
# the time we started WATCHing it and the pipeline's execution. ... # our best bet is to just retry. ... continue Note that, because the Pipeline must bind to a single connection for the duration of a WATCH, care must be taken to ensure that the connection is returned to the connection pool by calling the reset() method. If the Pipeline is used as a context manager (as in the example above) reset() will be called automatically. Of course you can do this the manual way by explicity calling reset(): >>> pipe = r.pipeline() >>> while 1: ... try: ... pipe.watch('OUR-SEQUENCE-KEY') ... ... ... pipe.execute() ... break ... except WatchError: ... continue ... finally: ... pipe.reset() A convenience method named "transaction" exists for handling all the boilerplate of handling and retrying watch errors. It takes a callable that should expect a single parameter, a pipeline object, and any number of keys to be WATCHed. Our client-side INCR command above can be written like this, which is much easier to read: >>> def client_side_incr(pipe): ... current_value = pipe.get('OUR-SEQUENCE-KEY') ... next_value = int(current_value) + 1 ... pipe.multi() ... pipe.set('OUR-SEQUENCE-KEY', next_value) >>> >>> r.transaction(client_side_incr, 'OUR-SEQUENCE-KEY') [True] ## LUA Scripting redis-py supports the EVAL, EVALSHA, and SCRIPT commands. However, there are a number of edge cases that make these commands tedious to use in real world scenarios. Therefore, redis-py exposes a Script object that makes scripting much easier to use. To create a Script instance, use the `register_script` function on a client instance passing the LUA code as the first argument. `register_script` returns a Script instance that you can use throughout your code. The following trivial LUA script accepts two parameters: the name of a key and a multiplier value. The script fetches the value stored in the key, multiplies it with the multiplier value and returns the result. >>> r = redis.StrictRedis() >>> lua = """ ... local value = redis.call('GET', KEYS[1]) ... value = tonumber(value) ... return value * ARGV[1]""" >>> multiply = r.register_script(lua) `multiply` is now a Script instance that is invoked by calling it like a function. Script instances accept the following optional arguments: * keys: A list of key names that the script will access. This becomes the KEYS list in LUA. * args: A list of argument values. This becomes the ARGV list in LUA. * client: A redis-py Client or Pipeline instance that will invoke the script. If client isn't specified, the client that intiially created the Script instance (the one that `register_script` was invoked from) will be used. Continuing the example from above: >>> r.set('foo', 2) >>> multiply(keys=['foo'], args=[5]) 10 The value of key 'foo' is set to 2. When multiply is invoked, the 'foo' key is passed to the script along with the multiplier value of 5. LUA executes the script and returns the result, 10. Script instances can be executed using a different client instance, even one that points to a completely different Redis server. >>> r2 = redis.StrictRedis('redis2.example.com') >>> r2.set('foo', 3) >>> multiply(keys=['foo'], args=[5], client=r2) 15 The Script object ensures that the LUA script is loaded into Redis's script cache. In the event of a NOSCRIPT error, it will load the script and retry executing it. Script objects can also be used in pipelines. The pipeline instance should be passed as the client argument when calling the script. 
Care is taken to ensure that the script is registered in Redis's script cache just prior to pipeline execution. >>> pipe = r.pipeline() >>> pipe.set('foo', 5) >>> multiply(keys=['foo'], args=[5], client=pipe) >>> pipe.execute() [True, 25] Author ------ redis-py is developed and maintained by Andy McCurdy (sedrik@gmail.com). It can be found here: http://github.com/andymccurdy/redis-py Special thanks to: * Ludovico Magnocavallo, author of the original Python Redis client, from which some of the socket code is still used. * Alexander Solovyov for ideas on the generic response callback system. * Paul Hubbard for initial packaging support. Keywords: Redis,key-value store Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: Environment :: Console Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: MIT License Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2.5 Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.2 Classifier: Programming Language :: Python :: 3.3 redis-2.7.2/README.md0000644000076500000240000003465412034444513014201 0ustar andystaff00000000000000# redis-py The Python interface to the Redis key-value store. [![Build Status](https://secure.travis-ci.org/andymccurdy/redis-py.png?branch=master)](http://travis-ci.org/andymccurdy/redis-py) ## Installation $ sudo pip install redis or alternatively (you really should be using pip though): $ sudo easy_install redis From source: $ sudo python setup.py install ## Getting Started >>> import redis >>> r = redis.StrictRedis(host='localhost', port=6379, db=0) >>> r.set('foo', 'bar') True >>> r.get('foo') 'bar' ## API Reference The official Redis documentation does a great job of explaining each command in detail (http://redis.io/commands). redis-py exposes two client classes that implement these commands. The StrictRedis class attempts to adhere to the official official command syntax. There are a few exceptions: * SELECT: Not implemented. See the explanation in the Thread Safety section below. * DEL: 'del' is a reserved keyword in the Python syntax. Therefore redis-py uses 'delete' instead. * CONFIG GET|SET: These are implemented separately as config_get or config_set. * MULTI/EXEC: These are implemented as part of the Pipeline class. Calling the pipeline method and specifying use_transaction=True will cause the pipeline to be wrapped with the MULTI and EXEC statements when it is executed. See more about Pipelines below. * SUBSCRIBE/LISTEN: Similar to pipelines, PubSub is implemented as a separate class as it places the underlying connection in a state where it can't execute non-pubsub commands. Calling the pubsub method from the Redis client will return a PubSub instance where you can subscribe to channels and listen for messages. You can call PUBLISH from both classes. In addition to the changes above, the Redis class, a subclass of StrictRedis, overrides several other commands to provide backwards compatibility with older versions of redis-py: * LREM: Order of 'num' and 'value' arguments reversed such that 'num' can provide a default value of zero. * ZADD: Redis specifies the 'score' argument before 'value'. These were swapped accidentally when being implemented and not discovered until after people were already using it. 
The Redis class expects *args in the form of: name1, score1, name2, score2, ...
* SETEX: Order of 'time' and 'value' arguments reversed.

## More Detail

### Connection Pools

Behind the scenes, redis-py uses a connection pool to manage connections to a Redis server. By default, each Redis instance you create will in turn create its own connection pool. You can override this behavior and use an existing connection pool by passing an already created connection pool instance to the connection_pool argument of the Redis class. You may choose to do this in order to implement client side sharding or have finer grain control of how connections are managed.

    >>> pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
    >>> r = redis.Redis(connection_pool=pool)

### Connections

ConnectionPools manage a set of Connection instances. redis-py ships with two types of Connections. The default, Connection, is a normal TCP socket based connection. The UnixDomainSocketConnection allows for clients running on the same device as the server to connect via a unix domain socket. To use a UnixDomainSocketConnection connection, simply pass the unix_socket_path argument, which is a string pointing to the unix domain socket file. Additionally, make sure the unixsocket parameter is defined in your redis.conf file. It's commented out by default.

    >>> r = redis.Redis(unix_socket_path='/tmp/redis.sock')

You can create your own Connection subclasses as well. This may be useful if you want to control the socket behavior within an async framework. To instantiate a client class using your own connection, you need to create a connection pool, passing your class to the connection_class argument. Other keyword parameters you pass to the pool will be passed to the class specified during initialization.

    >>> pool = redis.ConnectionPool(connection_class=YourConnectionClass, your_arg='...', ...)

### Parsers

Parser classes provide a way to control how responses from the Redis server are parsed. redis-py ships with two parser classes, the PythonParser and the HiredisParser. By default, redis-py will attempt to use the HiredisParser if you have the hiredis module installed and will fall back to the PythonParser otherwise.

Hiredis is a C library maintained by the core Redis team. Pieter Noordhuis was kind enough to create Python bindings. Using Hiredis can provide up to a 10x speed improvement in parsing responses from the Redis server. The performance increase is most noticeable when retrieving many pieces of data, such as from LRANGE or SMEMBERS operations.

Hiredis is available on Pypi, and can be installed via pip or easy_install just like redis-py.

    $ pip install hiredis

or

    $ easy_install hiredis

### Response Callbacks

The client class uses a set of callbacks to cast Redis responses to the appropriate Python type. There are a number of these callbacks defined on the Redis client class in a dictionary called RESPONSE_CALLBACKS.

Custom callbacks can be added on a per-instance basis using the set_response_callback method. This method accepts two arguments: a command name and the callback. Callbacks added in this manner are only valid on the instance the callback is added to. If you want to define or override a callback globally, you should make a subclass of the Redis client and add your callback to its RESPONSE_CALLBACKS class dictionary.

Response callbacks take at least one parameter: the response from the Redis server. Keyword arguments may also be accepted in order to further control how to interpret the response.
These keyword arguments are specified during the command's call to execute_command. The ZRANGE implementation demonstrates the use of response callback keyword arguments with its "withscores" argument. ## Thread Safety Redis client instances can safely be shared between threads. Internally, connection instances are only retrieved from the connection pool during command execution, and returned to the pool directly after. Command execution never modifies state on the client instance. However, there is one caveat: the Redis SELECT command. The SELECT command allows you to switch the database currently in use by the connection. That database remains selected until another is selected or until the connection is closed. This creates an issue in that connections could be returned to the pool that are connected to a different database. As a result, redis-py does not implement the SELECT command on client instances. If you use multiple Redis databases within the same application, you should create a separate client instance (and possibly a separate connection pool) for each database. It is not safe to pass PubSub or Pipeline objects between threads. ## Pipelines Pipelines are a subclass of the base Redis class that provide support for buffering multiple commands to the server in a single request. They can be used to dramatically increase the performance of groups of commands by reducing the number of back-and-forth TCP packets between the client and server. Pipelines are quite simple to use: >>> r = redis.Redis(...) >>> r.set('bing', 'baz') >>> # Use the pipeline() method to create a pipeline instance >>> pipe = r.pipeline() >>> # The following SET commands are buffered >>> pipe.set('foo', 'bar') >>> pipe.get('bing') >>> # the EXECUTE call sends all buffered commands to the server, returning >>> # a list of responses, one for each command. >>> pipe.execute() [True, 'baz'] For ease of use, all commands being buffered into the pipeline return the pipeline object itself. Therefore calls can be chained like: >>> pipe.set('foo', 'bar').sadd('faz', 'baz').incr('auto_number').execute() [True, True, 6] In addition, pipelines can also ensure the buffered commands are executed atomically as a group. This happens by default. If you want to disable the atomic nature of a pipeline but still want to buffer commands, you can turn off transactions. >>> pipe = r.pipeline(transaction=False) A common issue occurs when requiring atomic transactions but needing to retrieve values in Redis prior for use within the transaction. For instance, let's assume that the INCR command didn't exist and we need to build an atomic version of INCR in Python. The completely naive implementation could GET the value, increment it in Python, and SET the new value back. However, this is not atomic because multiple clients could be doing this at the same time, each getting the same value from GET. Enter the WATCH command. WATCH provides the ability to monitor one or more keys prior to starting a transaction. If any of those keys change prior the execution of that transaction, the entire transaction will be canceled and a WatchError will be raised. To implement our own client-side INCR command, we could do something like this: >>> with r.pipeline() as pipe: ... while 1: ... try: ... # put a WATCH on the key that holds our sequence value ... pipe.watch('OUR-SEQUENCE-KEY') ... # after WATCHing, the pipeline is put into immediate execution ... # mode until we tell it to start buffering commands again. ... 
# this allows us to get the current value of our sequence ... current_value = pipe.get('OUR-SEQUENCE-KEY') ... next_value = int(current_value) + 1 ... # now we can put the pipeline back into buffered mode with MULTI ... pipe.multi() ... pipe.set('OUR-SEQUENCE-KEY', next_value) ... # and finally, execute the pipeline (the set command) ... pipe.execute() ... # if a WatchError wasn't raised during execution, everything ... # we just did happened atomically. ... break ... except WatchError: ... # another client must have changed 'OUR-SEQUENCE-KEY' between ... # the time we started WATCHing it and the pipeline's execution. ... # our best bet is to just retry. ... continue Note that, because the Pipeline must bind to a single connection for the duration of a WATCH, care must be taken to ensure that the connection is returned to the connection pool by calling the reset() method. If the Pipeline is used as a context manager (as in the example above) reset() will be called automatically. Of course you can do this the manual way by explicity calling reset(): >>> pipe = r.pipeline() >>> while 1: ... try: ... pipe.watch('OUR-SEQUENCE-KEY') ... ... ... pipe.execute() ... break ... except WatchError: ... continue ... finally: ... pipe.reset() A convenience method named "transaction" exists for handling all the boilerplate of handling and retrying watch errors. It takes a callable that should expect a single parameter, a pipeline object, and any number of keys to be WATCHed. Our client-side INCR command above can be written like this, which is much easier to read: >>> def client_side_incr(pipe): ... current_value = pipe.get('OUR-SEQUENCE-KEY') ... next_value = int(current_value) + 1 ... pipe.multi() ... pipe.set('OUR-SEQUENCE-KEY', next_value) >>> >>> r.transaction(client_side_incr, 'OUR-SEQUENCE-KEY') [True] ## LUA Scripting redis-py supports the EVAL, EVALSHA, and SCRIPT commands. However, there are a number of edge cases that make these commands tedious to use in real world scenarios. Therefore, redis-py exposes a Script object that makes scripting much easier to use. To create a Script instance, use the `register_script` function on a client instance passing the LUA code as the first argument. `register_script` returns a Script instance that you can use throughout your code. The following trivial LUA script accepts two parameters: the name of a key and a multiplier value. The script fetches the value stored in the key, multiplies it with the multiplier value and returns the result. >>> r = redis.StrictRedis() >>> lua = """ ... local value = redis.call('GET', KEYS[1]) ... value = tonumber(value) ... return value * ARGV[1]""" >>> multiply = r.register_script(lua) `multiply` is now a Script instance that is invoked by calling it like a function. Script instances accept the following optional arguments: * keys: A list of key names that the script will access. This becomes the KEYS list in LUA. * args: A list of argument values. This becomes the ARGV list in LUA. * client: A redis-py Client or Pipeline instance that will invoke the script. If client isn't specified, the client that intiially created the Script instance (the one that `register_script` was invoked from) will be used. Continuing the example from above: >>> r.set('foo', 2) >>> multiply(keys=['foo'], args=[5]) 10 The value of key 'foo' is set to 2. When multiply is invoked, the 'foo' key is passed to the script along with the multiplier value of 5. LUA executes the script and returns the result, 10. 
Script instances can be executed using a different client instance, even one that points to a completely different Redis server. >>> r2 = redis.StrictRedis('redis2.example.com') >>> r2.set('foo', 3) >>> multiply(keys=['foo'], args=[5], client=r2) 15 The Script object ensures that the LUA script is loaded into Redis's script cache. In the event of a NOSCRIPT error, it will load the script and retry executing it. Script objects can also be used in pipelines. The pipeline instance should be passed as the client argument when calling the script. Care is taken to ensure that the script is registered in Redis's script cache just prior to pipeline execution. >>> pipe = r.pipeline() >>> pipe.set('foo', 5) >>> multiply(keys=['foo'], args=[5], client=pipe) >>> pipe.execute() [True, 25] Author ------ redis-py is developed and maintained by Andy McCurdy (sedrik@gmail.com). It can be found here: http://github.com/andymccurdy/redis-py Special thanks to: * Ludovico Magnocavallo, author of the original Python Redis client, from which some of the socket code is still used. * Alexander Solovyov for ideas on the generic response callback system. * Paul Hubbard for initial packaging support. redis-2.7.2/redis/0000755000076500000240000000000012051512742014013 5ustar andystaff00000000000000redis-2.7.2/redis/__init__.py0000644000076500000240000000127112051512321016116 0ustar andystaff00000000000000from redis.client import Redis, StrictRedis from redis.connection import ( ConnectionPool, Connection, UnixDomainSocketConnection ) from redis.utils import from_url from redis.exceptions import ( AuthenticationError, ConnectionError, DataError, InvalidResponse, PubSubError, RedisError, ResponseError, WatchError, ) __version__ = '2.7.2' VERSION = tuple(map(int, __version__.split('.'))) __all__ = [ 'Redis', 'StrictRedis', 'ConnectionPool', 'Connection', 'UnixDomainSocketConnection', 'RedisError', 'ConnectionError', 'ResponseError', 'AuthenticationError', 'InvalidResponse', 'DataError', 'PubSubError', 'WatchError', 'from_url', ] redis-2.7.2/redis/_compat.py0000644000076500000240000000252312010053446016005 0ustar andystaff00000000000000"""Internal module for Python 2 backwards compatibility.""" import sys if sys.version_info[0] < 3: from urlparse import urlparse from itertools import imap, izip from string import letters as ascii_letters try: from cStringIO import StringIO as BytesIO except ImportError: from StringIO import StringIO as BytesIO iteritems = lambda x: x.iteritems() dictkeys = lambda x: x.keys() dictvalues = lambda x: x.values() nativestr = lambda x: \ x if isinstance(x, str) else x.encode('utf-8', 'replace') u = lambda x: x.decode() b = lambda x: x next = lambda x: x.next() byte_to_chr = lambda x: x unichr = unichr xrange = xrange basestring = basestring unicode = unicode bytes = str long = long else: from urllib.parse import urlparse from io import BytesIO from string import ascii_letters iteritems = lambda x: x.items() dictkeys = lambda x: list(x.keys()) dictvalues = lambda x: list(x.values()) byte_to_chr = lambda x: chr(x) nativestr = lambda x: \ x if isinstance(x, str) else x.decode('utf-8', 'replace') u = lambda x: x b = lambda x: x.encode('iso-8859-1') next = next unichr = chr imap = map izip = zip xrange = range basestring = str unicode = str bytes = bytes long = int redis-2.7.2/redis/client.py0000644000076500000240000021543312051511647015656 0ustar andystaff00000000000000from __future__ import with_statement from itertools import chain, starmap import datetime import sys import warnings import time as 
mod_time from redis._compat import (b, izip, imap, iteritems, dictkeys, dictvalues, basestring, long, nativestr, urlparse) from redis.connection import ConnectionPool, UnixDomainSocketConnection from redis.exceptions import ( ConnectionError, DataError, RedisError, ResponseError, WatchError, NoScriptError, ExecAbortError, ) SYM_EMPTY = b('') def list_or_args(keys, args): # returns a single list combining keys and args try: iter(keys) # a string can be iterated, but indicates # keys wasn't passed as a list if isinstance(keys, basestring): keys = [keys] except TypeError: keys = [keys] if args: keys.extend(args) return keys def timestamp_to_datetime(response): "Converts a unix timestamp to a Python datetime object" if not response: return None try: response = int(response) except ValueError: return None return datetime.datetime.fromtimestamp(response) def string_keys_to_dict(key_string, callback): return dict.fromkeys(key_string.split(), callback) def dict_merge(*dicts): merged = {} [merged.update(d) for d in dicts] return merged def parse_debug_object(response): "Parse the results of Redis's DEBUG OBJECT command into a Python dict" # The 'type' of the object is the first item in the response, but isn't # prefixed with a name response = nativestr(response) response = 'type:' + response response = dict([kv.split(':') for kv in response.split()]) # parse some expected int values from the string response # note: this cmd isn't spec'd so these may not appear in all redis versions int_fields = ('refcount', 'serializedlength', 'lru', 'lru_seconds_idle') for field in int_fields: if field in response: response[field] = int(response[field]) return response def parse_object(response, infotype): "Parse the results of an OBJECT command" if infotype in ('idletime', 'refcount'): return int(response) return response def parse_info(response): "Parse the result of Redis's INFO command into a Python dict" info = {} response = nativestr(response) def get_value(value): if ',' not in value or '=' not in value: return value sub_dict = {} for item in value.split(','): k, v = item.rsplit('=', 1) try: sub_dict[k] = int(v) except ValueError: sub_dict[k] = v return sub_dict for line in response.splitlines(): if line and not line.startswith('#'): key, value = line.split(':') try: if '.' 
in value: info[key] = float(value) else: info[key] = int(value) except ValueError: info[key] = get_value(value) return info def pairs_to_dict(response): "Create a dict given a list of key/value pairs" it = iter(response) return dict(izip(it, it)) def zset_score_pairs(response, **options): """ If ``withscores`` is specified in the options, return the response as a list of (value, score) pairs """ if not response or not options['withscores']: return response score_cast_func = options.get('score_cast_func', float) it = iter(response) return list(izip(it, imap(score_cast_func, it))) def int_or_none(response): if response is None: return None return int(response) def float_or_none(response): if response is None: return None return float(response) def parse_client(response, **options): parse = options['parse'] if parse == 'LIST': clients = [] for c in nativestr(response).splitlines(): clients.append(dict([pair.split('=') for pair in c.split(' ')])) return clients elif parse == 'KILL': return bool(response) def parse_config(response, **options): if options['parse'] == 'GET': response = [nativestr(i) if i is not None else None for i in response] return response and pairs_to_dict(response) or {} return nativestr(response) == 'OK' def parse_script(response, **options): parse = options['parse'] if parse in ('FLUSH', 'KILL'): return response == 'OK' if parse == 'EXISTS': return list(imap(bool, response)) return response class StrictRedis(object): """ Implementation of the Redis protocol. This abstract class provides a Python interface to all Redis commands and an implementation of the Redis protocol. Connection and Pipeline derive from this, implementing how the commands are sent and received to the Redis server """ RESPONSE_CALLBACKS = dict_merge( string_keys_to_dict( 'AUTH DEL EXISTS EXPIRE EXPIREAT HDEL HEXISTS HMSET MOVE MSETNX ' 'PERSIST RENAMENX SISMEMBER SMOVE SETEX SETNX SREM ZREM', bool ), string_keys_to_dict( 'BITCOUNT DECRBY GETBIT HLEN INCRBY LINSERT LLEN LPUSHX RPUSHX ' 'SADD SCARD SDIFFSTORE SETBIT SETRANGE SINTERSTORE STRLEN ' 'SUNIONSTORE ZADD ZCARD ZREMRANGEBYRANK ZREMRANGEBYSCORE', int ), string_keys_to_dict('INCRBYFLOAT HINCRBYFLOAT', float), string_keys_to_dict( # these return OK, or int if redis-server is >=1.3.4 'LPUSH RPUSH', lambda r: isinstance(r, long) and r or nativestr(r) == 'OK' ), string_keys_to_dict('ZSCORE ZINCRBY', float_or_none), string_keys_to_dict( 'FLUSHALL FLUSHDB LSET LTRIM MSET RENAME ' 'SAVE SELECT SET SHUTDOWN SLAVEOF WATCH UNWATCH', lambda r: nativestr(r) == 'OK' ), string_keys_to_dict('BLPOP BRPOP', lambda r: r and tuple(r) or None), string_keys_to_dict( 'SDIFF SINTER SMEMBERS SUNION', lambda r: r and set(r) or set() ), string_keys_to_dict( 'ZRANGE ZRANGEBYSCORE ZREVRANGE ZREVRANGEBYSCORE', zset_score_pairs ), string_keys_to_dict('ZRANK ZREVRANK', int_or_none), { 'BGREWRITEAOF': ( lambda r: r == 'Background rewriting of AOF file started' ), 'BGSAVE': lambda r: r == 'Background saving started', 'BRPOPLPUSH': lambda r: r and r or None, 'CLIENT': parse_client, 'CONFIG': parse_config, 'DEBUG': parse_debug_object, 'HGETALL': lambda r: r and pairs_to_dict(r) or {}, 'INFO': parse_info, 'LASTSAVE': timestamp_to_datetime, 'OBJECT': parse_object, 'PING': lambda r: nativestr(r) == 'PONG', 'RANDOMKEY': lambda r: r and r or None, 'SCRIPT': parse_script, 'TIME': lambda x: (int(x[0]), int(x[1])) } ) @classmethod def from_url(cls, url, db=None, **kwargs): """ Return a Redis client object configured from the given URL. 
For example:: redis://username:password@localhost:6379/0 If ``db`` is None, this method will attempt to extract the database ID from the URL path component. Any additional keyword arguments will be passed along to the Redis class's initializer. """ url = urlparse(url) # We only support redis:// schemes. assert url.scheme == 'redis' or not url.scheme # Extract the database ID from the path component if hasn't been given. if db is None: try: db = int(url.path.replace('/', '')) except (AttributeError, ValueError): db = 0 return cls(host=url.hostname, port=url.port, db=db, password=url.password, **kwargs) def __init__(self, host='localhost', port=6379, db=0, password=None, socket_timeout=None, connection_pool=None, charset='utf-8', errors='strict', decode_responses=False, unix_socket_path=None): if not connection_pool: kwargs = { 'db': db, 'password': password, 'socket_timeout': socket_timeout, 'encoding': charset, 'encoding_errors': errors, 'decode_responses': decode_responses, } # based on input, setup appropriate connection args if unix_socket_path: kwargs.update({ 'path': unix_socket_path, 'connection_class': UnixDomainSocketConnection }) else: kwargs.update({ 'host': host, 'port': port }) connection_pool = ConnectionPool(**kwargs) self.connection_pool = connection_pool self.response_callbacks = self.__class__.RESPONSE_CALLBACKS.copy() def set_response_callback(self, command, callback): "Set a custom Response Callback" self.response_callbacks[command] = callback def pipeline(self, transaction=True, shard_hint=None): """ Return a new pipeline object that can queue multiple commands for later execution. ``transaction`` indicates whether all commands should be executed atomically. Apart from making a group of operations atomic, pipelines are useful for reducing the back-and-forth overhead between the client and server. """ return StrictPipeline( self.connection_pool, self.response_callbacks, transaction, shard_hint) def transaction(self, func, *watches, **kwargs): """ Convenience method for executing the callable `func` as a transaction while watching all keys specified in `watches`. The 'func' callable should expect a single arguement which is a Pipeline object. """ shard_hint = kwargs.pop('shard_hint', None) with self.pipeline(True, shard_hint) as pipe: while 1: try: if watches: pipe.watch(*watches) func(pipe) return pipe.execute() except WatchError: continue def lock(self, name, timeout=None, sleep=0.1): """ Return a new Lock object using key ``name`` that mimics the behavior of threading.Lock. If specified, ``timeout`` indicates a maximum life for the lock. By default, it will remain locked until release() is called. ``sleep`` indicates the amount of time to sleep per loop iteration when the lock is in blocking mode and another client is currently holding the lock. """ return Lock(self, name, timeout=timeout, sleep=sleep) def pubsub(self, shard_hint=None): """ Return a Publish/Subscribe object. With this object, you can subscribe to channels and listen for messages that get published to them. 
""" return PubSub(self.connection_pool, shard_hint) #### COMMAND EXECUTION AND PROTOCOL PARSING #### def execute_command(self, *args, **options): "Execute a command and return a parsed response" pool = self.connection_pool command_name = args[0] connection = pool.get_connection(command_name, **options) try: connection.send_command(*args) return self.parse_response(connection, command_name, **options) except ConnectionError: connection.disconnect() connection.send_command(*args) return self.parse_response(connection, command_name, **options) finally: pool.release(connection) def parse_response(self, connection, command_name, **options): "Parses a response from the Redis server" response = connection.read_response() if command_name in self.response_callbacks: return self.response_callbacks[command_name](response, **options) return response #### SERVER INFORMATION #### def bgrewriteaof(self): "Tell the Redis server to rewrite the AOF file from data in memory." return self.execute_command('BGREWRITEAOF') def bgsave(self): """ Tell the Redis server to save its data to disk. Unlike save(), this method is asynchronous and returns immediately. """ return self.execute_command('BGSAVE') def client_kill(self, address): "Disconnects the client at ``address`` (ip:port)" return self.execute_command('CLIENT', 'KILL', address, parse='KILL') def client_list(self): "Returns a list of currently connected clients" return self.execute_command('CLIENT', 'LIST', parse='LIST') def config_get(self, pattern="*"): "Return a dictionary of configuration based on the ``pattern``" return self.execute_command('CONFIG', 'GET', pattern, parse='GET') def config_set(self, name, value): "Set config item ``name`` with ``value``" return self.execute_command('CONFIG', 'SET', name, value, parse='SET') def dbsize(self): "Returns the number of keys in the current database" return self.execute_command('DBSIZE') def time(self): """ Returns the server time as a 2-item tuple of ints: (seconds since epoch, microseconds into this second). 
""" return self.execute_command('TIME') def debug_object(self, key): "Returns version specific metainformation about a give key" return self.execute_command('DEBUG', 'OBJECT', key) def delete(self, *names): "Delete one or more keys specified by ``names``" return self.execute_command('DEL', *names) __delitem__ = delete def echo(self, value): "Echo the string back from the server" return self.execute_command('ECHO', value) def flushall(self): "Delete all keys in all databases on the current host" return self.execute_command('FLUSHALL') def flushdb(self): "Delete all keys in the current database" return self.execute_command('FLUSHDB') def info(self): "Returns a dictionary containing information about the Redis server" return self.execute_command('INFO') def lastsave(self): """ Return a Python datetime object representing the last time the Redis database was saved to disk """ return self.execute_command('LASTSAVE') def object(self, infotype, key): "Return the encoding, idletime, or refcount about the key" return self.execute_command('OBJECT', infotype, key, infotype=infotype) def ping(self): "Ping the Redis server" return self.execute_command('PING') def save(self): """ Tell the Redis server to save its data to disk, blocking until the save is complete """ return self.execute_command('SAVE') def shutdown(self): "Shutdown the server" try: self.execute_command('SHUTDOWN') except ConnectionError: # a ConnectionError here is expected return raise RedisError("SHUTDOWN seems to have failed.") def slaveof(self, host=None, port=None): """ Set the server to be a replicated slave of the instance identified by the ``host`` and ``port``. If called without arguements, the instance is promoted to a master instead. """ if host is None and port is None: return self.execute_command("SLAVEOF", "NO", "ONE") return self.execute_command("SLAVEOF", host, port) #### BASIC KEY COMMANDS #### def append(self, key, value): """ Appends the string ``value`` to the value at ``key``. If ``key`` doesn't already exist, create it with a value of ``value``. Returns the new length of the value at ``key``. """ return self.execute_command('APPEND', key, value) def getrange(self, key, start, end): """ Returns the substring of the string value stored at ``key``, determined by the offsets ``start`` and ``end`` (both are inclusive) """ return self.execute_command('GETRANGE', key, start, end) def bitcount(self, key, start=None, end=None): """ Returns the count of set bits in the value of ``key``. Optional ``start`` and ``end`` paramaters indicate which bytes to consider """ params = [key] if start and end: params.append(start) params.append(end) elif (start and not end) or (end and not start): raise RedisError("Both start and end must be specified") return self.execute_command('BITCOUNT', *params) def bitop(self, operation, dest, *keys): """ Perform a bitwise operation using ``operation`` between ``keys`` and store the result in ``dest``. """ return self.execute_command('BITOP', operation, dest, *keys) def decr(self, name, amount=1): """ Decrements the value of ``key`` by ``amount``. If no key exists, the value will be initialized as 0 - ``amount`` """ return self.execute_command('DECRBY', name, amount) def exists(self, name): "Returns a boolean indicating whether key ``name`` exists" return self.execute_command('EXISTS', name) __contains__ = exists def expire(self, name, time): """ Set an expire flag on key ``name`` for ``time`` seconds. ``time`` can be represented by an integer or a Python timedelta object. 
""" if isinstance(time, datetime.timedelta): time = time.seconds + time.days * 24 * 3600 return self.execute_command('EXPIRE', name, time) def expireat(self, name, when): """ Set an expire flag on key ``name``. ``when`` can be represented as an integer indicating unix time or a Python datetime object. """ if isinstance(when, datetime.datetime): when = int(mod_time.mktime(when.timetuple())) return self.execute_command('EXPIREAT', name, when) def get(self, name): """ Return the value at key ``name``, or None if the key doesn't exist """ return self.execute_command('GET', name) def __getitem__(self, name): """ Return the value at key ``name``, raises a KeyError if the key doesn't exist. """ value = self.get(name) if value: return value raise KeyError(name) def getbit(self, name, offset): "Returns a boolean indicating the value of ``offset`` in ``name``" return self.execute_command('GETBIT', name, offset) def getset(self, name, value): """ Set the value at key ``name`` to ``value`` if key doesn't exist Return the value at key ``name`` atomically """ return self.execute_command('GETSET', name, value) def incr(self, name, amount=1): """ Increments the value of ``key`` by ``amount``. If no key exists, the value will be initialized as ``amount`` """ return self.execute_command('INCRBY', name, amount) def incrbyfloat(self, name, amount=1.0): """ Increments the value at key ``name`` by floating ``amount``. If no key exists, the value will be initialized as ``amount`` """ return self.execute_command('INCRBYFLOAT', name, amount) def keys(self, pattern='*'): "Returns a list of keys matching ``pattern``" return self.execute_command('KEYS', pattern) def mget(self, keys, *args): """ Returns a list of values ordered identically to ``keys`` """ args = list_or_args(keys, args) return self.execute_command('MGET', *args) def mset(self, mapping): "Sets each key in the ``mapping`` dict to its corresponding value" items = [] for pair in iteritems(mapping): items.extend(pair) return self.execute_command('MSET', *items) def msetnx(self, mapping): """ Sets each key in the ``mapping`` dict to its corresponding value if none of the keys are already set """ items = [] for pair in iteritems(mapping): items.extend(pair) return self.execute_command('MSETNX', *items) def move(self, name, db): "Moves the key ``name`` to a different Redis database ``db``" return self.execute_command('MOVE', name, db) def persist(self, name): "Removes an expiration on ``name``" return self.execute_command('PERSIST', name) def pexpire(self, name, time): """ Set an expire flag on key ``name`` for ``time`` milliseconds. ``time`` can be represented by an integer or a Python timedelta object. """ if isinstance(time, datetime.timedelta): ms = int(time.microseconds / 1000) time = time.seconds + time.days * 24 * 3600 * 1000 + ms return self.execute_command('PEXPIRE', name, time) def pexpireat(self, name, when): """ Set an expire flag on key ``name``. ``when`` can be represented as an integer representing unix time in milliseconds (unix time * 1000) or a Python datetime object. 
""" if isinstance(when, datetime.datetime): ms = int(when.microsecond / 1000) when = int(mod_time.mktime(when.timetuple())) * 1000 + ms return self.execute_command('PEXPIREAT', name, when) def pttl(self, name): "Returns the number of milliseconds until the key ``name`` will expire" return self.execute_command('PTTL', name) def randomkey(self): "Returns the name of a random key" return self.execute_command('RANDOMKEY') def rename(self, src, dst): """ Rename key ``src`` to ``dst`` """ return self.execute_command('RENAME', src, dst) def renamenx(self, src, dst): "Rename key ``src`` to ``dst`` if ``dst`` doesn't already exist" return self.execute_command('RENAMENX', src, dst) def set(self, name, value): "Set the value at key ``name`` to ``value``" return self.execute_command('SET', name, value) __setitem__ = set def setbit(self, name, offset, value): """ Flag the ``offset`` in ``name`` as ``value``. Returns a boolean indicating the previous value of ``offset``. """ value = value and 1 or 0 return self.execute_command('SETBIT', name, offset, value) def setex(self, name, time, value): """ Set the value of key ``name`` to ``value`` that expires in ``time`` seconds. ``time`` can be represented by an integer or a Python timedelta object. """ if isinstance(time, datetime.timedelta): time = time.seconds + time.days * 24 * 3600 return self.execute_command('SETEX', name, time, value) def setnx(self, name, value): "Set the value of key ``name`` to ``value`` if key doesn't exist" return self.execute_command('SETNX', name, value) def setrange(self, name, offset, value): """ Overwrite bytes in the value of ``name`` starting at ``offset`` with ``value``. If ``offset`` plus the length of ``value`` exceeds the length of the original value, the new value will be larger than before. If ``offset`` exceeds the length of the original value, null bytes will be used to pad between the end of the previous value and the start of what's being injected. Returns the length of the new string. """ return self.execute_command('SETRANGE', name, offset, value) def strlen(self, name): "Return the number of bytes stored in the value of ``name``" return self.execute_command('STRLEN', name) def substr(self, name, start, end=-1): """ Return a substring of the string at key ``name``. ``start`` and ``end`` are 0-based integers specifying the portion of the string to return. """ return self.execute_command('SUBSTR', name, start, end) def ttl(self, name): "Returns the number of seconds until the key ``name`` will expire" return self.execute_command('TTL', name) def type(self, name): "Returns the type of key ``name``" return self.execute_command('TYPE', name) def watch(self, *names): """ Watches the values at keys ``names``, or None if the key doesn't exist """ warnings.warn(DeprecationWarning('Call WATCH from a Pipeline object')) def unwatch(self): """ Unwatches the value at key ``name``, or None of the key doesn't exist """ warnings.warn( DeprecationWarning('Call UNWATCH from a Pipeline object')) #### LIST COMMANDS #### def blpop(self, keys, timeout=0): """ LPOP a value off of the first non-empty list named in the ``keys`` list. If none of the lists in ``keys`` has a value to LPOP, then block for ``timeout`` seconds, or until a value gets pushed on to one of the lists. If timeout is 0, then block indefinitely. 
""" if timeout is None: timeout = 0 if isinstance(keys, basestring): keys = [keys] else: keys = list(keys) keys.append(timeout) return self.execute_command('BLPOP', *keys) def brpop(self, keys, timeout=0): """ RPOP a value off of the first non-empty list named in the ``keys`` list. If none of the lists in ``keys`` has a value to LPOP, then block for ``timeout`` seconds, or until a value gets pushed on to one of the lists. If timeout is 0, then block indefinitely. """ if timeout is None: timeout = 0 if isinstance(keys, basestring): keys = [keys] else: keys = list(keys) keys.append(timeout) return self.execute_command('BRPOP', *keys) def brpoplpush(self, src, dst, timeout=0): """ Pop a value off the tail of ``src``, push it on the head of ``dst`` and then return it. This command blocks until a value is in ``src`` or until ``timeout`` seconds elapse, whichever is first. A ``timeout`` value of 0 blocks forever. """ if timeout is None: timeout = 0 return self.execute_command('BRPOPLPUSH', src, dst, timeout) def lindex(self, name, index): """ Return the item from list ``name`` at position ``index`` Negative indexes are supported and will return an item at the end of the list """ return self.execute_command('LINDEX', name, index) def linsert(self, name, where, refvalue, value): """ Insert ``value`` in list ``name`` either immediately before or after [``where``] ``refvalue`` Returns the new length of the list on success or -1 if ``refvalue`` is not in the list. """ return self.execute_command('LINSERT', name, where, refvalue, value) def llen(self, name): "Return the length of the list ``name``" return self.execute_command('LLEN', name) def lpop(self, name): "Remove and return the first item of the list ``name``" return self.execute_command('LPOP', name) def lpush(self, name, *values): "Push ``values`` onto the head of the list ``name``" return self.execute_command('LPUSH', name, *values) def lpushx(self, name, value): "Push ``value`` onto the head of the list ``name`` if ``name`` exists" return self.execute_command('LPUSHX', name, value) def lrange(self, name, start, end): """ Return a slice of the list ``name`` between position ``start`` and ``end`` ``start`` and ``end`` can be negative numbers just like Python slicing notation """ return self.execute_command('LRANGE', name, start, end) def lrem(self, name, count, value): """ Remove the first ``count`` occurrences of elements equal to ``value`` from the list stored at ``name``. The count argument influences the operation in the following ways: count > 0: Remove elements equal to value moving from head to tail. count < 0: Remove elements equal to value moving from tail to head. count = 0: Remove all elements equal to value. """ return self.execute_command('LREM', name, count, value) def lset(self, name, index, value): "Set ``position`` of list ``name`` to ``value``" return self.execute_command('LSET', name, index, value) def ltrim(self, name, start, end): """ Trim the list ``name``, removing all values not within the slice between ``start`` and ``end`` ``start`` and ``end`` can be negative numbers just like Python slicing notation """ return self.execute_command('LTRIM', name, start, end) def rpop(self, name): "Remove and return the last item of the list ``name``" return self.execute_command('RPOP', name) def rpoplpush(self, src, dst): """ RPOP a value off of the ``src`` list and atomically LPUSH it on to the ``dst`` list. Returns the value. 
""" return self.execute_command('RPOPLPUSH', src, dst) def rpush(self, name, *values): "Push ``values`` onto the tail of the list ``name``" return self.execute_command('RPUSH', name, *values) def rpushx(self, name, value): "Push ``value`` onto the tail of the list ``name`` if ``name`` exists" return self.execute_command('RPUSHX', name, value) def sort(self, name, start=None, num=None, by=None, get=None, desc=False, alpha=False, store=None): """ Sort and return the list, set or sorted set at ``name``. ``start`` and ``num`` allow for paging through the sorted data ``by`` allows using an external key to weight and sort the items. Use an "*" to indicate where in the key the item value is located ``get`` allows for returning items from external keys rather than the sorted data itself. Use an "*" to indicate where int he key the item value is located ``desc`` allows for reversing the sort ``alpha`` allows for sorting lexicographically rather than numerically ``store`` allows for storing the result of the sort into the key ``store`` """ if (start is not None and num is None) or \ (num is not None and start is None): raise RedisError("``start`` and ``num`` must both be specified") pieces = [name] if by is not None: pieces.append('BY') pieces.append(by) if start is not None and num is not None: pieces.append('LIMIT') pieces.append(start) pieces.append(num) if get is not None: # If get is a string assume we want to get a single value. # Otherwise assume it's an interable and we want to get multiple # values. We can't just iterate blindly because strings are # iterable. if isinstance(get, basestring): pieces.append('GET') pieces.append(get) else: for g in get: pieces.append('GET') pieces.append(g) if desc: pieces.append('DESC') if alpha: pieces.append('ALPHA') if store is not None: pieces.append('STORE') pieces.append(store) return self.execute_command('SORT', *pieces) #### SET COMMANDS #### def sadd(self, name, *values): "Add ``value(s)`` to set ``name``" return self.execute_command('SADD', name, *values) def scard(self, name): "Return the number of elements in set ``name``" return self.execute_command('SCARD', name) def sdiff(self, keys, *args): "Return the difference of sets specified by ``keys``" args = list_or_args(keys, args) return self.execute_command('SDIFF', *args) def sdiffstore(self, dest, keys, *args): """ Store the difference of sets specified by ``keys`` into a new set named ``dest``. Returns the number of keys in the new set. """ args = list_or_args(keys, args) return self.execute_command('SDIFFSTORE', dest, *args) def sinter(self, keys, *args): "Return the intersection of sets specified by ``keys``" args = list_or_args(keys, args) return self.execute_command('SINTER', *args) def sinterstore(self, dest, keys, *args): """ Store the intersection of sets specified by ``keys`` into a new set named ``dest``. Returns the number of keys in the new set. 
""" args = list_or_args(keys, args) return self.execute_command('SINTERSTORE', dest, *args) def sismember(self, name, value): "Return a boolean indicating if ``value`` is a member of set ``name``" return self.execute_command('SISMEMBER', name, value) def smembers(self, name): "Return all members of the set ``name``" return self.execute_command('SMEMBERS', name) def smove(self, src, dst, value): "Move ``value`` from set ``src`` to set ``dst`` atomically" return self.execute_command('SMOVE', src, dst, value) def spop(self, name): "Remove and return a random member of set ``name``" return self.execute_command('SPOP', name) def srandmember(self, name, number=None): """ If ``number`` is None, returns a random member of set ``name``. If ``number`` is supplied, returns a list of ``number`` random memebers of set ``name``. Note this is only available when running Redis 2.6+. """ args = number and [number] or [] return self.execute_command('SRANDMEMBER', name, *args) def srem(self, name, *values): "Remove ``values`` from set ``name``" return self.execute_command('SREM', name, *values) def sunion(self, keys, *args): "Return the union of sets specifiued by ``keys``" args = list_or_args(keys, args) return self.execute_command('SUNION', *args) def sunionstore(self, dest, keys, *args): """ Store the union of sets specified by ``keys`` into a new set named ``dest``. Returns the number of keys in the new set. """ args = list_or_args(keys, args) return self.execute_command('SUNIONSTORE', dest, *args) #### SORTED SET COMMANDS #### def zadd(self, name, *args, **kwargs): """ Set any number of score, element-name pairs to the key ``name``. Pairs can be specified in two ways: As *args, in the form of: score1, name1, score2, name2, ... or as **kwargs, in the form of: name1=score1, name2=score2, ... The following example would add four values to the 'my-key' key: redis.zadd('my-key', 1.1, 'name1', 2.2, 'name2', name3=3.3, name4=4.4) """ pieces = [] if args: if len(args) % 2 != 0: raise RedisError("ZADD requires an equal number of " "values and scores") pieces.extend(args) for pair in iteritems(kwargs): pieces.append(pair[1]) pieces.append(pair[0]) return self.execute_command('ZADD', name, *pieces) def zcard(self, name): "Return the number of elements in the sorted set ``name``" return self.execute_command('ZCARD', name) def zcount(self, name, min, max): return self.execute_command('ZCOUNT', name, min, max) def zincrby(self, name, value, amount=1): "Increment the score of ``value`` in sorted set ``name`` by ``amount``" return self.execute_command('ZINCRBY', name, amount, value) def zinterstore(self, dest, keys, aggregate=None): """ Intersect multiple sorted sets specified by ``keys`` into a new sorted set, ``dest``. Scores in the destination will be aggregated based on the ``aggregate``, or SUM if none is provided. """ return self._zaggregate('ZINTERSTORE', dest, keys, aggregate) def zrange(self, name, start, end, desc=False, withscores=False, score_cast_func=float): """ Return a range of values from sorted set ``name`` between ``start`` and ``end`` sorted in ascending order. ``start`` and ``end`` can be negative, indicating the end of the range. ``desc`` a boolean indicating whether to sort the results descendingly ``withscores`` indicates to return the scores along with the values. 
The return type is a list of (value, score) pairs ``score_cast_func`` a callable used to cast the score return value """ if desc: return self.zrevrange(name, start, end, withscores, score_cast_func) pieces = ['ZRANGE', name, start, end] if withscores: pieces.append('withscores') options = { 'withscores': withscores, 'score_cast_func': score_cast_func} return self.execute_command(*pieces, **options) def zrangebyscore(self, name, min, max, start=None, num=None, withscores=False, score_cast_func=float): """ Return a range of values from the sorted set ``name`` with scores between ``min`` and ``max``. If ``start`` and ``num`` are specified, then return a slice of the range. ``withscores`` indicates to return the scores along with the values. The return type is a list of (value, score) pairs `score_cast_func`` a callable used to cast the score return value """ if (start is not None and num is None) or \ (num is not None and start is None): raise RedisError("``start`` and ``num`` must both be specified") pieces = ['ZRANGEBYSCORE', name, min, max] if start is not None and num is not None: pieces.extend(['LIMIT', start, num]) if withscores: pieces.append('withscores') options = { 'withscores': withscores, 'score_cast_func': score_cast_func} return self.execute_command(*pieces, **options) def zrank(self, name, value): """ Returns a 0-based value indicating the rank of ``value`` in sorted set ``name`` """ return self.execute_command('ZRANK', name, value) def zrem(self, name, *values): "Remove member ``values`` from sorted set ``name``" return self.execute_command('ZREM', name, *values) def zremrangebyrank(self, name, min, max): """ Remove all elements in the sorted set ``name`` with ranks between ``min`` and ``max``. Values are 0-based, ordered from smallest score to largest. Values can be negative indicating the highest scores. Returns the number of elements removed """ return self.execute_command('ZREMRANGEBYRANK', name, min, max) def zremrangebyscore(self, name, min, max): """ Remove all elements in the sorted set ``name`` with scores between ``min`` and ``max``. Returns the number of elements removed. """ return self.execute_command('ZREMRANGEBYSCORE', name, min, max) def zrevrange(self, name, start, num, withscores=False, score_cast_func=float): """ Return a range of values from sorted set ``name`` between ``start`` and ``num`` sorted in descending order. ``start`` and ``num`` can be negative, indicating the end of the range. ``withscores`` indicates to return the scores along with the values The return type is a list of (value, score) pairs ``score_cast_func`` a callable used to cast the score return value """ pieces = ['ZREVRANGE', name, start, num] if withscores: pieces.append('withscores') options = { 'withscores': withscores, 'score_cast_func': score_cast_func} return self.execute_command(*pieces, **options) def zrevrangebyscore(self, name, max, min, start=None, num=None, withscores=False, score_cast_func=float): """ Return a range of values from the sorted set ``name`` with scores between ``min`` and ``max`` in descending order. If ``start`` and ``num`` are specified, then return a slice of the range. ``withscores`` indicates to return the scores along with the values. 
The return type is a list of (value, score) pairs ``score_cast_func`` a callable used to cast the score return value """ if (start is not None and num is None) or \ (num is not None and start is None): raise RedisError("``start`` and ``num`` must both be specified") pieces = ['ZREVRANGEBYSCORE', name, max, min] if start is not None and num is not None: pieces.extend(['LIMIT', start, num]) if withscores: pieces.append('withscores') options = { 'withscores': withscores, 'score_cast_func': score_cast_func} return self.execute_command(*pieces, **options) def zrevrank(self, name, value): """ Returns a 0-based value indicating the descending rank of ``value`` in sorted set ``name`` """ return self.execute_command('ZREVRANK', name, value) def zscore(self, name, value): "Return the score of element ``value`` in sorted set ``name``" return self.execute_command('ZSCORE', name, value) def zunionstore(self, dest, keys, aggregate=None): """ Union multiple sorted sets specified by ``keys`` into a new sorted set, ``dest``. Scores in the destination will be aggregated based on the ``aggregate``, or SUM if none is provided. """ return self._zaggregate('ZUNIONSTORE', dest, keys, aggregate) def _zaggregate(self, command, dest, keys, aggregate=None): pieces = [command, dest, len(keys)] if isinstance(keys, dict): keys, weights = dictkeys(keys), dictvalues(keys) else: weights = None pieces.extend(keys) if weights: pieces.append('WEIGHTS') pieces.extend(weights) if aggregate: pieces.append('AGGREGATE') pieces.append(aggregate) return self.execute_command(*pieces) #### HASH COMMANDS #### def hdel(self, name, *keys): "Delete ``keys`` from hash ``name``" return self.execute_command('HDEL', name, *keys) def hexists(self, name, key): "Returns a boolean indicating if ``key`` exists within hash ``name``" return self.execute_command('HEXISTS', name, key) def hget(self, name, key): "Return the value of ``key`` within the hash ``name``" return self.execute_command('HGET', name, key) def hgetall(self, name): "Return a Python dict of the hash's name/value pairs" return self.execute_command('HGETALL', name) def hincrby(self, name, key, amount=1): "Increment the value of ``key`` in hash ``name`` by ``amount``" return self.execute_command('HINCRBY', name, key, amount) def hincrbyfloat(self, name, key, amount=1.0): """ Increment the value of ``key`` in hash ``name`` by floating ``amount`` """ return self.execute_command('HINCRBYFLOAT', name, key, amount) def hkeys(self, name): "Return the list of keys within hash ``name``" return self.execute_command('HKEYS', name) def hlen(self, name): "Return the number of elements in hash ``name``" return self.execute_command('HLEN', name) def hset(self, name, key, value): """ Set ``key`` to ``value`` within hash ``name`` Returns 1 if HSET created a new field, otherwise 0 """ return self.execute_command('HSET', name, key, value) def hsetnx(self, name, key, value): """ Set ``key`` to ``value`` within hash ``name`` if ``key`` does not exist. Returns 1 if HSETNX created a field, otherwise 0. 
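# Illustrative sketch (not part of the library source): StrictRedis.zadd takes
# score1, name1, score2, name2 pairs (the legacy ``Redis`` class defined later
# in this module reverses that order). Scores come back as floats unless a
# different ``score_cast_func`` is supplied. Assumes a local Redis server.
import redis

r = redis.StrictRedis()
r.zadd('leaderboard', 100, 'alice', 250, 'bob', 175, 'carol')
r.zincrby('leaderboard', 'alice', 80)     # alice: 100 -> 180
print(r.zrangebyscore('leaderboard', 150, 300, withscores=True))
# -> [('carol', 175.0), ('alice', 180.0), ('bob', 250.0)]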
""" return self.execute_command("HSETNX", name, key, value) def hmset(self, name, mapping): """ Sets each key in the ``mapping`` dict to its corresponding value in the hash ``name`` """ if not mapping: raise DataError("'hmset' with 'mapping' of length 0") items = [] for pair in iteritems(mapping): items.extend(pair) return self.execute_command('HMSET', name, *items) def hmget(self, name, keys, *args): "Returns a list of values ordered identically to ``keys``" args = list_or_args(keys, args) return self.execute_command('HMGET', name, *args) def hvals(self, name): "Return the list of values within hash ``name``" return self.execute_command('HVALS', name) def publish(self, channel, message): """ Publish ``message`` on ``channel``. Returns the number of subscribers the message was delivered to. """ return self.execute_command('PUBLISH', channel, message) def eval(self, script, numkeys, *keys_and_args): """ Execute the LUA ``script``, specifying the ``numkeys`` the script will touch and the key names and argument values in ``keys_and_args``. Returns the result of the script. In practice, use the object returned by ``register_script``. This function exists purely for Redis API completion. """ return self.execute_command('EVAL', script, numkeys, *keys_and_args) def evalsha(self, sha, numkeys, *keys_and_args): """ Use the ``sha`` to execute a LUA script already registered via EVAL or SCRIPT LOAD. Specify the ``numkeys`` the script will touch and the key names and argument values in ``keys_and_args``. Returns the result of the script. In practice, use the object returned by ``register_script``. This function exists purely for Redis API completion. """ return self.execute_command('EVALSHA', sha, numkeys, *keys_and_args) def script_exists(self, *args): """ Check if a script exists in the script cache by specifying the SHAs of each script as ``args``. Returns a list of boolean values indicating if if each already script exists in the cache. """ options = {'parse': 'EXISTS'} return self.execute_command('SCRIPT', 'EXISTS', *args, **options) def script_flush(self): "Flush all scripts from the script cache" options = {'parse': 'FLUSH'} return self.execute_command('SCRIPT', 'FLUSH', **options) def script_kill(self): "Kill the currently executing LUA script" options = {'parse': 'KILL'} return self.execute_command('SCRIPT', 'KILL', **options) def script_load(self, script): "Load a LUA ``script`` into the script cache. Returns the SHA." options = {'parse': 'LOAD'} return self.execute_command('SCRIPT', 'LOAD', script, **options) def register_script(self, script): """ Register a LUA ``script`` specifying the ``keys`` it will touch. Returns a Script object that is callable and hides the complexity of deal with scripts, keys, and shas. This is the preferred way to work with LUA scripts. """ return Script(self, script) class Redis(StrictRedis): """ Provides backwards compatibility with older versions of redis-py that changed arguments to some commands to be more Pythonic, sane, or by accident. """ # Overridden callbacks RESPONSE_CALLBACKS = dict_merge( StrictRedis.RESPONSE_CALLBACKS, { 'TTL': lambda r: r != -1 and r or None, 'PTTL': lambda r: r != -1 and r or None, } ) def pipeline(self, transaction=True, shard_hint=None): """ Return a new pipeline object that can queue multiple commands for later execution. ``transaction`` indicates whether all commands should be executed atomically. 
Apart from making a group of operations atomic, pipelines are useful for reducing the back-and-forth overhead between the client and server. """ return Pipeline( self.connection_pool, self.response_callbacks, transaction, shard_hint) def setex(self, name, value, time): """ Set the value of key ``name`` to ``value`` that expires in ``time`` seconds. ``time`` can be represented by an integer or a Python timedelta object. """ if isinstance(time, datetime.timedelta): time = time.seconds + time.days * 24 * 3600 return self.execute_command('SETEX', name, time, value) def lrem(self, name, value, num=0): """ Remove the first ``num`` occurrences of elements equal to ``value`` from the list stored at ``name``. The ``num`` argument influences the operation in the following ways: num > 0: Remove elements equal to value moving from head to tail. num < 0: Remove elements equal to value moving from tail to head. num = 0: Remove all elements equal to value. """ return self.execute_command('LREM', name, num, value) def zadd(self, name, *args, **kwargs): """ NOTE: The order of arguments differs from that of the official ZADD command. For backwards compatability, this method accepts arguments in the form of name1, score1, name2, score2, while the official Redis documents expects score1, name1, score2, name2. If you're looking to use the standard syntax, consider using the StrictRedis class. See the API Reference section of the docs for more information. Set any number of element-name, score pairs to the key ``name``. Pairs can be specified in two ways: As *args, in the form of: name1, score1, name2, score2, ... or as **kwargs, in the form of: name1=score1, name2=score2, ... The following example would add four values to the 'my-key' key: redis.zadd('my-key', 'name1', 1.1, 'name2', 2.2, name3=3.3, name4=4.4) """ pieces = [] if args: if len(args) % 2 != 0: raise RedisError("ZADD requires an equal number of " "values and scores") pieces.extend(reversed(args)) for pair in iteritems(kwargs): pieces.append(pair[1]) pieces.append(pair[0]) return self.execute_command('ZADD', name, *pieces) class PubSub(object): """ PubSub provides publish, subscribe and listen support to Redis channels. After subscribing to one or more channels, the listen() method will block until a message arrives on one of the subscribed channels. That message will be returned and it's safe to start listening again. """ def __init__(self, connection_pool, shard_hint=None): self.connection_pool = connection_pool self.shard_hint = shard_hint self.connection = None self.channels = set() self.patterns = set() self.subscription_count = 0 self.subscribe_commands = set( ('subscribe', 'psubscribe', 'unsubscribe', 'punsubscribe') ) def __del__(self): try: # if this object went out of scope prior to shutting down # subscriptions, close the connection manually before # returning it to the connection pool if self.connection and (self.channels or self.patterns): self.connection.disconnect() self.reset() except: pass def reset(self): if self.connection: self.connection.disconnect() self.connection_pool.release(self.connection) self.connection = None def close(self): self.reset() def execute_command(self, *args, **kwargs): "Execute a publish/subscribe command" # NOTE: don't parse the response in this function. 
it could pull a # legitmate message off the stack if the connection is already # subscribed to one or more channels if self.connection is None: self.connection = self.connection_pool.get_connection( 'pubsub', self.shard_hint ) connection = self.connection try: connection.send_command(*args) except ConnectionError: connection.disconnect() # Connect manually here. If the Redis server is down, this will # fail and raise a ConnectionError as desired. connection.connect() # resubscribe to all channels and patterns before # resending the current command for channel in self.channels: self.subscribe(channel) for pattern in self.patterns: self.psubscribe(pattern) connection.send_command(*args) def parse_response(self): "Parse the response from a publish/subscribe command" response = self.connection.read_response() if nativestr(response[0]) in self.subscribe_commands: self.subscription_count = response[2] # if we've just unsubscribed from the remaining channels, # release the connection back to the pool if not self.subscription_count: self.reset() return response def psubscribe(self, patterns): "Subscribe to all channels matching any pattern in ``patterns``" if isinstance(patterns, basestring): patterns = [patterns] for pattern in patterns: self.patterns.add(pattern) return self.execute_command('PSUBSCRIBE', *patterns) def punsubscribe(self, patterns=[]): """ Unsubscribe from any channel matching any pattern in ``patterns``. If empty, unsubscribe from all channels. """ if isinstance(patterns, basestring): patterns = [patterns] for pattern in patterns: try: self.patterns.remove(pattern) except KeyError: pass return self.execute_command('PUNSUBSCRIBE', *patterns) def subscribe(self, channels): "Subscribe to ``channels``, waiting for messages to be published" if isinstance(channels, basestring): channels = [channels] for channel in channels: self.channels.add(channel) return self.execute_command('SUBSCRIBE', *channels) def unsubscribe(self, channels=[]): """ Unsubscribe from ``channels``. If empty, unsubscribe from all channels """ if isinstance(channels, basestring): channels = [channels] for channel in channels: try: self.channels.remove(channel) except KeyError: pass return self.execute_command('UNSUBSCRIBE', *channels) def listen(self): "Listen for messages on channels this client has been subscribed to" while self.subscription_count or self.channels or self.patterns: r = self.parse_response() msg_type = nativestr(r[0]) if msg_type == 'pmessage': msg = { 'type': msg_type, 'pattern': nativestr(r[1]), 'channel': nativestr(r[2]), 'data': r[3] } else: msg = { 'type': msg_type, 'pattern': None, 'channel': nativestr(r[1]), 'data': r[2] } yield msg class BasePipeline(object): """ Pipelines provide a way to transmit multiple commands to the Redis server in one transmission. This is convenient for batch processing, such as saving all the values in a list to Redis. All commands executed within a pipeline are wrapped with MULTI and EXEC calls. This guarantees all commands executed in the pipeline will be executed atomically. Any command raising an exception does *not* halt the execution of subsequent commands in the pipeline. Instead, the exception is caught and its instance is placed into the response list returned by execute(). Code iterating over the response list should be able to deal with an instance of an exception as a potential value. In general, these will be ResponseError exceptions, such as those raised when issuing a command on a key of a different datatype. 
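# Illustrative sketch (not part of the library source): consuming messages with
# the PubSub class above. listen() also yields subscribe/unsubscribe
# confirmations, so filter on the 'message' type. The loop blocks until some
# other client publishes, e.g. r.publish('notifications', 'hello') elsewhere.
import redis

r = redis.StrictRedis()
p = r.pubsub()
p.subscribe('notifications')
for message in p.listen():
    if message['type'] == 'message':
        print('%s -> %s' % (message['channel'], message['data']))
        break
p.close()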
""" UNWATCH_COMMANDS = set(('DISCARD', 'EXEC', 'UNWATCH')) def __init__(self, connection_pool, response_callbacks, transaction, shard_hint): self.connection_pool = connection_pool self.connection = None self.response_callbacks = response_callbacks self.transaction = transaction self.shard_hint = shard_hint self.watching = False self.reset() def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): self.reset() def __del__(self): try: self.reset() except: pass def reset(self): self.command_stack = [] self.scripts = set() # make sure to reset the connection state in the event that we were # watching something if self.watching and self.connection: try: # call this manually since our unwatch or # immediate_execute_command methods can call reset() self.connection.send_command('UNWATCH') self.connection.read_response() except ConnectionError: # disconnect will also remove any previous WATCHes self.connection.disconnect() # clean up the other instance attributes self.watching = False self.explicit_transaction = False # we can safely return the connection to the pool here since we're # sure we're no longer WATCHing anything if self.connection: self.connection_pool.release(self.connection) self.connection = None def multi(self): """ Start a transactional block of the pipeline after WATCH commands are issued. End the transactional block with `execute`. """ if self.explicit_transaction: raise RedisError('Cannot issue nested calls to MULTI') if self.command_stack: raise RedisError('Commands without an initial WATCH have already ' 'been issued') self.explicit_transaction = True def execute_command(self, *args, **kwargs): if (self.watching or args[0] == 'WATCH') and \ not self.explicit_transaction: return self.immediate_execute_command(*args, **kwargs) return self.pipeline_execute_command(*args, **kwargs) def immediate_execute_command(self, *args, **options): """ Execute a command immediately, but don't auto-retry on a ConnectionError if we're already WATCHing a variable. Used when issuing WATCH or subsequent commands retrieving their values but before MULTI is called. """ command_name = args[0] conn = self.connection # if this is the first call, we need a connection if not conn: conn = self.connection_pool.get_connection(command_name, self.shard_hint) self.connection = conn try: conn.send_command(*args) return self.parse_response(conn, command_name, **options) except ConnectionError: conn.disconnect() # if we're not already watching, we can safely retry the command # assuming it was a connection timeout if not self.watching: conn.send_command(*args) return self.parse_response(conn, command_name, **options) self.reset() raise def pipeline_execute_command(self, *args, **options): """ Stage a command to be executed when execute() is next called Returns the current Pipeline object back so commands can be chained together, such as: pipe = pipe.set('foo', 'bar').incr('baz').decr('bang') At some other point, you can then run: pipe.execute(), which will execute all commands queued in the pipe. 
""" self.command_stack.append((args, options)) return self def _execute_transaction(self, connection, commands, raise_on_error): cmds = chain([(('MULTI', ), {})], commands, [(('EXEC', ), {})]) all_cmds = SYM_EMPTY.join( starmap(connection.pack_command, [args for args, options in cmds])) connection.send_packed_command(all_cmds) # parse off the response for MULTI self.parse_response(connection, '_') # and all the other commands errors = [] for i, _ in enumerate(commands): try: self.parse_response(connection, '_') except ResponseError: errors.append((i, sys.exc_info()[1])) # parse the EXEC. try: response = self.parse_response(connection, '_') except ExecAbortError: self.immediate_execute_command('DISCARD') if errors: raise errors[0][1] raise sys.exc_info()[1] if response is None: raise WatchError("Watched variable changed.") # put any parse errors into the response for i, e in errors: response.insert(i, e) if len(response) != len(commands): raise ResponseError("Wrong number of response items from " "pipeline execution") # find any errors in the response and raise if necessary if raise_on_error: self.raise_first_error(response) # We have to run response callbacks manually data = [] for r, cmd in izip(response, commands): if not isinstance(r, Exception): args, options = cmd command_name = args[0] if command_name in self.response_callbacks: r = self.response_callbacks[command_name](r, **options) data.append(r) return data def _execute_pipeline(self, connection, commands, raise_on_error): # build up all commands into a single request to increase network perf all_cmds = SYM_EMPTY.join( starmap(connection.pack_command, [args for args, options in commands])) connection.send_packed_command(all_cmds) response = [self.parse_response(connection, args[0], **options) for args, options in commands] if raise_on_error: self.raise_first_error(response) return response def raise_first_error(self, response): for r in response: if isinstance(r, ResponseError): raise r def parse_response(self, connection, command_name, **options): result = StrictRedis.parse_response( self, connection, command_name, **options) if command_name in self.UNWATCH_COMMANDS: self.watching = False elif command_name == 'WATCH': self.watching = True return result def load_scripts(self): # make sure all scripts that are about to be run on this pipeline exist scripts = list(self.scripts) immediate = self.immediate_execute_command shas = [s.sha for s in scripts] exists = immediate('SCRIPT', 'EXISTS', *shas, **{'parse': 'EXISTS'}) if not all(exists): for s, exist in izip(scripts, exists): if not exist: immediate('SCRIPT', 'LOAD', s.script, **{'parse': 'LOAD'}) def execute(self, raise_on_error=True): "Execute all the commands in the current pipeline" if self.scripts: self.load_scripts() stack = self.command_stack if self.transaction or self.explicit_transaction: execute = self._execute_transaction else: execute = self._execute_pipeline conn = self.connection if not conn: conn = self.connection_pool.get_connection('MULTI', self.shard_hint) # assign to self.connection so reset() releases the connection # back to the pool after we're done self.connection = conn try: return execute(conn, stack, raise_on_error) except ConnectionError: conn.disconnect() # if we were watching a variable, the watch is no longer valid # since this connection has died. raise a WatchError, which # indicates the user should retry his transaction. 
If this is more # than a temporary failure, the WATCH that the user next issue # will fail, propegating the real ConnectionError if self.watching: raise WatchError("A ConnectionError occured on while watching " "one or more keys") # otherwise, it's safe to retry since the transaction isn't # predicated on any state return execute(conn, stack, raise_on_error) finally: self.reset() def watch(self, *names): "Watches the values at keys ``names``" if self.explicit_transaction: raise RedisError('Cannot issue a WATCH after a MULTI') return self.execute_command('WATCH', *names) def unwatch(self): "Unwatches all previously specified keys" return self.watching and self.execute_command('UNWATCH') or True def script_load_for_pipeline(self, script): "Make sure scripts are loaded prior to pipeline execution" self.scripts.add(script) class StrictPipeline(BasePipeline, StrictRedis): "Pipeline for the StrictRedis class" pass class Pipeline(BasePipeline, Redis): "Pipeline for the Redis class" pass class Script(object): "An executable LUA script object returned by ``register_script``" def __init__(self, registered_client, script): self.registered_client = registered_client self.script = script self.sha = registered_client.script_load(script) def __call__(self, keys=[], args=[], client=None): "Execute the script, passing any required ``args``" client = client or self.registered_client args = tuple(keys) + tuple(args) # make sure the Redis server knows about the script if isinstance(client, BasePipeline): # make sure this script is good to go on pipeline client.script_load_for_pipeline(self) try: return client.evalsha(self.sha, len(keys), *args) except NoScriptError: # Maybe the client is pointed to a differnet server than the client # that created this instance? self.sha = client.script_load(self.script) return client.evalsha(self.sha, len(keys), *args) class LockError(RedisError): "Errors thrown from the Lock" pass class Lock(object): """ A shared, distributed Lock. Using Redis for locking allows the Lock to be shared across processes and/or machines. It's left to the user to resolve deadlock issues and make sure multiple clients play nicely together. """ LOCK_FOREVER = float(2 ** 31 + 1) # 1 past max unix time def __init__(self, redis, name, timeout=None, sleep=0.1): """ Create a new Lock instnace named ``name`` using the Redis client supplied by ``redis``. ``timeout`` indicates a maximum life for the lock. By default, it will remain locked until release() is called. ``sleep`` indicates the amount of time to sleep per loop iteration when the lock is in blocking mode and another client is currently holding the lock. Note: If using ``timeout``, you should make sure all the hosts that are running clients have their time synchronized with a network time service like ntp. """ self.redis = redis self.name = name self.acquired_until = None self.timeout = timeout self.sleep = sleep if self.timeout and self.sleep > self.timeout: raise LockError("'sleep' must be less than 'timeout'") def __enter__(self): return self.acquire() def __exit__(self, exc_type, exc_value, traceback): self.release() def acquire(self, blocking=True): """ Use Redis to hold a shared, distributed lock named ``name``. Returns True once the lock is acquired. If ``blocking`` is False, always return immediately. If the lock was acquired, return True, otherwise return False. 
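# Illustrative sketch (not part of the library source): the Lock class above
# used as a context manager so release() runs even if the protected block
# raises. ``timeout`` bounds how long the lock may be held before it expires.
# Assumes a local Redis server; the lock name is illustrative.
import redis
from redis.client import Lock

r = redis.StrictRedis()
with Lock(r, 'report-generation', timeout=60, sleep=0.1):
    pass  # work that must not run concurrently goes here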
""" sleep = self.sleep timeout = self.timeout while 1: unixtime = int(mod_time.time()) if timeout: timeout_at = unixtime + timeout else: timeout_at = Lock.LOCK_FOREVER timeout_at = float(timeout_at) if self.redis.setnx(self.name, timeout_at): self.acquired_until = timeout_at return True # We want blocking, but didn't acquire the lock # check to see if the current lock is expired existing = float(self.redis.get(self.name) or 1) if existing < unixtime: # the previous lock is expired, attempt to overwrite it existing = float(self.redis.getset(self.name, timeout_at) or 1) if existing < unixtime: # we successfully acquired the lock self.acquired_until = timeout_at return True if not blocking: return False mod_time.sleep(sleep) def release(self): "Releases the already acquired lock" if self.acquired_until is None: raise ValueError("Cannot release an unlocked lock") existing = float(self.redis.get(self.name) or 1) # if the lock time is in the future, delete the lock if existing >= self.acquired_until: self.redis.delete(self.name) self.acquired_until = None redis-2.7.2/redis/connection.py0000644000076500000240000003362512051511442016531 0ustar andystaff00000000000000from itertools import chain import os import socket import sys from redis._compat import (b, xrange, imap, byte_to_chr, unicode, bytes, long, BytesIO, nativestr, basestring) from redis.exceptions import ( RedisError, ConnectionError, ResponseError, InvalidResponse, AuthenticationError, NoScriptError, ExecAbortError, ) try: import hiredis hiredis_available = True except ImportError: hiredis_available = False SYM_STAR = b('*') SYM_DOLLAR = b('$') SYM_CRLF = b('\r\n') SYM_LF = b('\n') class PythonParser(object): "Plain Python parsing class" MAX_READ_LENGTH = 1000000 encoding = None EXCEPTION_CLASSES = { 'ERR': ResponseError, 'NOSCRIPT': NoScriptError, 'EXECABORT': ExecAbortError, } def __init__(self): self._fp = None def __del__(self): try: self.on_disconnect() except: pass def on_connect(self, connection): "Called when the socket connects" self._fp = connection._sock.makefile('rb') if connection.decode_responses: self.encoding = connection.encoding def on_disconnect(self): "Called when the socket disconnects" if self._fp is not None: self._fp.close() self._fp = None def read(self, length=None): """ Read a line from the socket is no length is specified, otherwise read ``length`` bytes. Always strip away the newlines. """ try: if length is not None: bytes_left = length + 2 # read the line ending if length > self.MAX_READ_LENGTH: # apparently reading more than 1MB or so from a windows # socket can cause MemoryErrors. 
See: # https://github.com/andymccurdy/redis-py/issues/205 # read smaller chunks at a time to work around this try: buf = BytesIO() while bytes_left > 0: read_len = min(bytes_left, self.MAX_READ_LENGTH) buf.write(self._fp.read(read_len)) bytes_left -= read_len buf.seek(0) return buf.read(length) finally: buf.close() return self._fp.read(bytes_left)[:-2] # no length, read a full line return self._fp.readline()[:-2] except (socket.error, socket.timeout): e = sys.exc_info()[1] raise ConnectionError("Error while reading from socket: %s" % (e.args,)) def parse_error(self, response): "Parse an error response" error_code = response.split(' ')[0] if error_code in self.EXCEPTION_CLASSES: response = response[len(error_code) + 1:] return self.EXCEPTION_CLASSES[error_code](response) return ResponseError(response) def read_response(self): response = self.read() if not response: raise ConnectionError("Socket closed on remote end") byte, response = byte_to_chr(response[0]), response[1:] if byte not in ('-', '+', ':', '$', '*'): raise InvalidResponse("Protocol Error") # server returned an error if byte == '-': response = nativestr(response) if response.startswith('LOADING '): # if we're loading the dataset into memory, kill the socket # so we re-initialize (and re-SELECT) next time. raise ConnectionError("Redis is loading data into memory") # *return*, not raise the exception class. if it is meant to be # raised, it will be at a higher level. return self.parse_error(response) # single value elif byte == '+': pass # int value elif byte == ':': response = long(response) # bulk response elif byte == '$': length = int(response) if length == -1: return None response = self.read(length) # multi-bulk response elif byte == '*': length = int(response) if length == -1: return None response = [self.read_response() for i in xrange(length)] if isinstance(response, bytes) and self.encoding: response = response.decode(self.encoding) return response class HiredisParser(object): "Parser class for connections using Hiredis" def __init__(self): if not hiredis_available: raise RedisError("Hiredis is not installed") def __del__(self): try: self.on_disconnect() except: pass def on_connect(self, connection): self._sock = connection._sock kwargs = { 'protocolError': InvalidResponse, 'replyError': ResponseError, } if connection.decode_responses: kwargs['encoding'] = connection.encoding self._reader = hiredis.Reader(**kwargs) def on_disconnect(self): self._sock = None self._reader = None def read_response(self): if not self._reader: raise ConnectionError("Socket closed on remote end") response = self._reader.gets() while response is False: try: buffer = self._sock.recv(4096) except (socket.error, socket.timeout): e = sys.exc_info()[1] raise ConnectionError("Error while reading from socket: %s" % (e.args,)) if not buffer: raise ConnectionError("Socket closed on remote end") self._reader.feed(buffer) # proactively, but not conclusively, check if more data is in the # buffer. if the data received doesn't end with \n, there's more. 
if not buffer.endswith(SYM_LF): continue response = self._reader.gets() return response if hiredis_available: DefaultParser = HiredisParser else: DefaultParser = PythonParser class Connection(object): "Manages TCP communication to and from a Redis server" def __init__(self, host='localhost', port=6379, db=0, password=None, socket_timeout=None, encoding='utf-8', encoding_errors='strict', decode_responses=False, parser_class=DefaultParser): self.pid = os.getpid() self.host = host self.port = port self.db = db self.password = password self.socket_timeout = socket_timeout self.encoding = encoding self.encoding_errors = encoding_errors self.decode_responses = decode_responses self._sock = None self._parser = parser_class() def __del__(self): try: self.disconnect() except: pass def connect(self): "Connects to the Redis server if not already connected" if self._sock: return try: sock = self._connect() except socket.error: e = sys.exc_info()[1] raise ConnectionError(self._error_message(e)) self._sock = sock self.on_connect() def _connect(self): "Create a TCP socket connection" sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.settimeout(self.socket_timeout) sock.connect((self.host, self.port)) return sock def _error_message(self, exception): # args for socket.error can either be (errno, "message") # or just "message" if len(exception.args) == 1: return "Error connecting to %s:%s. %s." % \ (self.host, self.port, exception.args[0]) else: return "Error %s connecting %s:%s. %s." % \ (exception.args[0], self.host, self.port, exception.args[1]) def on_connect(self): "Initialize the connection, authenticate and select a database" self._parser.on_connect(self) # if a password is specified, authenticate if self.password: self.send_command('AUTH', self.password) if nativestr(self.read_response()) != 'OK': raise AuthenticationError('Invalid Password') # if a database is specified, switch to it if self.db: self.send_command('SELECT', self.db) if nativestr(self.read_response()) != 'OK': raise ConnectionError('Invalid Database') def disconnect(self): "Disconnects from the Redis server" self._parser.on_disconnect() if self._sock is None: return try: self._sock.close() except socket.error: pass self._sock = None def send_packed_command(self, command): "Send an already packed command to the Redis server" if not self._sock: self.connect() try: self._sock.sendall(command) except socket.error: e = sys.exc_info()[1] self.disconnect() if len(e.args) == 1: _errno, errmsg = 'UNKNOWN', e.args[0] else: _errno, errmsg = e.args raise ConnectionError("Error %s while writing to socket. %s." 
% (_errno, errmsg)) except: self.disconnect() raise def send_command(self, *args): "Pack and send a command to the Redis server" self.send_packed_command(self.pack_command(*args)) def read_response(self): "Read the response from a previously sent command" try: response = self._parser.read_response() except: self.disconnect() raise if isinstance(response, ResponseError): raise response return response def encode(self, value): "Return a bytestring representation of the value" if isinstance(value, bytes): return value if isinstance(value, float): value = repr(value) if not isinstance(value, basestring): value = str(value) if isinstance(value, unicode): value = value.encode(self.encoding, self.encoding_errors) return value def pack_command(self, *args): "Pack a series of arguments into a value Redis command" output = SYM_STAR + b(str(len(args))) + SYM_CRLF for enc_value in imap(self.encode, args): output += SYM_DOLLAR output += b(str(len(enc_value))) output += SYM_CRLF output += enc_value output += SYM_CRLF return output class UnixDomainSocketConnection(Connection): def __init__(self, path='', db=0, password=None, socket_timeout=None, encoding='utf-8', encoding_errors='strict', decode_responses=False, parser_class=DefaultParser): self.pid = os.getpid() self.path = path self.db = db self.password = password self.socket_timeout = socket_timeout self.encoding = encoding self.encoding_errors = encoding_errors self.decode_responses = decode_responses self._sock = None self._parser = parser_class() def _connect(self): "Create a Unix domain socket connection" sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) sock.settimeout(self.socket_timeout) sock.connect(self.path) return sock def _error_message(self, exception): # args for socket.error can either be (errno, "message") # or just "message" if len(exception.args) == 1: return "Error connecting to unix socket: %s. %s." % \ (self.path, exception.args[0]) else: return "Error %s connecting to unix socket: %s. %s." 
% \ (exception.args[0], self.path, exception.args[1]) # TODO: add ability to block waiting on a connection to be released class ConnectionPool(object): "Generic connection pool" def __init__(self, connection_class=Connection, max_connections=None, **connection_kwargs): self.pid = os.getpid() self.connection_class = connection_class self.connection_kwargs = connection_kwargs self.max_connections = max_connections or 2 ** 31 self._created_connections = 0 self._available_connections = [] self._in_use_connections = set() def _checkpid(self): if self.pid != os.getpid(): self.disconnect() self.__init__(self.connection_class, self.max_connections, **self.connection_kwargs) def get_connection(self, command_name, *keys, **options): "Get a connection from the pool" self._checkpid() try: connection = self._available_connections.pop() except IndexError: connection = self.make_connection() self._in_use_connections.add(connection) return connection def make_connection(self): "Create a new connection" if self._created_connections >= self.max_connections: raise ConnectionError("Too many connections") self._created_connections += 1 return self.connection_class(**self.connection_kwargs) def release(self, connection): "Releases the connection back to the pool" self._checkpid() if connection.pid == self.pid: self._in_use_connections.remove(connection) self._available_connections.append(connection) def disconnect(self): "Disconnects all connections in the pool" all_conns = chain(self._available_connections, self._in_use_connections) for connection in all_conns: connection.disconnect() redis-2.7.2/redis/exceptions.py0000644000076500000240000000074712051511440016550 0ustar andystaff00000000000000"Core exceptions raised by the Redis client" class RedisError(Exception): pass class AuthenticationError(RedisError): pass class ConnectionError(RedisError): pass class ResponseError(RedisError): pass class InvalidResponse(RedisError): pass class DataError(RedisError): pass class PubSubError(RedisError): pass class WatchError(RedisError): pass class NoScriptError(ResponseError): pass class ExecAbortError(ResponseError): pass redis-2.7.2/redis/utils.py0000644000076500000240000000045212012541637015530 0ustar andystaff00000000000000from redis.client import Redis def from_url(url, db=None, **kwargs): """Returns an active Redis client generated from the given database URL. Will attempt to extract the database id from the path url fragment, if none is provided. """ return Redis.from_url(url, db, **kwargs) redis-2.7.2/redis.egg-info/0000755000076500000240000000000012051512742015505 5ustar andystaff00000000000000redis-2.7.2/redis.egg-info/dependency_links.txt0000644000076500000240000000000112051512742021553 0ustar andystaff00000000000000 redis-2.7.2/redis.egg-info/PKG-INFO0000644000076500000240000004375712051512742016622 0ustar andystaff00000000000000Metadata-Version: 1.1 Name: redis Version: 2.7.2 Summary: Python client for Redis key-value store Home-page: http://github.com/andymccurdy/redis-py Author: Andy McCurdy Author-email: sedrik@gmail.com License: MIT Description: # redis-py The Python interface to the Redis key-value store. 
[![Build Status](https://secure.travis-ci.org/andymccurdy/redis-py.png?branch=master)](http://travis-ci.org/andymccurdy/redis-py) ## Installation $ sudo pip install redis or alternatively (you really should be using pip though): $ sudo easy_install redis From source: $ sudo python setup.py install ## Getting Started >>> import redis >>> r = redis.StrictRedis(host='localhost', port=6379, db=0) >>> r.set('foo', 'bar') True >>> r.get('foo') 'bar' ## API Reference The official Redis documentation does a great job of explaining each command in detail (http://redis.io/commands). redis-py exposes two client classes that implement these commands. The StrictRedis class attempts to adhere to the official official command syntax. There are a few exceptions: * SELECT: Not implemented. See the explanation in the Thread Safety section below. * DEL: 'del' is a reserved keyword in the Python syntax. Therefore redis-py uses 'delete' instead. * CONFIG GET|SET: These are implemented separately as config_get or config_set. * MULTI/EXEC: These are implemented as part of the Pipeline class. Calling the pipeline method and specifying use_transaction=True will cause the pipeline to be wrapped with the MULTI and EXEC statements when it is executed. See more about Pipelines below. * SUBSCRIBE/LISTEN: Similar to pipelines, PubSub is implemented as a separate class as it places the underlying connection in a state where it can't execute non-pubsub commands. Calling the pubsub method from the Redis client will return a PubSub instance where you can subscribe to channels and listen for messages. You can call PUBLISH from both classes. In addition to the changes above, the Redis class, a subclass of StrictRedis, overrides several other commands to provide backwards compatibility with older versions of redis-py: * LREM: Order of 'num' and 'value' arguments reversed such that 'num' can provide a default value of zero. * ZADD: Redis specifies the 'score' argument before 'value'. These were swapped accidentally when being implemented and not discovered until after people were already using it. The Redis class expects *args in the form of: name1, score1, name2, score2, ... * SETEX: Order of 'time' and 'value' arguments reversed. ## More Detail ### Connection Pools Behind the scenes, redis-py uses a connection pool to manage connections to a Redis server. By default, each Redis instance you create will in turn create its own connection pool. You can override this behavior and use an existing connection pool by passing an already created connection pool instance to the connection_pool argument of the Redis class. You may choose to do this in order to implement client side sharding or have finer grain control of how connections are managed. >>> pool = redis.ConnectionPool(host='localhost', port=6379, db=0) >>> r = redis.Redis(connection_pool=pool) ### Connections ConnectionPools manage a set of Connection instances. redis-py ships with two types of Connections. The default, Connection, is a normal TCP socket based connection. The UnixDomainSocketConnection allows for clients running on the same device as the server to connect via a unix domain socket. To use a UnixDomainSocketConnection connection, simply pass the unix_socket_path argument, which is a string to the unix domain socket file. Additionally, make sure the unixsocket parameter is defined in your redis.conf file. It's commented out by default. >>> r = redis.Redis(unix_socket_path='/tmp/redis.sock') You can create your own Connection subclasses as well. 
### Parsers

Parser classes provide a way to control how responses from the Redis server are parsed. redis-py ships with two parser classes, the PythonParser and the HiredisParser. By default, redis-py will attempt to use the HiredisParser if you have the hiredis module installed and will fall back to the PythonParser otherwise.

Hiredis is a C library maintained by the core Redis team. Pieter Noordhuis was kind enough to create Python bindings. Using Hiredis can provide up to a 10x speed improvement in parsing responses from the Redis server. The performance increase is most noticeable when retrieving many pieces of data, such as from LRANGE or SMEMBERS operations.

Hiredis is available on PyPI, and can be installed via pip or easy_install just like redis-py.

    $ pip install hiredis

or

    $ easy_install hiredis

### Response Callbacks

The client class uses a set of callbacks to cast Redis responses to the appropriate Python type. There are a number of these callbacks defined on the Redis client class in a dictionary called RESPONSE_CALLBACKS.

Custom callbacks can be added on a per-instance basis using the set_response_callback method. This method accepts two arguments: a command name and the callback. Callbacks added in this manner are only valid on the instance the callback is added to. If you want to define or override a callback globally, you should make a subclass of the Redis client and add your callback to its RESPONSE_CALLBACKS class dictionary.

Response callbacks take at least one parameter: the response from the Redis server. Keyword arguments may also be accepted in order to further control how to interpret the response. These keyword arguments are specified during the command's call to execute_command. The ZRANGE implementation demonstrates the use of response callback keyword arguments with its "withscores" argument.
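To make this concrete, here is a sketch (the upper-casing callback is purely illustrative): a per-instance callback post-processes GET responses, and a subclass merges an entry into RESPONSE_CALLBACKS so that every instance of that subclass gets the same behavior:

    >>> r = redis.StrictRedis()
    >>> r.set_response_callback('GET', lambda response: response and response.upper())
    >>> r.set('foo', 'bar')
    True
    >>> r.get('foo')
    'BAR'

    >>> class ShoutingRedis(redis.StrictRedis):
    ...     RESPONSE_CALLBACKS = dict(
    ...         redis.StrictRedis.RESPONSE_CALLBACKS,
    ...         GET=lambda response: response and response.upper()
    ...     )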
## Thread Safety

Redis client instances can safely be shared between threads. Internally, connection instances are only retrieved from the connection pool during command execution, and returned to the pool directly after. Command execution never modifies state on the client instance.

However, there is one caveat: the Redis SELECT command. The SELECT command allows you to switch the database currently in use by the connection. That database remains selected until another is selected or until the connection is closed. This creates an issue in that connections could be returned to the pool that are connected to a different database.

As a result, redis-py does not implement the SELECT command on client instances. If you use multiple Redis databases within the same application, you should create a separate client instance (and possibly a separate connection pool) for each database.

It is not safe to pass PubSub or Pipeline objects between threads.

## Pipelines

Pipelines are a subclass of the base Redis class that provide support for buffering multiple commands to the server in a single request. They can be used to dramatically increase the performance of groups of commands by reducing the number of back-and-forth TCP packets between the client and server.

Pipelines are quite simple to use:

    >>> r = redis.Redis(...)
    >>> r.set('bing', 'baz')
    >>> # Use the pipeline() method to create a pipeline instance
    >>> pipe = r.pipeline()
    >>> # The following SET commands are buffered
    >>> pipe.set('foo', 'bar')
    >>> pipe.get('bing')
    >>> # the EXECUTE call sends all buffered commands to the server, returning
    >>> # a list of responses, one for each command.
    >>> pipe.execute()
    [True, 'baz']

For ease of use, all commands being buffered into the pipeline return the pipeline object itself. Therefore calls can be chained like:

    >>> pipe.set('foo', 'bar').sadd('faz', 'baz').incr('auto_number').execute()
    [True, True, 6]

In addition, pipelines can also ensure the buffered commands are executed atomically as a group. This happens by default. If you want to disable the atomic nature of a pipeline but still want to buffer commands, you can turn off transactions.

    >>> pipe = r.pipeline(transaction=False)

A common issue occurs when requiring atomic transactions but needing to retrieve values from Redis beforehand for use within the transaction. For instance, let's assume that the INCR command didn't exist and we need to build an atomic version of INCR in Python.

The completely naive implementation could GET the value, increment it in Python, and SET the new value back. However, this is not atomic because multiple clients could be doing this at the same time, each getting the same value from GET.

Enter the WATCH command. WATCH provides the ability to monitor one or more keys prior to starting a transaction. If any of those keys change prior to the execution of that transaction, the entire transaction will be canceled and a WatchError will be raised. To implement our own client-side INCR command, we could do something like this:

    >>> with r.pipeline() as pipe:
    ...     while 1:
    ...         try:
    ...             # put a WATCH on the key that holds our sequence value
    ...             pipe.watch('OUR-SEQUENCE-KEY')
    ...             # after WATCHing, the pipeline is put into immediate execution
    ...             # mode until we tell it to start buffering commands again.
    ...             # this allows us to get the current value of our sequence
    ...             current_value = pipe.get('OUR-SEQUENCE-KEY')
    ...             next_value = int(current_value) + 1
    ...             # now we can put the pipeline back into buffered mode with MULTI
    ...             pipe.multi()
    ...             pipe.set('OUR-SEQUENCE-KEY', next_value)
    ...             # and finally, execute the pipeline (the set command)
    ...             pipe.execute()
    ...             # if a WatchError wasn't raised during execution, everything
    ...             # we just did happened atomically.
    ...             break
    ...         except WatchError:
    ...             # another client must have changed 'OUR-SEQUENCE-KEY' between
    ...             # the time we started WATCHing it and the pipeline's execution.
    ...             # our best bet is to just retry.
    ...             continue

Note that, because the Pipeline must bind to a single connection for the duration of a WATCH, care must be taken to ensure that the connection is returned to the connection pool by calling the reset() method. If the Pipeline is used as a context manager (as in the example above) reset() will be called automatically. Of course you can do this the manual way by explicitly calling reset():

    >>> pipe = r.pipeline()
    >>> while 1:
    ...     try:
    ...         pipe.watch('OUR-SEQUENCE-KEY')
    ...         ...
    ...         pipe.execute()
    ...         break
    ...     except WatchError:
    ...         continue
    ...     finally:
    ...         pipe.reset()
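One additional note on error handling: when a pipeline is executed, the first error encountered is raised by default. The pipeline tests bundled with this release also exercise a raise_on_error flag on execute(); passing raise_on_error=False instead embeds the exception instances in the result list at the position of the command that failed. A sketch (key names are illustrative):

    >>> r.set('foo', 'bar')
    True
    >>> with r.pipeline() as pipe:
    ...     pipe.set('a', 1).lpush('foo', 2).set('b', 3)
    ...     results = pipe.execute(raise_on_error=False)
    >>> isinstance(results[1], redis.ResponseError)
    True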
A convenience method named "transaction" exists for handling all the boilerplate of watching keys and retrying on watch errors. It takes a callable that should expect a single parameter, a pipeline object, and any number of keys to be WATCHed. Our client-side INCR command above can be written like this, which is much easier to read:

    >>> def client_side_incr(pipe):
    ...     current_value = pipe.get('OUR-SEQUENCE-KEY')
    ...     next_value = int(current_value) + 1
    ...     pipe.multi()
    ...     pipe.set('OUR-SEQUENCE-KEY', next_value)
    >>>
    >>> r.transaction(client_side_incr, 'OUR-SEQUENCE-KEY')
    [True]

## LUA Scripting

redis-py supports the EVAL, EVALSHA, and SCRIPT commands. However, there are a number of edge cases that make these commands tedious to use in real-world scenarios. Therefore, redis-py exposes a Script object that makes scripting much easier to use.

To create a Script instance, use the `register_script` function on a client instance, passing the LUA code as the first argument. `register_script` returns a Script instance that you can use throughout your code.

The following trivial LUA script accepts two parameters: the name of a key and a multiplier value. The script fetches the value stored in the key, multiplies it with the multiplier value and returns the result.

    >>> r = redis.StrictRedis()
    >>> lua = """
    ... local value = redis.call('GET', KEYS[1])
    ... value = tonumber(value)
    ... return value * ARGV[1]"""
    >>> multiply = r.register_script(lua)

`multiply` is now a Script instance that is invoked by calling it like a function. Script instances accept the following optional arguments:

* keys: A list of key names that the script will access. This becomes the KEYS list in LUA.
* args: A list of argument values. This becomes the ARGV list in LUA.
* client: A redis-py Client or Pipeline instance that will invoke the script. If client isn't specified, the client that initially created the Script instance (the one that `register_script` was invoked from) will be used.

Continuing the example from above:

    >>> r.set('foo', 2)
    >>> multiply(keys=['foo'], args=[5])
    10

The value of key 'foo' is set to 2. When multiply is invoked, the 'foo' key is passed to the script along with the multiplier value of 5. LUA executes the script and returns the result, 10.

Script instances can be executed using a different client instance, even one that points to a completely different Redis server.

    >>> r2 = redis.StrictRedis('redis2.example.com')
    >>> r2.set('foo', 3)
    >>> multiply(keys=['foo'], args=[5], client=r2)
    15

The Script object ensures that the LUA script is loaded into Redis's script cache. In the event of a NOSCRIPT error, it will load the script and retry executing it.

Script objects can also be used in pipelines. The pipeline instance should be passed as the client argument when calling the script. Care is taken to ensure that the script is registered in Redis's script cache just prior to pipeline execution.

    >>> pipe = r.pipeline()
    >>> pipe.set('foo', 5)
    >>> multiply(keys=['foo'], args=[5], client=pipe)
    >>> pipe.execute()
    [True, 25]

Author
------

redis-py is developed and maintained by Andy McCurdy (sedrik@gmail.com). It can be found here: http://github.com/andymccurdy/redis-py

Special thanks to:

* Ludovico Magnocavallo, author of the original Python Redis client, from which some of the socket code is still used.
* Alexander Solovyov for ideas on the generic response callback system.
* Paul Hubbard for initial packaging support.
Keywords: Redis,key-value store Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: Environment :: Console Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: MIT License Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2.5 Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.2 Classifier: Programming Language :: Python :: 3.3 redis-2.7.2/redis.egg-info/SOURCES.txt0000644000076500000240000000063212051512742017372 0ustar andystaff00000000000000CHANGES INSTALL LICENSE MANIFEST.in README.md setup.py redis/__init__.py redis/_compat.py redis/client.py redis/connection.py redis/exceptions.py redis/utils.py redis.egg-info/PKG-INFO redis.egg-info/SOURCES.txt redis.egg-info/dependency_links.txt redis.egg-info/top_level.txt tests/__init__.py tests/connection_pool.py tests/encoding.py tests/lock.py tests/pipeline.py tests/pubsub.py tests/server_commands.pyredis-2.7.2/redis.egg-info/top_level.txt0000644000076500000240000000000612051512742020233 0ustar andystaff00000000000000redis redis-2.7.2/setup.cfg0000644000076500000240000000007312051512742014526 0ustar andystaff00000000000000[egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 redis-2.7.2/setup.py0000644000076500000240000000243412034565730014430 0ustar andystaff00000000000000#!/usr/bin/env python import os from redis import __version__ try: from setuptools import setup except ImportError: from distutils.core import setup f = open(os.path.join(os.path.dirname(__file__), 'README.md')) long_description = f.read() f.close() setup( name='redis', version=__version__, description='Python client for Redis key-value store', long_description=long_description, url='http://github.com/andymccurdy/redis-py', author='Andy McCurdy', author_email='sedrik@gmail.com', maintainer='Andy McCurdy', maintainer_email='sedrik@gmail.com', keywords=['Redis', 'key-value store'], license='MIT', packages=['redis'], test_suite='tests.all_tests', classifiers=[ 'Development Status :: 5 - Production/Stable', 'Environment :: Console', 'Intended Audience :: Developers', 'License :: OSI Approved :: MIT License', 'Operating System :: OS Independent', 'Programming Language :: Python', 'Programming Language :: Python :: 2.5', 'Programming Language :: Python :: 2.6', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.2', 'Programming Language :: Python :: 3.3', ] ) redis-2.7.2/tests/0000755000076500000240000000000012051512742014047 5ustar andystaff00000000000000redis-2.7.2/tests/__init__.py0000644000076500000240000000203412010053446016153 0ustar andystaff00000000000000import unittest from tests.server_commands import ServerCommandsTestCase from tests.connection_pool import ConnectionPoolTestCase from tests.pipeline import PipelineTestCase from tests.lock import LockTestCase from tests.pubsub import PubSubTestCase, PubSubRedisDownTestCase from tests.encoding import (PythonParserEncodingTestCase, HiredisEncodingTestCase) try: import hiredis use_hiredis = True except ImportError: use_hiredis = False def all_tests(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(ServerCommandsTestCase)) suite.addTest(unittest.makeSuite(ConnectionPoolTestCase)) suite.addTest(unittest.makeSuite(PipelineTestCase)) 
suite.addTest(unittest.makeSuite(LockTestCase)) suite.addTest(unittest.makeSuite(PubSubTestCase)) suite.addTest(unittest.makeSuite(PubSubRedisDownTestCase)) suite.addTest(unittest.makeSuite(PythonParserEncodingTestCase)) if use_hiredis: suite.addTest(unittest.makeSuite(HiredisEncodingTestCase)) return suite redis-2.7.2/tests/connection_pool.py0000644000076500000240000000257012010053446017611 0ustar andystaff00000000000000import os import unittest import redis class DummyConnection(object): def __init__(self, **kwargs): self.kwargs = kwargs self.pid = os.getpid() class ConnectionPoolTestCase(unittest.TestCase): def get_pool(self, connection_info=None, max_connections=None): connection_info = connection_info or {'a': 1, 'b': 2, 'c': 3} pool = redis.ConnectionPool( connection_class=DummyConnection, max_connections=max_connections, **connection_info) return pool def test_connection_creation(self): connection_info = {'foo': 'bar', 'biz': 'baz'} pool = self.get_pool(connection_info=connection_info) connection = pool.get_connection('_') self.assertEquals(connection.kwargs, connection_info) def test_multiple_connections(self): pool = self.get_pool() c1 = pool.get_connection('_') c2 = pool.get_connection('_') self.assert_(c1 != c2) def test_max_connections(self): pool = self.get_pool(max_connections=2) c1 = pool.get_connection('_') c2 = pool.get_connection('_') self.assertRaises(redis.ConnectionError, pool.get_connection, '_') def test_release(self): pool = self.get_pool() c1 = pool.get_connection('_') pool.release(c1) c2 = pool.get_connection('_') self.assertEquals(c1, c2) redis-2.7.2/tests/encoding.py0000644000076500000240000000353412010053446016210 0ustar andystaff00000000000000from __future__ import with_statement import unittest from redis._compat import unichr, u, unicode from redis.connection import ConnectionPool, PythonParser, HiredisParser import redis class EncodingTestCase(unittest.TestCase): def setUp(self): self.client = redis.Redis( host='localhost', port=6379, db=9, charset='utf-8') self.client.flushdb() def tearDown(self): self.client.flushdb() def test_simple_encoding(self): unicode_string = unichr(3456) + u('abcd') + unichr(3421) self.client.set('unicode-string', unicode_string) cached_val = self.client.get('unicode-string') self.assertEquals( unicode.__name__, type(cached_val).__name__, 'Cache returned value with type "%s", expected "%s"' % (type(cached_val).__name__, unicode.__name__)) self.assertEqual(unicode_string, cached_val) def test_list_encoding(self): unicode_string = unichr(3456) + u('abcd') + unichr(3421) result = [unicode_string, unicode_string, unicode_string] for i in range(len(result)): self.client.rpush('a', unicode_string) self.assertEquals(self.client.lrange('a', 0, -1), result) class PythonParserEncodingTestCase(EncodingTestCase): def setUp(self): pool = ConnectionPool( host='localhost', port=6379, db=9, encoding='utf-8', decode_responses=True, parser_class=PythonParser) self.client = redis.Redis(connection_pool=pool) self.client.flushdb() class HiredisEncodingTestCase(EncodingTestCase): def setUp(self): pool = ConnectionPool( host='localhost', port=6379, db=9, encoding='utf-8', decode_responses=True, parser_class=HiredisParser) self.client = redis.Redis(connection_pool=pool) self.client.flushdb() redis-2.7.2/tests/lock.py0000644000076500000240000000461612010053446015354 0ustar andystaff00000000000000from __future__ import with_statement import time import unittest from redis.client import Lock, LockError import redis class LockTestCase(unittest.TestCase): def 
setUp(self): self.client = redis.Redis(host='localhost', port=6379, db=9) self.client.flushdb() def tearDown(self): self.client.flushdb() def test_lock(self): lock = self.client.lock('foo') self.assert_(lock.acquire()) self.assertEquals(self.client['foo'], str(Lock.LOCK_FOREVER).encode()) lock.release() self.assertEquals(self.client.get('foo'), None) def test_competing_locks(self): lock1 = self.client.lock('foo') lock2 = self.client.lock('foo') self.assert_(lock1.acquire()) self.assertFalse(lock2.acquire(blocking=False)) lock1.release() self.assert_(lock2.acquire()) self.assertFalse(lock1.acquire(blocking=False)) lock2.release() def test_timeouts(self): lock1 = self.client.lock('foo', timeout=1) lock2 = self.client.lock('foo') self.assert_(lock1.acquire()) self.assertEquals(lock1.acquired_until, float(int(time.time())) + 1) self.assertEquals(lock1.acquired_until, float(self.client['foo'])) self.assertFalse(lock2.acquire(blocking=False)) time.sleep(2) # need to wait up to 2 seconds for lock to timeout self.assert_(lock2.acquire(blocking=False)) lock2.release() def test_non_blocking(self): lock1 = self.client.lock('foo') self.assert_(lock1.acquire(blocking=False)) self.assert_(lock1.acquired_until) lock1.release() self.assert_(lock1.acquired_until is None) def test_context_manager(self): with self.client.lock('foo'): self.assertEquals( self.client['foo'], str(Lock.LOCK_FOREVER).encode()) self.assertEquals(self.client.get('foo'), None) def test_float_timeout(self): lock1 = self.client.lock('foo', timeout=1.5) lock2 = self.client.lock('foo', timeout=1.5) self.assert_(lock1.acquire()) self.assertFalse(lock2.acquire(blocking=False)) lock1.release() def test_high_sleep_raises_error(self): "If sleep is higher than timeout, it should raise an error" self.assertRaises( LockError, self.client.lock, 'foo', timeout=1, sleep=2 ) redis-2.7.2/tests/pipeline.py0000644000076500000240000001372312051511724016233 0ustar andystaff00000000000000from __future__ import with_statement import unittest import redis from redis._compat import b class PipelineTestCase(unittest.TestCase): def setUp(self): self.client = redis.Redis(host='localhost', port=6379, db=9) self.client.flushdb() def tearDown(self): self.client.flushdb() def test_pipeline(self): with self.client.pipeline() as pipe: pipe.set('a', 'a1').get('a').zadd('z', z1=1).zadd('z', z2=4) pipe.zincrby('z', 'z1').zrange('z', 0, 5, withscores=True) self.assertEquals( pipe.execute(), [ True, b('a1'), True, True, 2.0, [(b('z1'), 2.0), (b('z2'), 4)], ] ) def test_pipeline_no_transaction(self): with self.client.pipeline(transaction=False) as pipe: pipe.set('a', 'a1').set('b', 'b1').set('c', 'c1') self.assertEquals(pipe.execute(), [True, True, True]) self.assertEquals(self.client['a'], b('a1')) self.assertEquals(self.client['b'], b('b1')) self.assertEquals(self.client['c'], b('c1')) def test_pipeline_no_transaction_watch(self): self.client.set('a', 0) with self.client.pipeline(transaction=False) as pipe: pipe.watch('a') a = pipe.get('a') pipe.multi() pipe.set('a', int(a) + 1) result = pipe.execute() self.assertEquals(result, [True]) def test_pipeline_no_transaction_watch_failure(self): self.client.set('a', 0) with self.client.pipeline(transaction=False) as pipe: pipe.watch('a') a = pipe.get('a') self.client.set('a', 'bad') pipe.multi() pipe.set('a', int(a) + 1) self.assertRaises(redis.WatchError, pipe.execute) def test_exec_error_in_response(self): # an invalid pipeline command at exec time adds the exception instance # to the list of returned values self.client['c'] 
= 'a' with self.client.pipeline() as pipe: pipe.set('a', 1).set('b', 2).lpush('c', 3).set('d', 4) result = pipe.execute(raise_on_error=False) self.assertEquals(result[0], True) self.assertEquals(self.client['a'], b('1')) self.assertEquals(result[1], True) self.assertEquals(self.client['b'], b('2')) # we can't lpush to a key that's a string value, so this should # be a ResponseError exception self.assert_(isinstance(result[2], redis.ResponseError)) self.assertEquals(self.client['c'], b('a')) self.assertEquals(result[3], True) self.assertEquals(self.client['d'], b('4')) # make sure the pipe was restored to a working state self.assertEquals(pipe.set('z', 'zzz').execute(), [True]) self.assertEquals(self.client['z'], b('zzz')) def test_exec_error_raised(self): self.client['c'] = 'a' with self.client.pipeline() as pipe: pipe.set('a', 1).set('b', 2).lpush('c', 3).set('d', 4) self.assertRaises(redis.ResponseError, pipe.execute) # make sure the pipe was restored to a working state self.assertEquals(pipe.set('z', 'zzz').execute(), [True]) self.assertEquals(self.client['z'], b('zzz')) def test_parse_error_raised(self): with self.client.pipeline() as pipe: # the zrem is invalid because we don't pass any keys to it pipe.set('a', 1).zrem('b').set('b', 2) self.assertRaises(redis.ResponseError, pipe.execute) # make sure the pipe was restored to a working state self.assertEquals(pipe.set('z', 'zzz').execute(), [True]) self.assertEquals(self.client['z'], b('zzz')) def test_watch_succeed(self): self.client.set('a', 1) self.client.set('b', 2) with self.client.pipeline() as pipe: pipe.watch('a', 'b') self.assertEquals(pipe.watching, True) a_value = pipe.get('a') b_value = pipe.get('b') self.assertEquals(a_value, b('1')) self.assertEquals(b_value, b('2')) pipe.multi() pipe.set('c', 3) self.assertEquals(pipe.execute(), [True]) self.assertEquals(pipe.watching, False) def test_watch_failure(self): self.client.set('a', 1) self.client.set('b', 2) with self.client.pipeline() as pipe: pipe.watch('a', 'b') self.client.set('b', 3) pipe.multi() pipe.get('a') self.assertRaises(redis.WatchError, pipe.execute) self.assertEquals(pipe.watching, False) def test_unwatch(self): self.client.set('a', 1) self.client.set('b', 2) with self.client.pipeline() as pipe: pipe.watch('a', 'b') self.client.set('b', 3) pipe.unwatch() self.assertEquals(pipe.watching, False) pipe.get('a') self.assertEquals(pipe.execute(), [b('1')]) def test_transaction_callable(self): self.client.set('a', 1) self.client.set('b', 2) has_run = [] def my_transaction(pipe): a_value = pipe.get('a') self.assert_(a_value in (b('1'), b('2'))) b_value = pipe.get('b') self.assertEquals(b_value, b('2')) # silly run-once code... 
incr's a so WatchError should be raised # forcing this all to run again if not has_run: self.client.incr('a') has_run.append('it has') pipe.multi() pipe.set('c', int(a_value) + int(b_value)) result = self.client.transaction(my_transaction, 'a', 'b') self.assertEquals(result, [True]) self.assertEquals(self.client.get('c'), b('4')) redis-2.7.2/tests/pubsub.py0000644000076500000240000000666212012656272015737 0ustar andystaff00000000000000import unittest from redis._compat import b, next from redis.exceptions import ConnectionError import redis class PubSubTestCase(unittest.TestCase): def setUp(self): self.connection_pool = redis.ConnectionPool() self.client = redis.Redis(connection_pool=self.connection_pool) self.pubsub = self.client.pubsub() def tearDown(self): self.connection_pool.disconnect() def test_channel_subscribe(self): # subscribe doesn't return anything self.assertEquals( self.pubsub.subscribe('foo'), None ) # send a message self.assertEquals(self.client.publish('foo', 'hello foo'), 1) # there should be now 2 messages in the buffer, a subscribe and the # one we just published self.assertEquals( next(self.pubsub.listen()), { 'type': 'subscribe', 'pattern': None, 'channel': 'foo', 'data': 1 } ) self.assertEquals( next(self.pubsub.listen()), { 'type': 'message', 'pattern': None, 'channel': 'foo', 'data': b('hello foo') } ) # unsubscribe self.assertEquals( self.pubsub.unsubscribe('foo'), None ) # unsubscribe message should be in the buffer self.assertEquals( next(self.pubsub.listen()), { 'type': 'unsubscribe', 'pattern': None, 'channel': 'foo', 'data': 0 } ) def test_pattern_subscribe(self): # psubscribe doesn't return anything self.assertEquals( self.pubsub.psubscribe('f*'), None ) # send a message self.assertEquals(self.client.publish('foo', 'hello foo'), 1) # there should be now 2 messages in the buffer, a subscribe and the # one we just published self.assertEquals( next(self.pubsub.listen()), { 'type': 'psubscribe', 'pattern': None, 'channel': 'f*', 'data': 1 } ) self.assertEquals( next(self.pubsub.listen()), { 'type': 'pmessage', 'pattern': 'f*', 'channel': 'foo', 'data': b('hello foo') } ) # unsubscribe self.assertEquals( self.pubsub.punsubscribe('f*'), None ) # unsubscribe message should be in the buffer self.assertEquals( next(self.pubsub.listen()), { 'type': 'punsubscribe', 'pattern': None, 'channel': 'f*', 'data': 0 } ) class PubSubRedisDownTestCase(unittest.TestCase): def setUp(self): self.connection_pool = redis.ConnectionPool(port=6390) self.client = redis.Redis(connection_pool=self.connection_pool) self.pubsub = self.client.pubsub() def tearDown(self): self.connection_pool.disconnect() def test_channel_subscribe(self): got_exception = False try: self.pubsub.subscribe('foo') except ConnectionError: got_exception = True self.assertTrue(got_exception) redis-2.7.2/tests/server_commands.py0000644000076500000240000017764312034471511017631 0ustar andystaff00000000000000from distutils.version import StrictVersion import unittest import datetime import time import binascii from redis._compat import (unichr, u, b, ascii_letters, iteritems, dictkeys, dictvalues) from redis.client import parse_info import redis class ServerCommandsTestCase(unittest.TestCase): def get_client(self, cls=redis.Redis): return cls(host='localhost', port=6379, db=9) def setUp(self): self.client = self.get_client() self.client.flushdb() def tearDown(self): self.client.flushdb() self.client.connection_pool.disconnect() def test_response_callbacks(self): self.assertEquals( self.client.response_callbacks, 
redis.Redis.RESPONSE_CALLBACKS) self.assertNotEquals( id(self.client.response_callbacks), id(redis.Redis.RESPONSE_CALLBACKS)) self.client.set_response_callback('GET', lambda x: 'static') self.client.set('a', 'foo') self.assertEquals(self.client.get('a'), 'static') # GENERAL SERVER COMMANDS def test_dbsize(self): self.client['a'] = 'foo' self.client['b'] = 'bar' self.assertEquals(self.client.dbsize(), 2) def test_get_and_set(self): # get and set can't be tested independently of each other client = redis.Redis(host='localhost', port=6379, db=9) self.assertEquals(client.get('a'), None) byte_string = b('value') integer = 5 unicode_string = unichr(3456) + u('abcd') + unichr(3421) self.assert_(client.set('byte_string', byte_string)) self.assert_(client.set('integer', 5)) self.assert_(client.set('unicode_string', unicode_string)) self.assertEquals(client.get('byte_string'), byte_string) self.assertEquals(client.get('integer'), b(str(integer))) self.assertEquals( client.get('unicode_string').decode('utf-8'), unicode_string) def test_getitem_and_setitem(self): self.client['a'] = 'bar' self.assertEquals(self.client['a'], b('bar')) self.assertRaises(KeyError, self.client.__getitem__, 'b') def test_delete(self): self.assertEquals(self.client.delete('a'), False) self.client['a'] = 'foo' self.assertEquals(self.client.delete('a'), True) def test_delitem(self): self.client['a'] = 'foo' del self.client['a'] self.assertEquals(self.client.get('a'), None) def test_client_list(self): clients = self.client.client_list() self.assert_(isinstance(clients[0], dict)) self.assert_('addr' in clients[0]) def test_config_get(self): data = self.client.config_get() self.assert_('maxmemory' in data) self.assert_(data['maxmemory'].isdigit()) def test_config_set(self): data = self.client.config_get() rdbname = data['dbfilename'] self.assert_(self.client.config_set('dbfilename', 'redis_py_test.rdb')) self.assertEquals( self.client.config_get()['dbfilename'], 'redis_py_test.rdb' ) self.assert_(self.client.config_set('dbfilename', rdbname)) self.assertEquals(self.client.config_get()['dbfilename'], rdbname) def test_debug_object(self): self.client['a'] = 'foo' debug_info = self.client.debug_object('a') self.assert_(len(debug_info) > 0) self.assertEquals(debug_info['refcount'], 1) self.assert_(debug_info['serializedlength'] > 0) self.client.rpush('b', 'a1') debug_info = self.client.debug_object('a') def test_echo(self): self.assertEquals(self.client.echo('foo bar'), b('foo bar')) def test_info(self): self.client['a'] = 'foo' self.client['b'] = 'bar' info = self.client.info() self.assert_(isinstance(info, dict)) self.assertEquals(info['db9']['keys'], 2) def test_lastsave(self): self.assert_(isinstance(self.client.lastsave(), datetime.datetime)) def test_object(self): self.client['a'] = 'foo' self.assert_(isinstance(self.client.object('refcount', 'a'), int)) self.assert_(isinstance(self.client.object('idletime', 'a'), int)) self.assertEquals(self.client.object('encoding', 'a'), b('raw')) def test_ping(self): self.assertEquals(self.client.ping(), True) def test_time(self): version = self.client.info()['redis_version'] if StrictVersion(version) < StrictVersion('2.6.0'): try: raise unittest.SkipTest() except AttributeError: return t = self.client.time() self.assertEquals(len(t), 2) self.assert_(isinstance(t[0], int)) self.assert_(isinstance(t[1], int)) # KEYS def test_append(self): # invalid key type self.client.rpush('a', 'a1') self.assertRaises(redis.ResponseError, self.client.append, 'a', 'a1') del self.client['a'] # real logic 
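# (APPEND returns the total length of the string after the append, which is why the expected values below are 2 and then 4.)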
self.assertEquals(self.client.append('a', 'a1'), 2) self.assertEquals(self.client['a'], b('a1')) self.assert_(self.client.append('a', 'a2'), 4) self.assertEquals(self.client['a'], b('a1a2')) def test_getrange(self): self.client['a'] = 'foo' self.assertEquals(self.client.getrange('a', 0, 0), b('f')) self.assertEquals(self.client.getrange('a', 0, 2), b('foo')) self.assertEquals(self.client.getrange('a', 3, 4), b('')) def test_decr(self): self.assertEquals(self.client.decr('a'), -1) self.assertEquals(self.client['a'], b('-1')) self.assertEquals(self.client.decr('a'), -2) self.assertEquals(self.client['a'], b('-2')) self.assertEquals(self.client.decr('a', amount=5), -7) self.assertEquals(self.client['a'], b('-7')) def test_exists(self): self.assertEquals(self.client.exists('a'), False) self.client['a'] = 'foo' self.assertEquals(self.client.exists('a'), True) def test_expire(self): self.assertEquals(self.client.expire('a', 10), False) self.client['a'] = 'foo' self.assertEquals(self.client.expire('a', 10), True) self.assertEquals(self.client.ttl('a'), 10) self.assertEquals(self.client.persist('a'), True) self.assertEquals(self.client.ttl('a'), None) def test_expireat(self): expire_at = datetime.datetime.now() + datetime.timedelta(minutes=1) self.assertEquals(self.client.expireat('a', expire_at), False) self.client['a'] = 'foo' # expire at in unix time expire_at_seconds = int(time.mktime(expire_at.timetuple())) self.assertEquals(self.client.expireat('a', expire_at_seconds), True) self.assertEquals(self.client.ttl('a'), 60) # expire at given a datetime object self.client['b'] = 'bar' self.assertEquals(self.client.expireat('b', expire_at), True) self.assertEquals(self.client.ttl('b'), 60) def test_pexpire(self): version = self.client.info()['redis_version'] if StrictVersion(version) < StrictVersion('2.6.0'): try: raise unittest.SkipTest() except AttributeError: return self.assertEquals(self.client.pexpire('a', 10000), False) self.client['a'] = 'foo' self.assertEquals(self.client.pexpire('a', 10000), True) self.assert_(self.client.pttl('a') <= 10000) self.assertEquals(self.client.persist('a'), True) self.assertEquals(self.client.pttl('a'), None) def test_pexpireat(self): version = self.client.info()['redis_version'] if StrictVersion(version) < StrictVersion('2.6.0'): try: raise unittest.SkipTest() except AttributeError: return expire_at = datetime.datetime.now() + datetime.timedelta(minutes=1) self.assertEquals(self.client.pexpireat('a', expire_at), False) self.client['a'] = 'foo' # expire at in unix time (milliseconds) expire_at_seconds = int(time.mktime(expire_at.timetuple())) * 1000 self.assertEquals(self.client.pexpireat('a', expire_at_seconds), True) self.assert_(self.client.ttl('a') <= 60) # expire at given a datetime object self.client['b'] = 'bar' self.assertEquals(self.client.pexpireat('b', expire_at), True) self.assert_(self.client.ttl('b') <= 60) def test_get_set_bit(self): self.assertEquals(self.client.getbit('a', 5), False) self.assertEquals(self.client.setbit('a', 5, True), False) self.assertEquals(self.client.getbit('a', 5), True) self.assertEquals(self.client.setbit('a', 4, False), False) self.assertEquals(self.client.getbit('a', 4), False) self.assertEquals(self.client.setbit('a', 4, True), False) self.assertEquals(self.client.setbit('a', 5, True), True) self.assertEquals(self.client.getbit('a', 4), True) self.assertEquals(self.client.getbit('a', 5), True) def test_bitcount(self): version = self.client.info()['redis_version'] if StrictVersion(version) < StrictVersion('2.6.0'): try: 
raise unittest.SkipTest() except AttributeError: return self.client.setbit('a', 5, True) self.assertEquals(self.client.bitcount('a'), 1) self.client.setbit('a', 6, True) self.assertEquals(self.client.bitcount('a'), 2) self.client.setbit('a', 5, False) self.assertEquals(self.client.bitcount('a'), 1) self.client.setbit('a', 9, True) self.client.setbit('a', 17, True) self.client.setbit('a', 25, True) self.client.setbit('a', 33, True) self.assertEquals(self.client.bitcount('a'), 5) self.assertEquals(self.client.bitcount('a', 2, 3), 2) self.assertEquals(self.client.bitcount('a', 2, -1), 3) self.assertEquals(self.client.bitcount('a', -2, -1), 2) self.assertEquals(self.client.bitcount('a', 1, 1), 1) def test_bitop_not_empty_string(self): version = self.client.info()['redis_version'] if StrictVersion(version) < StrictVersion('2.6.0'): try: raise unittest.SkipTest() except AttributeError: return self.client.set('a', '') self.client.bitop('not', 'r', 'a') self.assertEquals(self.client.get('r'), None) def test_bitop_not(self): version = self.client.info()['redis_version'] if StrictVersion(version) < StrictVersion('2.6.0'): try: raise unittest.SkipTest() except AttributeError: return test_str = b('\xAA\x00\xFF\x55') correct = ~0xAA00FF55 & 0xFFFFFFFF self.client.set('a', test_str) self.client.bitop('not', 'r', 'a') self.assertEquals( int(binascii.hexlify(self.client.get('r')), 16), correct) def test_bitop_not_in_place(self): version = self.client.info()['redis_version'] if StrictVersion(version) < StrictVersion('2.6.0'): try: raise unittest.SkipTest() except AttributeError: return test_str = b('\xAA\x00\xFF\x55') correct = ~0xAA00FF55 & 0xFFFFFFFF self.client.set('a', test_str) self.client.bitop('not', 'a', 'a') self.assertEquals( int(binascii.hexlify(self.client.get('a')), 16), correct) def test_bitop_single_string(self): version = self.client.info()['redis_version'] if StrictVersion(version) < StrictVersion('2.6.0'): try: raise unittest.SkipTest() except AttributeError: return test_str = b('\x01\x02\xFF') self.client.set('a', test_str) self.client.bitop('and', 'res1', 'a') self.client.bitop('or', 'res2', 'a') self.client.bitop('xor', 'res3', 'a') self.assertEquals(self.client.get('res1'), test_str) self.assertEquals(self.client.get('res2'), test_str) self.assertEquals(self.client.get('res3'), test_str) def test_bitop_string_operands(self): version = self.client.info()['redis_version'] if StrictVersion(version) < StrictVersion('2.6.0'): try: raise unittest.SkipTest() except AttributeError: return self.client.set('a', b('\x01\x02\xFF\xFF')) self.client.set('b', b('\x01\x02\xFF')) self.client.bitop('and', 'res1', 'a', 'b') self.client.bitop('or', 'res2', 'a', 'b') self.client.bitop('xor', 'res3', 'a', 'b') self.assertEquals( int(binascii.hexlify(self.client.get('res1')), 16), 0x0102FF00) self.assertEquals( int(binascii.hexlify(self.client.get('res2')), 16), 0x0102FFFF) self.assertEquals( int(binascii.hexlify(self.client.get('res3')), 16), 0x000000FF) def test_getset(self): self.assertEquals(self.client.getset('a', 'foo'), None) self.assertEquals(self.client.getset('a', 'bar'), b('foo')) def test_incr(self): self.assertEquals(self.client.incr('a'), 1) self.assertEquals(self.client['a'], b('1')) self.assertEquals(self.client.incr('a'), 2) self.assertEquals(self.client['a'], b('2')) self.assertEquals(self.client.incr('a', amount=5), 7) self.assertEquals(self.client['a'], b('7')) def test_incrbyfloat(self): version = self.client.info()['redis_version'] if StrictVersion(version) < StrictVersion('2.6.0'): 
try: raise unittest.SkipTest() except AttributeError: return self.assertEquals(self.client.incrbyfloat('a'), 1.0) self.assertEquals(self.client['a'], b('1')) self.assertEquals(self.client.incrbyfloat('a', 1.1), 2.1) self.assertEquals(float(self.client['a']), float(2.1)) def test_keys(self): self.assertEquals(self.client.keys(), []) keys = set([b('test_a'), b('test_b'), b('testc')]) for key in keys: self.client[key] = 1 self.assertEquals( set(self.client.keys(pattern='test_*')), keys - set([b('testc')])) self.assertEquals(set(self.client.keys(pattern='test*')), keys) def test_mget(self): self.assertEquals(self.client.mget(['a', 'b']), [None, None]) self.client['a'] = '1' self.client['b'] = '2' self.client['c'] = '3' self.assertEquals( self.client.mget(['a', 'other', 'b', 'c']), [b('1'), None, b('2'), b('3')]) def test_mset(self): d = {'a': '1', 'b': '2', 'c': '3'} self.assert_(self.client.mset(d)) for k, v in iteritems(d): self.assertEquals(self.client[k], b(v)) def test_msetnx(self): d = {'a': '1', 'b': '2', 'c': '3'} self.assert_(self.client.msetnx(d)) d2 = {'a': 'x', 'd': '4'} self.assert_(not self.client.msetnx(d2)) for k, v in iteritems(d): self.assertEquals(self.client[k], b(v)) self.assertEquals(self.client.get('d'), None) def test_randomkey(self): self.assertEquals(self.client.randomkey(), None) self.client['a'] = '1' self.client['b'] = '2' self.client['c'] = '3' self.assert_(self.client.randomkey() in (b('a'), b('b'), b('c'))) def test_rename(self): self.client['a'] = '1' self.assert_(self.client.rename('a', 'b')) self.assertEquals(self.client.get('a'), None) self.assertEquals(self.client['b'], b('1')) def test_renamenx(self): self.client['a'] = '1' self.client['b'] = '2' self.assert_(not self.client.renamenx('a', 'b')) self.assertEquals(self.client['a'], b('1')) self.assertEquals(self.client['b'], b('2')) def test_setex(self): self.assertEquals(self.client.setex('a', '1', 60), True) self.assertEquals(self.client['a'], b('1')) self.assertEquals(self.client.ttl('a'), 60) def test_setnx(self): self.assert_(self.client.setnx('a', '1')) self.assertEquals(self.client['a'], b('1')) self.assert_(not self.client.setnx('a', '2')) self.assertEquals(self.client['a'], b('1')) def test_setrange(self): self.assertEquals(self.client.setrange('a', 5, 'abcdef'), 11) self.assertEquals(self.client['a'], b('\0\0\0\0\0abcdef')) self.client['a'] = 'Hello World' self.assertEquals(self.client.setrange('a', 6, 'Redis'), 11) self.assertEquals(self.client['a'], b('Hello Redis')) def test_strlen(self): self.client['a'] = 'abcdef' self.assertEquals(self.client.strlen('a'), 6) def test_substr(self): # invalid key type self.client.rpush('a', 'a1') self.assertRaises(redis.ResponseError, self.client.substr, 'a', 0) del self.client['a'] # real logic self.client['a'] = 'abcdefghi' self.assertEquals(self.client.substr('a', 0), b('abcdefghi')) self.assertEquals(self.client.substr('a', 2), b('cdefghi')) self.assertEquals(self.client.substr('a', 3, 5), b('def')) self.assertEquals(self.client.substr('a', 3, -2), b('defgh')) self.client['a'] = 123456 # does substr work with ints? 
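# (it does: redis-py encodes the int as the string '123456' when sending the command, so SUBSTR operates on a six-character string value)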
self.assertEquals(self.client.substr('a', 2, -2), b('345')) def test_type(self): self.assertEquals(self.client.type('a'), b('none')) self.client['a'] = '1' self.assertEquals(self.client.type('a'), b('string')) del self.client['a'] self.client.lpush('a', '1') self.assertEquals(self.client.type('a'), b('list')) del self.client['a'] self.client.sadd('a', '1') self.assertEquals(self.client.type('a'), b('set')) del self.client['a'] self.client.zadd('a', **{'1': 1}) self.assertEquals(self.client.type('a'), b('zset')) # LISTS def make_list(self, name, l): for i in l: self.client.rpush(name, i) def test_blpop(self): self.make_list('a', 'ab') self.make_list('b', 'cd') self.assertEquals( self.client.blpop(['b', 'a'], timeout=1), (b('b'), b('c'))) self.assertEquals( self.client.blpop(['b', 'a'], timeout=1), (b('b'), b('d'))) self.assertEquals( self.client.blpop(['b', 'a'], timeout=1), (b('a'), b('a'))) self.assertEquals( self.client.blpop(['b', 'a'], timeout=1), (b('a'), b('b'))) self.assertEquals(self.client.blpop(['b', 'a'], timeout=1), None) self.make_list('c', 'a') self.assertEquals(self.client.blpop('c', timeout=1), (b('c'), b('a'))) def test_brpop(self): self.make_list('a', 'ab') self.make_list('b', 'cd') self.assertEquals( self.client.brpop(['b', 'a'], timeout=1), (b('b'), b('d'))) self.assertEquals( self.client.brpop(['b', 'a'], timeout=1), (b('b'), b('c'))) self.assertEquals( self.client.brpop(['b', 'a'], timeout=1), (b('a'), b('b'))) self.assertEquals( self.client.brpop(['b', 'a'], timeout=1), (b('a'), b('a'))) self.assertEquals(self.client.brpop(['b', 'a'], timeout=1), None) self.make_list('c', 'a') self.assertEquals(self.client.brpop('c', timeout=1), (b('c'), b('a'))) def test_brpoplpush(self): self.make_list('a', '12') self.make_list('b', '34') self.assertEquals(self.client.brpoplpush('a', 'b'), b('2')) self.assertEquals(self.client.brpoplpush('a', 'b'), b('1')) self.assertEquals(self.client.brpoplpush('a', 'b', timeout=1), None) self.assertEquals(self.client.lrange('a', 0, -1), []) self.assertEquals( self.client.lrange('b', 0, -1), [b('1'), b('2'), b('3'), b('4')]) def test_lindex(self): # no key self.assertEquals(self.client.lindex('a', '0'), None) # key is not a list self.client['a'] = 'b' self.assertRaises(redis.ResponseError, self.client.lindex, 'a', '0') del self.client['a'] # real logic self.make_list('a', 'abc') self.assertEquals(self.client.lindex('a', '0'), b('a')) self.assertEquals(self.client.lindex('a', '1'), b('b')) self.assertEquals(self.client.lindex('a', '2'), b('c')) def test_linsert(self): # no key self.assertEquals(self.client.linsert('a', 'after', 'x', 'y'), 0) # key is not a list self.client['a'] = 'b' self.assertRaises( redis.ResponseError, self.client.linsert, 'a', 'after', 'x', 'y' ) del self.client['a'] # real logic self.make_list('a', 'abc') self.assertEquals(self.client.linsert('a', 'after', 'b', 'b1'), 4) self.assertEquals( self.client.lrange('a', 0, -1), [b('a'), b('b'), b('b1'), b('c')]) self.assertEquals(self.client.linsert('a', 'before', 'b', 'a1'), 5) self.assertEquals( self.client.lrange('a', 0, -1), [b('a'), b('a1'), b('b'), b('b1'), b('c')]) def test_llen(self): # no key self.assertEquals(self.client.llen('a'), 0) # key is not a list self.client['a'] = 'b' self.assertRaises(redis.ResponseError, self.client.llen, 'a') del self.client['a'] # real logic self.make_list('a', 'abc') self.assertEquals(self.client.llen('a'), 3) def test_lpop(self): # no key self.assertEquals(self.client.lpop('a'), None) # key is not a list self.client['a'] = 'b' 
self.assertRaises(redis.ResponseError, self.client.lpop, 'a') del self.client['a'] # real logic self.make_list('a', 'abc') self.assertEquals(self.client.lpop('a'), b('a')) self.assertEquals(self.client.lpop('a'), b('b')) self.assertEquals(self.client.lpop('a'), b('c')) self.assertEquals(self.client.lpop('a'), None) def test_lpush(self): # key is not a list self.client['a'] = 'b' self.assertRaises(redis.ResponseError, self.client.lpush, 'a', 'a') del self.client['a'] # real logic version = self.client.info()['redis_version'] if StrictVersion(version) >= StrictVersion('2.4.0'): self.assertEqual(1, self.client.lpush('a', 'b')) self.assertEqual(2, self.client.lpush('a', 'a')) self.assertEqual(4, self.client.lpush('a', 'b', 'a')) elif StrictVersion(version) >= StrictVersion('1.3.4'): self.assertEqual(1, self.client.lpush('a', 'b')) self.assertEqual(2, self.client.lpush('a', 'a')) else: self.assert_(self.client.lpush('a', 'b')) self.assert_(self.client.lpush('a', 'a')) self.assertEquals(self.client.lindex('a', 0), b('a')) self.assertEquals(self.client.lindex('a', 1), b('b')) def test_lpushx(self): # key is not a list self.client['a'] = 'b' self.assertRaises(redis.ResponseError, self.client.lpushx, 'a', 'a') del self.client['a'] # real logic self.assertEquals(self.client.lpushx('a', 'b'), 0) self.assertEquals(self.client.lrange('a', 0, -1), []) self.make_list('a', 'abc') self.assertEquals(self.client.lpushx('a', 'd'), 4) self.assertEquals( self.client.lrange('a', 0, -1), [b('d'), b('a'), b('b'), b('c')]) def test_lrange(self): # no key self.assertEquals(self.client.lrange('a', 0, 1), []) # key is not a list self.client['a'] = 'b' self.assertRaises(redis.ResponseError, self.client.lrange, 'a', 0, 1) del self.client['a'] # real logic self.make_list('a', 'abcde') self.assertEquals( self.client.lrange('a', 0, 2), [b('a'), b('b'), b('c')]) self.assertEquals( self.client.lrange('a', 2, 10), [b('c'), b('d'), b('e')]) def test_lrem(self): # no key self.assertEquals(self.client.lrem('a', 'foo'), 0) # key is not a list self.client['a'] = 'b' self.assertRaises(redis.ResponseError, self.client.lrem, 'a', 'b') del self.client['a'] # real logic self.make_list('a', 'aaaa') self.assertEquals(self.client.lrem('a', 'a', 1), 1) self.assertEquals( self.client.lrange('a', 0, 3), [b('a'), b('a'), b('a')]) self.assertEquals(self.client.lrem('a', 'a'), 3) # remove all the elements in the list means the key is deleted self.assertEquals(self.client.lrange('a', 0, 1), []) def test_lset(self): # no key self.assertRaises(redis.ResponseError, self.client.lset, 'a', 1, 'b') # key is not a list self.client['a'] = 'b' self.assertRaises(redis.ResponseError, self.client.lset, 'a', 1, 'b') del self.client['a'] # real logic self.make_list('a', 'abc') self.assertEquals( self.client.lrange('a', 0, 2), [b('a'), b('b'), b('c')]) self.assert_(self.client.lset('a', 1, 'd')) self.assertEquals( self.client.lrange('a', 0, 2), [b('a'), b('d'), b('c')]) def test_ltrim(self): # no key -- TODO: Not sure why this is actually true. 
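# (Redis replies OK to LTRIM even when the key does not exist -- trimming an empty/missing list is treated as a successful no-op)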
self.assert_(self.client.ltrim('a', 0, 2)) # key is not a list self.client['a'] = 'b' self.assertRaises(redis.ResponseError, self.client.ltrim, 'a', 0, 2) del self.client['a'] # real logic self.make_list('a', 'abc') self.assert_(self.client.ltrim('a', 0, 1)) self.assertEquals(self.client.lrange('a', 0, 5), [b('a'), b('b')]) def test_rpop(self): # no key self.assertEquals(self.client.rpop('a'), None) # key is not a list self.client['a'] = 'b' self.assertRaises(redis.ResponseError, self.client.rpop, 'a') del self.client['a'] # real logic self.make_list('a', 'abc') self.assertEquals(self.client.rpop('a'), b('c')) self.assertEquals(self.client.rpop('a'), b('b')) self.assertEquals(self.client.rpop('a'), b('a')) self.assertEquals(self.client.rpop('a'), None) def test_rpoplpush(self): # no src key self.make_list('b', ['b1']) self.assertEquals(self.client.rpoplpush('a', 'b'), None) # no dest key self.assertEquals(self.client.rpoplpush('b', 'a'), b('b1')) self.assertEquals(self.client.lindex('a', 0), b('b1')) del self.client['a'] del self.client['b'] # src key is not a list self.client['a'] = 'a1' self.assertRaises(redis.ResponseError, self.client.rpoplpush, 'a', 'b') del self.client['a'] # dest key is not a list self.make_list('a', ['a1']) self.client['b'] = 'b' self.assertRaises(redis.ResponseError, self.client.rpoplpush, 'a', 'b') del self.client['a'] del self.client['b'] # real logic self.make_list('a', ['a1', 'a2', 'a3']) self.make_list('b', ['b1', 'b2', 'b3']) self.assertEquals(self.client.rpoplpush('a', 'b'), b('a3')) self.assertEquals(self.client.lrange('a', 0, 2), [b('a1'), b('a2')]) self.assertEquals( self.client.lrange('b', 0, 4), [b('a3'), b('b1'), b('b2'), b('b3')]) def test_rpush(self): # key is not a list self.client['a'] = 'b' self.assertRaises(redis.ResponseError, self.client.rpush, 'a', 'a') del self.client['a'] # real logic version = self.client.info()['redis_version'] if StrictVersion(version) >= StrictVersion('2.4.0'): self.assertEqual(1, self.client.rpush('a', 'a')) self.assertEqual(2, self.client.rpush('a', 'b')) self.assertEqual(4, self.client.rpush('a', 'a', 'b')) elif StrictVersion(version) >= StrictVersion('1.3.4'): self.assertEqual(1, self.client.rpush('a', 'a')) self.assertEqual(2, self.client.rpush('a', 'b')) else: self.assert_(self.client.rpush('a', 'a')) self.assert_(self.client.rpush('a', 'b')) self.assertEquals(self.client.lindex('a', 0), b('a')) self.assertEquals(self.client.lindex('a', 1), b('b')) def test_rpushx(self): # key is not a list self.client['a'] = 'b' self.assertRaises(redis.ResponseError, self.client.rpushx, 'a', 'a') del self.client['a'] # real logic self.assertEquals(self.client.rpushx('a', 'b'), 0) self.assertEquals(self.client.lrange('a', 0, -1), []) self.make_list('a', 'abc') self.assertEquals(self.client.rpushx('a', 'd'), 4) self.assertEquals( self.client.lrange('a', 0, -1), [b('a'), b('b'), b('c'), b('d')]) # Set commands def make_set(self, name, l): for i in l: self.client.sadd(name, i) def test_sadd(self): # key is not a set self.client['a'] = 'a' self.assertRaises(redis.ResponseError, self.client.sadd, 'a', 'a1') del self.client['a'] # real logic members = set([b('a1'), b('a2'), b('a3')]) self.make_set('a', members) self.assertEquals(self.client.smembers('a'), members) def test_scard(self): # key is not a set self.client['a'] = 'a' self.assertRaises(redis.ResponseError, self.client.scard, 'a') del self.client['a'] # real logic self.make_set('a', 'abc') self.assertEquals(self.client.scard('a'), 3) def test_sdiff(self): # some key is not a set 
self.make_set('a', ['a1', 'a2', 'a3']) self.client['b'] = 'b' self.assertRaises(redis.ResponseError, self.client.sdiff, ['a', 'b']) del self.client['b'] # real logic self.make_set('b', ['b1', 'a2', 'b3']) self.assertEquals( self.client.sdiff(['a', 'b']), set([b('a1'), b('a3')])) def test_sdiffstore(self): # some key is not a set self.make_set('a', ['a1', 'a2', 'a3']) self.client['b'] = 'b' self.assertRaises( redis.ResponseError, self.client.sdiffstore, 'c', ['a', 'b']) del self.client['b'] self.make_set('b', ['b1', 'a2', 'b3']) # dest key always gets overwritten, even if it's not a set, so don't # test for that # real logic self.assertEquals(self.client.sdiffstore('c', ['a', 'b']), 2) self.assertEquals(self.client.smembers('c'), set([b('a1'), b('a3')])) def test_sinter(self): # some key is not a set self.make_set('a', ['a1', 'a2', 'a3']) self.client['b'] = 'b' self.assertRaises(redis.ResponseError, self.client.sinter, ['a', 'b']) del self.client['b'] # real logic self.make_set('b', ['a1', 'b2', 'a3']) self.assertEquals( self.client.sinter(['a', 'b']), set([b('a1'), b('a3')])) def test_sinterstore(self): # some key is not a set self.make_set('a', ['a1', 'a2', 'a3']) self.client['b'] = 'b' self.assertRaises( redis.ResponseError, self.client.sinterstore, 'c', ['a', 'b']) del self.client['b'] self.make_set('b', ['a1', 'b2', 'a3']) # dest key always gets overwritten, even if it's not a set, so don't # test for that # real logic self.assertEquals(self.client.sinterstore('c', ['a', 'b']), 2) self.assertEquals(self.client.smembers('c'), set([b('a1'), b('a3')])) def test_sismember(self): # key is not a set self.client['a'] = 'a' self.assertRaises(redis.ResponseError, self.client.sismember, 'a', 'a') del self.client['a'] # real logic self.make_set('a', 'abc') self.assertEquals(self.client.sismember('a', 'a'), True) self.assertEquals(self.client.sismember('a', 'b'), True) self.assertEquals(self.client.sismember('a', 'c'), True) self.assertEquals(self.client.sismember('a', 'd'), False) def test_smembers(self): # key is not a set self.client['a'] = 'a' self.assertRaises(redis.ResponseError, self.client.smembers, 'a') del self.client['a'] # set doesn't exist self.assertEquals(self.client.smembers('a'), set()) # real logic self.make_set('a', 'abc') self.assertEquals( self.client.smembers('a'), set([b('a'), b('b'), b('c')])) def test_smove(self): # src key is not set self.make_set('b', ['b1', 'b2']) self.assertEquals(self.client.smove('a', 'b', 'a1'), 0) # src key is not a set self.client['a'] = 'a' self.assertRaises( redis.ResponseError, self.client.smove, 'a', 'b', 'a1') del self.client['a'] self.make_set('a', ['a1', 'a2']) # dest key is not a set del self.client['b'] self.client['b'] = 'b' self.assertRaises( redis.ResponseError, self.client.smove, 'a', 'b', 'a1') del self.client['b'] self.make_set('b', ['b1', 'b2']) # real logic self.assert_(self.client.smove('a', 'b', 'a1')) self.assertEquals(self.client.smembers('a'), set([b('a2')])) self.assertEquals( self.client.smembers('b'), set([b('b1'), b('b2'), b('a1')])) def test_spop(self): # key is not set self.assertEquals(self.client.spop('a'), None) # key is not a set self.client['a'] = 'a' self.assertRaises(redis.ResponseError, self.client.spop, 'a') del self.client['a'] # real logic s = [b('a'), b('b'), b('c')] self.make_set('a', s) value = self.client.spop('a') self.assert_(value in s) self.assertEquals(self.client.smembers('a'), set(s) - set([value])) def test_srandmember(self): # key is not set self.assertEquals(self.client.srandmember('a'), None) # 
key is not a set self.client['a'] = 'a' self.assertRaises(redis.ResponseError, self.client.srandmember, 'a') del self.client['a'] # real logic self.make_set('a', 'abc') self.assert_(self.client.srandmember('a') in b('abc')) version = self.client.info()['redis_version'] if StrictVersion(version) >= StrictVersion('2.6.0'): randoms = self.client.srandmember('a', number=2) self.assertEquals(len(randoms), 2) for r in randoms: self.assert_(r in b('abc')) def test_srem(self): # key is not set self.assertEquals(self.client.srem('a', 'a'), False) # key is not a set self.client['a'] = 'a' self.assertRaises(redis.ResponseError, self.client.srem, 'a', 'a') del self.client['a'] # real logic self.make_set('a', 'abc') self.assertEquals(self.client.srem('a', 'd'), False) self.assertEquals(self.client.srem('a', 'b'), True) self.assertEquals(self.client.smembers('a'), set([b('a'), b('c')])) def test_sunion(self): # some key is not a set self.make_set('a', ['a1', 'a2', 'a3']) self.client['b'] = 'b' self.assertRaises(redis.ResponseError, self.client.sunion, ['a', 'b']) del self.client['b'] # real logic self.make_set('b', ['a1', 'b2', 'a3']) self.assertEquals( self.client.sunion(['a', 'b']), set([b('a1'), b('a2'), b('a3'), b('b2')])) def test_sunionstore(self): # some key is not a set self.make_set('a', ['a1', 'a2', 'a3']) self.client['b'] = 'b' self.assertRaises( redis.ResponseError, self.client.sunionstore, 'c', ['a', 'b']) del self.client['b'] self.make_set('b', ['a1', 'b2', 'a3']) # dest key always gets overwritten, even if it's not a set, so don't # test for that # real logic self.assertEquals(self.client.sunionstore('c', ['a', 'b']), 4) self.assertEquals( self.client.smembers('c'), set([b('a1'), b('a2'), b('a3'), b('b2')])) # SORTED SETS def make_zset(self, name, d): for k, v in d.items(): self.client.zadd(name, **{k: v}) def test_zadd(self): self.make_zset('a', {'a1': 1, 'a2': 2, 'a3': 3}) self.assertEquals( self.client.zrange('a', 0, 3), [b('a1'), b('a2'), b('a3')]) def test_zcard(self): # key is not a zset self.client['a'] = 'a' self.assertRaises(redis.ResponseError, self.client.zcard, 'a') del self.client['a'] # real logic self.make_zset('a', {'a1': 1, 'a2': 2, 'a3': 3}) self.assertEquals(self.client.zcard('a'), 3) def test_zcount(self): # key is not a zset self.client['a'] = 'a' self.assertRaises(redis.ResponseError, self.client.zcount, 'a', 0, 0) del self.client['a'] # real logic self.make_zset('a', {'a1': 1, 'a2': 2, 'a3': 3}) self.assertEquals(self.client.zcount('a', '-inf', '+inf'), 3) self.assertEquals(self.client.zcount('a', 1, 2), 2) self.assertEquals(self.client.zcount('a', 10, 20), 0) def test_zincrby(self): # key is not a zset self.client['a'] = 'a' self.assertRaises(redis.ResponseError, self.client.zincrby, 'a', 'a1') del self.client['a'] # real logic self.make_zset('a', {'a1': 1, 'a2': 2, 'a3': 3}) self.assertEquals(self.client.zincrby('a', 'a2'), 3.0) self.assertEquals(self.client.zincrby('a', 'a3', amount=5), 8.0) self.assertEquals(self.client.zscore('a', 'a2'), 3.0) self.assertEquals(self.client.zscore('a', 'a3'), 8.0) def test_zinterstore(self): self.make_zset('a', {'a1': 1, 'a2': 1, 'a3': 1}) self.make_zset('b', {'a1': 2, 'a3': 2, 'a4': 2}) self.make_zset('c', {'a1': 6, 'a3': 5, 'a4': 4}) # sum, no weight self.assert_(self.client.zinterstore('z', ['a', 'b', 'c'])) self.assertEquals( self.client.zrange('z', 0, -1, withscores=True), [(b('a3'), 8), (b('a1'), 9)] ) # max, no weight self.assert_( self.client.zinterstore('z', ['a', 'b', 'c'], aggregate='MAX') ) self.assertEquals( 

    def test_zinterstore(self):
        self.make_zset('a', {'a1': 1, 'a2': 1, 'a3': 1})
        self.make_zset('b', {'a1': 2, 'a3': 2, 'a4': 2})
        self.make_zset('c', {'a1': 6, 'a3': 5, 'a4': 4})

        # sum, no weight
        self.assert_(self.client.zinterstore('z', ['a', 'b', 'c']))
        self.assertEquals(
            self.client.zrange('z', 0, -1, withscores=True),
            [(b('a3'), 8), (b('a1'), 9)]
        )

        # max, no weight
        self.assert_(
            self.client.zinterstore('z', ['a', 'b', 'c'], aggregate='MAX')
        )
        self.assertEquals(
            self.client.zrange('z', 0, -1, withscores=True),
            [(b('a3'), 5), (b('a1'), 6)]
        )

        # with weight
        self.assert_(self.client.zinterstore('z', {'a': 1, 'b': 2, 'c': 3}))
        self.assertEquals(
            self.client.zrange('z', 0, -1, withscores=True),
            [(b('a3'), 20), (b('a1'), 23)]
        )

    def test_zrange(self):
        # key is not a zset
        self.client['a'] = 'a'
        self.assertRaises(redis.ResponseError, self.client.zrange, 'a', 0, 1)
        del self.client['a']
        # real logic
        self.make_zset('a', {'a1': 1, 'a2': 2, 'a3': 3})
        self.assertEquals(self.client.zrange('a', 0, 1), [b('a1'), b('a2')])
        self.assertEquals(self.client.zrange('a', 1, 2), [b('a2'), b('a3')])
        self.assertEquals(
            self.client.zrange('a', 0, 1, withscores=True),
            [(b('a1'), 1.0), (b('a2'), 2.0)])
        self.assertEquals(
            self.client.zrange('a', 1, 2, withscores=True),
            [(b('a2'), 2.0), (b('a3'), 3.0)])
        # test a custom score casting function returns the correct value
        self.assertEquals(
            self.client.zrange('a', 0, 1, withscores=True, score_cast_func=int),
            [(b('a1'), 1), (b('a2'), 2)])
        # a non-existent key should return an empty list
        self.assertEquals(self.client.zrange('b', 0, 1, withscores=True), [])

    def test_zrangebyscore(self):
        # key is not a zset
        self.client['a'] = 'a'
        self.assertRaises(
            redis.ResponseError, self.client.zrangebyscore, 'a', 0, 1)
        del self.client['a']
        # real logic
        self.make_zset('a', {'a1': 1, 'a2': 2, 'a3': 3, 'a4': 4, 'a5': 5})
        self.assertEquals(
            self.client.zrangebyscore('a', 2, 4),
            [b('a2'), b('a3'), b('a4')])
        self.assertEquals(
            self.client.zrangebyscore('a', 2, 4, start=1, num=2),
            [b('a3'), b('a4')])
        self.assertEquals(
            self.client.zrangebyscore('a', 2, 4, withscores=True),
            [(b('a2'), 2.0), (b('a3'), 3.0), (b('a4'), 4.0)])
        # a non-existent key should return an empty list
        self.assertEquals(
            self.client.zrangebyscore('b', 0, 1, withscores=True), [])

    def test_zrank(self):
        # key is not a zset
        self.client['a'] = 'a'
        self.assertRaises(redis.ResponseError, self.client.zrank, 'a', 'a4')
        del self.client['a']
        # real logic
        self.make_zset('a', {'a1': 1, 'a2': 2, 'a3': 3, 'a4': 4, 'a5': 5})
        self.assertEquals(self.client.zrank('a', 'a1'), 0)
        self.assertEquals(self.client.zrank('a', 'a2'), 1)
        self.assertEquals(self.client.zrank('a', 'a3'), 2)
        self.assertEquals(self.client.zrank('a', 'a4'), 3)
        self.assertEquals(self.client.zrank('a', 'a5'), 4)
        # non-existent value in zset
        self.assertEquals(self.client.zrank('a', 'a6'), None)

    def test_zrem(self):
        # key is not a zset
        self.client['a'] = 'a'
        self.assertRaises(redis.ResponseError, self.client.zrem, 'a', 'a1')
        del self.client['a']
        # real logic
        self.make_zset('a', {'a1': 1, 'a2': 2, 'a3': 3})
        self.assertEquals(self.client.zrem('a', 'a2'), True)
        self.assertEquals(self.client.zrange('a', 0, 5), [b('a1'), b('a3')])
        self.assertEquals(self.client.zrem('a', 'b'), False)
        self.assertEquals(self.client.zrange('a', 0, 5), [b('a1'), b('a3')])

    def test_zremrangebyrank(self):
        # key is not a zset
        self.client['a'] = 'a'
        self.assertRaises(
            redis.ResponseError, self.client.zremrangebyrank, 'a', 0, 1)
        del self.client['a']
        # real logic
        self.make_zset('a', {'a1': 1, 'a2': 2, 'a3': 3, 'a4': 4, 'a5': 5})
        self.assertEquals(self.client.zremrangebyrank('a', 1, 3), 3)
        self.assertEquals(self.client.zrange('a', 0, 5), [b('a1'), b('a5')])

    def test_zremrangebyscore(self):
        # key is not a zset
        self.client['a'] = 'a'
        self.assertRaises(
            redis.ResponseError, self.client.zremrangebyscore, 'a', 0, 1)
        del self.client['a']
        # real logic
        self.make_zset('a', {'a1': 1, 'a2': 2, 'a3': 3, 'a4': 4, 'a5': 5})
        self.assertEquals(self.client.zremrangebyscore('a', 2, 4), 3)
        self.assertEquals(self.client.zrange('a', 0, 5), [b('a1'), b('a5')])
        self.assertEquals(self.client.zremrangebyscore('a', 2, 4), 0)
        self.assertEquals(self.client.zrange('a', 0, 5), [b('a1'), b('a5')])

    def test_zrevrange(self):
        # key is not a zset
        self.client['a'] = 'a'
        self.assertRaises(
            redis.ResponseError, self.client.zrevrange, 'a', 0, 1)
        del self.client['a']
        # real logic
        self.make_zset('a', {'a1': 1, 'a2': 2, 'a3': 3})
        self.assertEquals(self.client.zrevrange('a', 0, 1), [b('a3'), b('a2')])
        self.assertEquals(self.client.zrevrange('a', 1, 2), [b('a2'), b('a1')])
        self.assertEquals(
            self.client.zrevrange('a', 0, 1, withscores=True),
            [(b('a3'), 3.0), (b('a2'), 2.0)])
        self.assertEquals(
            self.client.zrevrange('a', 1, 2, withscores=True),
            [(b('a2'), 2.0), (b('a1'), 1.0)])
        # a non-existent key should return an empty list
        self.assertEquals(
            self.client.zrevrange('b', 0, 1, withscores=True), [])

    def test_zrevrangebyscore(self):
        # key is not a zset
        self.client['a'] = 'a'
        self.assertRaises(
            redis.ResponseError, self.client.zrevrangebyscore, 'a', 0, 1)
        del self.client['a']
        # real logic
        self.make_zset('a', {'a1': 1, 'a2': 2, 'a3': 3, 'a4': 4, 'a5': 5})
        self.assertEquals(
            self.client.zrevrangebyscore('a', 4, 2),
            [b('a4'), b('a3'), b('a2')])
        self.assertEquals(
            self.client.zrevrangebyscore('a', 4, 2, start=1, num=2),
            [b('a3'), b('a2')])
        self.assertEquals(
            self.client.zrevrangebyscore('a', 4, 2, withscores=True),
            [(b('a4'), 4.0), (b('a3'), 3.0), (b('a2'), 2.0)])
        # a non-existent key should return an empty list
        self.assertEquals(
            self.client.zrevrangebyscore('b', 1, 0, withscores=True), [])

    def test_zrevrank(self):
        # key is not a zset
        self.client['a'] = 'a'
        self.assertRaises(redis.ResponseError, self.client.zrevrank, 'a', 'a4')
        del self.client['a']
        # real logic
        self.make_zset('a', {'a1': 5, 'a2': 4, 'a3': 3, 'a4': 2, 'a5': 1})
        self.assertEquals(self.client.zrevrank('a', 'a1'), 0)
        self.assertEquals(self.client.zrevrank('a', 'a2'), 1)
        self.assertEquals(self.client.zrevrank('a', 'a3'), 2)
        self.assertEquals(self.client.zrevrank('a', 'a4'), 3)
        self.assertEquals(self.client.zrevrank('a', 'a5'), 4)
        self.assertEquals(self.client.zrevrank('a', 'b'), None)

    def test_zscore(self):
        # key is not a zset
        self.client['a'] = 'a'
        self.assertRaises(redis.ResponseError, self.client.zscore, 'a', 'a1')
        del self.client['a']
        # real logic
        self.make_zset('a', {'a1': 0, 'a2': 1, 'a3': 2})
        self.assertEquals(self.client.zscore('a', 'a1'), 0.0)
        self.assertEquals(self.client.zscore('a', 'a2'), 1.0)
        # test a non-existent member
        self.assertEquals(self.client.zscore('a', 'a4'), None)

    def test_zunionstore(self):
        self.make_zset('a', {'a1': 1, 'a2': 1, 'a3': 1})
        self.make_zset('b', {'a1': 2, 'a3': 2, 'a4': 2})
        self.make_zset('c', {'a1': 6, 'a4': 5, 'a5': 4})

        # sum, no weight
        self.assert_(self.client.zunionstore('z', ['a', 'b', 'c']))
        self.assertEquals(
            self.client.zrange('z', 0, -1, withscores=True),
            [
                (b('a2'), 1),
                (b('a3'), 3),
                (b('a5'), 4),
                (b('a4'), 7),
                (b('a1'), 9)
            ]
        )

        # max, no weight
        self.assert_(
            self.client.zunionstore('z', ['a', 'b', 'c'], aggregate='MAX')
        )
        self.assertEquals(
            self.client.zrange('z', 0, -1, withscores=True),
            [
                (b('a2'), 1),
                (b('a3'), 2),
                (b('a5'), 4),
                (b('a4'), 5),
                (b('a1'), 6)
            ]
        )

        # with weight
        self.assert_(self.client.zunionstore('z', {'a': 1, 'b': 2, 'c': 3}))
        self.assertEquals(
            self.client.zrange('z', 0, -1, withscores=True),
            [
                (b('a2'), 1),
                (b('a3'), 5),
                (b('a5'), 12),
                (b('a4'), 19),
                (b('a1'), 23)
            ]
        )

    # HASHES
    def make_hash(self, key, d):
        for k, v in iteritems(d):
            self.client.hset(key, k, v)

    def test_hget_and_hset(self):
        # key is not a hash
        self.client['a'] = 'a'
        self.assertRaises(redis.ResponseError, self.client.hget, 'a', 'a1')
        del self.client['a']
        # no key
        self.assertEquals(self.client.hget('a', 'a1'), None)
        # real logic
        self.make_hash('a', {'a1': 1, 'a2': 2, 'a3': 3})
        self.assertEquals(self.client.hget('a', 'a1'), b('1'))
        self.assertEquals(self.client.hget('a', 'a2'), b('2'))
        self.assertEquals(self.client.hget('a', 'a3'), b('3'))
        # field was updated, redis returns 0
        self.assertEquals(self.client.hset('a', 'a2', 5), 0)
        self.assertEquals(self.client.hget('a', 'a2'), b('5'))
        # field is new, redis returns 1
        self.assertEquals(self.client.hset('a', 'a4', 4), 1)
        self.assertEquals(self.client.hget('a', 'a4'), b('4'))
        # key inside of hash that doesn't exist returns null value
        self.assertEquals(self.client.hget('a', 'b'), None)

    def test_hsetnx(self):
        # Initially set the hash field
        self.client.hsetnx('a', 'a1', 1)
        self.assertEqual(self.client.hget('a', 'a1'), b('1'))
        # Try and set the existing hash field to a different value
        self.client.hsetnx('a', 'a1', 2)
        self.assertEqual(self.client.hget('a', 'a1'), b('1'))

    def test_hmset(self):
        d = {b('a'): b('1'), b('b'): b('2'), b('c'): b('3')}
        self.assert_(self.client.hmset('foo', d))
        self.assertEqual(self.client.hgetall('foo'), d)
        self.assertRaises(redis.DataError, self.client.hmset, 'foo', {})

    def test_hmset_empty_value(self):
        d = {b('a'): b('1'), b('b'): b('2'), b('c'): b('')}
        self.assert_(self.client.hmset('foo', d))
        self.assertEqual(self.client.hgetall('foo'), d)

    def test_hmget(self):
        d = {'a': 1, 'b': 2, 'c': 3}
        self.assert_(self.client.hmset('foo', d))
        self.assertEqual(
            self.client.hmget('foo', ['a', 'b', 'c']),
            [b('1'), b('2'), b('3')])
        self.assertEqual(
            self.client.hmget('foo', ['a', 'c']), [b('1'), b('3')])
        # using *args type args
        self.assertEquals(self.client.hmget('foo', 'a', 'c'), [b('1'), b('3')])

    def test_hmget_empty(self):
        self.assertEqual(self.client.hmget('foo', ['a', 'b']), [None, None])

    def test_hmget_no_keys(self):
        self.assertRaises(redis.ResponseError, self.client.hmget, 'foo', [])

    def test_hdel(self):
        # key is not a hash
        self.client['a'] = 'a'
        self.assertRaises(redis.ResponseError, self.client.hdel, 'a', 'a1')
        del self.client['a']
        # no key
        self.assertEquals(self.client.hdel('a', 'a1'), False)
        # real logic
        self.make_hash('a', {'a1': 1, 'a2': 2, 'a3': 3})
        self.assertEquals(self.client.hget('a', 'a2'), b('2'))
        self.assert_(self.client.hdel('a', 'a2'))
        self.assertEquals(self.client.hget('a', 'a2'), None)

    def test_hexists(self):
        # key is not a hash
        self.client['a'] = 'a'
        self.assertRaises(redis.ResponseError, self.client.hexists, 'a', 'a1')
        del self.client['a']
        # no key
        self.assertEquals(self.client.hexists('a', 'a1'), False)
        # real logic
        self.make_hash('a', {'a1': 1, 'a2': 2, 'a3': 3})
        self.assertEquals(self.client.hexists('a', 'a1'), True)
        self.assertEquals(self.client.hexists('a', 'a4'), False)
        self.client.hdel('a', 'a1')
        self.assertEquals(self.client.hexists('a', 'a1'), False)

    def test_hgetall(self):
        # key is not a hash
        self.client['a'] = 'a'
        self.assertRaises(redis.ResponseError, self.client.hgetall, 'a')
        del self.client['a']
        # no key
        self.assertEquals(self.client.hgetall('a'), {})
        # real logic
        h = {b('a1'): b('1'), b('a2'): b('2'), b('a3'): b('3')}
        self.make_hash('a', h)
        remote_hash = self.client.hgetall('a')
        self.assertEquals(h, remote_hash)

    def test_hincrby(self):
        # key is not a hash
        self.client['a'] = 'a'
        self.assertRaises(redis.ResponseError, self.client.hincrby, 'a', 'a1')
        del self.client['a']
        # no key should create the hash and incr the key's value to 1
        self.assertEquals(self.client.hincrby('a', 'a1'), 1)
        # real logic
        self.assertEquals(self.client.hincrby('a', 'a1'), 2)
        self.assertEquals(self.client.hincrby('a', 'a1', amount=2), 4)
        # negative values decrement
        self.assertEquals(self.client.hincrby('a', 'a1', amount=-3), 1)
        # hash that exists, but key that doesn't
        self.assertEquals(self.client.hincrby('a', 'a2', amount=3), 3)
        # finally a key that's not an int
        self.client.hset('a', 'a3', 'foo')
        self.assertRaises(redis.ResponseError, self.client.hincrby, 'a', 'a3')

    def test_hincrbyfloat(self):
        version = self.client.info()['redis_version']
        if StrictVersion(version) < StrictVersion('2.6.0'):
            try:
                raise unittest.SkipTest()
            except AttributeError:
                return
        # key is not a hash
        self.client['a'] = 'a'
        self.assertRaises(
            redis.ResponseError, self.client.hincrbyfloat, 'a', 'a1')
        del self.client['a']
        # no key should create the hash and incr the key's value to 1
        self.assertEquals(self.client.hincrbyfloat('a', 'a1'), 1.0)
        self.assertEquals(self.client.hincrbyfloat('a', 'a1'), 2.0)
        self.assertEquals(self.client.hincrbyfloat('a', 'a1', 1.2), 3.2)

    def test_hkeys(self):
        # key is not a hash
        self.client['a'] = 'a'
        self.assertRaises(redis.ResponseError, self.client.hkeys, 'a')
        del self.client['a']
        # no key
        self.assertEquals(self.client.hkeys('a'), [])
        # real logic
        h = {b('a1'): b('1'), b('a2'): b('2'), b('a3'): b('3')}
        self.make_hash('a', h)
        keys = dictkeys(h)
        keys.sort()
        remote_keys = self.client.hkeys('a')
        remote_keys.sort()
        self.assertEquals(keys, remote_keys)

    def test_hlen(self):
        # key is not a hash
        self.client['a'] = 'a'
        self.assertRaises(redis.ResponseError, self.client.hlen, 'a')
        del self.client['a']
        # no key
        self.assertEquals(self.client.hlen('a'), 0)
        # real logic
        self.make_hash('a', {'a1': 1, 'a2': 2, 'a3': 3})
        self.assertEquals(self.client.hlen('a'), 3)
        self.client.hdel('a', 'a3')
        self.assertEquals(self.client.hlen('a'), 2)

    def test_hvals(self):
        # key is not a hash
        self.client['a'] = 'a'
        self.assertRaises(redis.ResponseError, self.client.hvals, 'a')
        del self.client['a']
        # no key
        self.assertEquals(self.client.hvals('a'), [])
        # real logic
        h = {b('a1'): b('1'), b('a2'): b('2'), b('a3'): b('3')}
        self.make_hash('a', h)
        vals = dictvalues(h)
        vals.sort()
        remote_vals = self.client.hvals('a')
        remote_vals.sort()
        self.assertEquals(vals, remote_vals)

    # SORT
    def test_sort_bad_key(self):
        # key is not set
        self.assertEquals(self.client.sort('a'), [])
        # key is a string value
        self.client['a'] = 'a'
        self.assertRaises(redis.ResponseError, self.client.sort, 'a')
        del self.client['a']

    def test_sort_basic(self):
        self.make_list('a', '3214')
        self.assertEquals(
            self.client.sort('a'), [b('1'), b('2'), b('3'), b('4')])

    def test_sort_limited(self):
        self.make_list('a', '3214')
        self.assertEquals(
            self.client.sort('a', start=1, num=2), [b('2'), b('3')])

    def test_sort_by(self):
        self.client['score:1'] = 8
        self.client['score:2'] = 3
        self.client['score:3'] = 5
        self.make_list('a_values', '123')
        self.assertEquals(
            self.client.sort('a_values', by='score:*'),
            [b('2'), b('3'), b('1')])

    def test_sort_get(self):
        self.client['user:1'] = 'u1'
        self.client['user:2'] = 'u2'
        self.client['user:3'] = 'u3'
        self.make_list('a', '231')
        self.assertEquals(
            self.client.sort('a', get='user:*'), [b('u1'), b('u2'), b('u3')])

    def test_sort_get_multi(self):
        self.client['user:1'] = 'u1'
        self.client['user:2'] = 'u2'
        self.client['user:3'] = 'u3'
        self.make_list('a', '231')
        self.assertEquals(
            self.client.sort('a', get=('user:*', '#')),
            [b('u1'), b('1'), b('u2'), b('2'), b('u3'), b('3')])
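
    # Note on the SORT tests in this section: a BY pattern such as
    # 'user:*:username' orders the list by the value of the referenced
    # external key ('*' is replaced by each element), GET patterns map each
    # sorted element to another key's value, and '#' in a GET pattern stands
    # for the element itself. In test_sort_all_options below, sorting the
    # usernames descending and alphabetically gives zeus, titan, hermes,
    # hercules, hades, dionysus, athena, apollo; taking start=2, num=4 leaves
    # hermes, hercules, hades and dionysus, whose favorite drinks are vodka,
    # milk, gin and apple juice.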

    def test_sort_desc(self):
        self.make_list('a', '231')
        self.assertEquals(
            self.client.sort('a', desc=True), [b('3'), b('2'), b('1')])

    def test_sort_alpha(self):
        self.make_list('a', 'ecbda')
        self.assertEquals(
            self.client.sort('a', alpha=True),
            [b('a'), b('b'), b('c'), b('d'), b('e')])

    def test_sort_store(self):
        self.make_list('a', '231')
        self.assertEquals(self.client.sort('a', store='sorted_values'), 3)
        self.assertEquals(
            self.client.lrange('sorted_values', 0, 5),
            [b('1'), b('2'), b('3')])

    def test_sort_all_options(self):
        self.client['user:1:username'] = 'zeus'
        self.client['user:2:username'] = 'titan'
        self.client['user:3:username'] = 'hermes'
        self.client['user:4:username'] = 'hercules'
        self.client['user:5:username'] = 'apollo'
        self.client['user:6:username'] = 'athena'
        self.client['user:7:username'] = 'hades'
        self.client['user:8:username'] = 'dionysus'

        self.client['user:1:favorite_drink'] = 'yuengling'
        self.client['user:2:favorite_drink'] = 'rum'
        self.client['user:3:favorite_drink'] = 'vodka'
        self.client['user:4:favorite_drink'] = 'milk'
        self.client['user:5:favorite_drink'] = 'pinot noir'
        self.client['user:6:favorite_drink'] = 'water'
        self.client['user:7:favorite_drink'] = 'gin'
        self.client['user:8:favorite_drink'] = 'apple juice'

        self.make_list('gods', '12345678')
        num = self.client.sort(
            'gods', start=2, num=4, by='user:*:username',
            get='user:*:favorite_drink', desc=True, alpha=True,
            store='sorted')
        self.assertEquals(num, 4)
        self.assertEquals(
            self.client.lrange('sorted', 0, 10),
            [b('vodka'), b('milk'), b('gin'), b('apple juice')])

    def test_strict_zadd(self):
        client = self.get_client(redis.StrictRedis)
        client.zadd('a', 1.0, 'a1', 2.0, 'a2', a3=3.0)
        self.assertEquals(
            client.zrange('a', 0, 3, withscores=True),
            [(b('a1'), 1.0), (b('a2'), 2.0), (b('a3'), 3.0)])

    def test_strict_lrem(self):
        client = self.get_client(redis.StrictRedis)
        client.rpush('a', 'a1')
        client.rpush('a', 'a2')
        client.rpush('a', 'a3')
        client.rpush('a', 'a1')
        client.lrem('a', 0, 'a1')
        self.assertEquals(client.lrange('a', 0, -1), [b('a2'), b('a3')])

    def test_strict_setex(self):
        "SETEX swaps the order of the value and timeout"
        client = self.get_client(redis.StrictRedis)
        self.assertEquals(client.setex('a', 60, '1'), True)
        self.assertEquals(client['a'], b('1'))
        self.assertEquals(client.ttl('a'), 60)

    def test_strict_expire(self):
        "TTL is -1 by default in StrictRedis"
        client = self.get_client(redis.StrictRedis)
        self.assertEquals(client.expire('a', 10), False)
        self.client['a'] = 'foo'
        self.assertEquals(client.expire('a', 10), True)
        self.assertEquals(client.ttl('a'), 10)
        self.assertEquals(client.persist('a'), True)
        self.assertEquals(client.ttl('a'), -1)

    def test_strict_pexpire(self):
        client = self.get_client(redis.StrictRedis)
        version = client.info()['redis_version']
        if StrictVersion(version) < StrictVersion('2.6.0'):
            try:
                raise unittest.SkipTest()
            except AttributeError:
                return
        self.assertEquals(client.pexpire('a', 10000), False)
        self.client['a'] = 'foo'
        self.assertEquals(client.pexpire('a', 10000), True)
        self.assert_(client.pttl('a') <= 10000)
        self.assertEquals(client.persist('a'), True)
        self.assertEquals(client.pttl('a'), -1)
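
    # The binary-safety tests below round-trip keys and values containing
    # spaces, CRLF sequences, and raw control bytes, and check that KEYS and
    # LRANGE return them byte-for-byte unchanged.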

    ## BINARY SAFE
    # TODO add more tests
    def test_binary_get_set(self):
        self.assertTrue(self.client.set(' foo bar ', '123'))
        self.assertEqual(self.client.get(' foo bar '), b('123'))

        self.assertTrue(self.client.set(' foo\r\nbar\r\n ', '456'))
        self.assertEqual(self.client.get(' foo\r\nbar\r\n '), b('456'))

        self.assertTrue(self.client.set(' \r\n\t\x07\x13 ', '789'))
        self.assertEqual(self.client.get(' \r\n\t\x07\x13 '), b('789'))

        self.assertEqual(
            sorted(self.client.keys('*')),
            [b(' \r\n\t\x07\x13 '), b(' foo\r\nbar\r\n '), b(' foo bar ')])

        self.assertTrue(self.client.delete(' foo bar '))
        self.assertTrue(self.client.delete(' foo\r\nbar\r\n '))
        self.assertTrue(self.client.delete(' \r\n\t\x07\x13 '))

    def test_binary_lists(self):
        mapping = {
            b('foo bar'): [b('1'), b('2'), b('3')],
            b('foo\r\nbar\r\n'): [b('4'), b('5'), b('6')],
            b('foo\tbar\x07'): [b('7'), b('8'), b('9')],
        }
        # fill in lists
        for key, value in iteritems(mapping):
            for c in value:
                self.assertTrue(self.client.rpush(key, c))

        # check that KEYS returns all the keys as they are
        self.assertEqual(
            sorted(self.client.keys('*')), sorted(dictkeys(mapping)))

        # check that it is possible to get list content by key name
        for key in dictkeys(mapping):
            self.assertEqual(self.client.lrange(key, 0, -1), mapping[key])

    def test_22_info(self):
        """
        Older Redis versions contained 'allocation_stats' in INFO that
        was the cause of a number of bugs when parsing.
        """
        info = "allocation_stats:6=1,7=1,8=7141,9=180,10=92,11=116,12=5330," \
               "13=123,14=3091,15=11048,16=225842,17=1784,18=814,19=12020," \
               "20=2530,21=645,22=15113,23=8695,24=142860,25=318,26=3303," \
               "27=20561,28=54042,29=37390,30=1884,31=18071,32=31367,33=160," \
               "34=169,35=201,36=10155,37=1045,38=15078,39=22985,40=12523," \
               "41=15588,42=265,43=1287,44=142,45=382,46=945,47=426,48=171," \
               "49=56,50=516,51=43,52=41,53=46,54=54,55=75,56=647,57=332," \
               "58=32,59=39,60=48,61=35,62=62,63=32,64=221,65=26,66=30," \
               "67=36,68=41,69=44,70=26,71=144,72=169,73=24,74=37,75=25," \
               "76=42,77=21,78=126,79=374,80=27,81=40,82=43,83=47,84=46," \
               "85=114,86=34,87=37,88=7240,89=34,90=38,91=18,92=99,93=20," \
               "94=18,95=17,96=15,97=22,98=18,99=69,100=17,101=22,102=15," \
               "103=29,104=39,105=30,106=70,107=22,108=21,109=26,110=52," \
               "111=45,112=33,113=67,114=41,115=44,116=48,117=53,118=54," \
               "119=51,120=75,121=44,122=57,123=44,124=66,125=56,126=52," \
               "127=81,128=108,129=70,130=50,131=51,132=53,133=45,134=62," \
               "135=12,136=13,137=7,138=15,139=21,140=11,141=20,142=6,143=7," \
               "144=11,145=6,146=16,147=19,148=1112,149=1,151=83,154=1," \
               "155=1,156=1,157=1,160=1,161=1,162=2,166=1,169=1,170=1,171=2," \
               "172=1,174=1,176=2,177=9,178=34,179=73,180=30,181=1,185=3," \
               "187=1,188=1,189=1,192=1,196=1,198=1,200=1,201=1,204=1,205=1," \
               "207=1,208=1,209=1,214=2,215=31,216=78,217=28,218=5,219=2," \
               "220=1,222=1,225=1,227=1,234=1,242=1,250=1,252=1,253=1," \
               ">=256=203"
        parsed = parse_info(info)
        self.assert_('allocation_stats' in parsed)
        self.assert_('6' in parsed['allocation_stats'])
        self.assert_('>=256' in parsed['allocation_stats'])

    def test_large_responses(self):
        "The PythonParser has some special cases for return values > 1MB"
        # load up 5MB of data into a key
        data = []
        for i in range(5000000 // len(ascii_letters)):
            data.append(ascii_letters)
        data = ''.join(data)
        self.client.set('a', data)
        self.assertEquals(self.client.get('a'), b(data))

    def test_floating_point_encoding(self):
        """
        High precision floating point values sent to the server should keep
        precision.
        """
        timestamp = 1349673917.939762
        self.client.zadd('a', 'aaa', timestamp)
        self.assertEquals(self.client.zscore('a', 'aaa'), timestamp)