pycassa-1.11.2.1/000077500000000000000000000000001303744607500134075ustar00rootroot00000000000000pycassa-1.11.2.1/.gitignore000066400000000000000000000001331303744607500153740ustar00rootroot00000000000000*.pyc *.swp *.swo *.diff bin/ build/ doc/_build dist/ include/ lib/ man/ pycassa.egg-info/ pycassa-1.11.2.1/.travis.yml000066400000000000000000000005601303744607500155210ustar00rootroot00000000000000language: python python: - "2.6" - "2.7" notifications: email: - hanno@hannosch.eu - pycassa.maintainer@gmail.com services: - cassandra before_script: # give some more time for Cassandra to finish startup and actually test it - sleep 10 - sudo service cassandra status install: - python setup.py develop script: - nosetests -v pycassa-1.11.2.1/AUTHORS000066400000000000000000000007311303744607500144600ustar00rootroot00000000000000Jonathan Hseu Daniel Lundin David King NEMOTO Soshi Ben Bangert Eric Evans Seth Long Whitney Sorenson Tyler Hobbs Jon Hermes Carlo Pires Bjorn Edstrom Adam Lowry Wojciech 'KosciaK' Pietrzok Joaquin Casares Savino Sguera Dan Kuebrich Paul Cannon Kay Sackey Alexey Smolsky Samuel Sutch Aaron Morton Ian Danforth John Calixto Justin Plock Carlo Cabanilla Bayle Shanks Hanno Schlichting Ryan P. Kelley Aleksey Yeschenko Jeremy Katz Umair Mufti Eyal Reuveni pycassa-1.11.2.1/CHANGES000066400000000000000000000757571303744607500144270ustar00rootroot00000000000000Changes in Version 1.11.2 Bug Fixes * Pin Thrift dependency to 0.9.3 to avoid problems with Thrift 0.10.0 (fixes Github issue #245) Changes in Version 1.11.1 Features: * Add describe_token_map() to SystemManager Miscellaneous: * Add TimestampType alias for DateType * Match Cassandra's sorting of TimeUUIDs in stubs Changes in Version 1.11.0 Features: * Upgrade Thrift interface to 19.36.1, which adds support for the LOCAL_ONE consistency level and the populate_io_cache_on_flush column family attribute. Bug Fixes * Return timestamp from remove() in stub ColumnFamily Miscellaneous * Upgrade bundled ez_setup.py Changes in Version 1.10.0 This release only adds one feature: support for Cassandra 1.2’s atomic batches. Features * Add support for Cassandra 1.2+ atomic batches through a new atomic parameter for batch.Mutator, batch.CfMutator, and ColumnFamily.batch(). Changes in Version 1.9.1 This release fixes a few edge cases around connection pooling that can affect long-running applications. It also adds token range support to ColumnFamily.get_range(), which can be useful for parallelizing full-ring scans. Features * Add token range support to ColumnFamily.get_range() Bug Fixes * Prevent possible double connection disposal when recycling connections * Handle empty strings for IntegerType values * Prevent closed connections from being returned to the pool. * Ensure connection count is decremented when pool is disposed Changes in Version 1.9.0 This release adds a couple of minor new features and improves multithreaded locking behavior in ConnectionPool. There should be no backwards- compatibility concerns. Features * Full support for column_start, column_finish, column_count, and column_reversed parameters in contrib.stubs * Addition of an include_ttl parameter to ColumnFamily fetching methods which works like the existing include_timestamp parameter. Bug Fixes * Reduce the locked critical section in ConnectionPool, primarily to make sure lock aquisition time is not ignored outside of the pool’s timeout setting. Changes in Version 1.8.0 This release requires either Python 2.6 or 2.7. Python 2.4 and 2.5 are no longer supported. There are no concrete plans for Python 3 compatibility yet. Features * Add configurable socket_factory attribute and constructor parameter to ConnectionPool and SystemManager. * Add SSL support via the new socket_factory attribute. * Add support for DynamicCompositeType * Add mock support through a new pycassa.contrib.stubs module Bug Fixes * Don’t return closed connections to the pool. This was primarily a problem when operations failed after retrying up to the limit, resulting in a MaximumRetryException or AllServersUnavailable. * Set keyspace for connection after logging in instead of before. This fixes authentication against Cassandra 1.2, which requires logging in prior to setting a keyspace. * Specify correct UUID variant when creating v1 uuid.UUID objects from datetimes or timestamps * Add 900ns to v1 uuid.UUID timestamps when the “max” TimeUUID for a specific datetime or timestamp is requested, such as a column slice end * Also look at attributes of parent classes when creating columns from attributes in ColumnFamilyMap Other * Upgrade bundled Thrift-generated python to 19.35.0, generated with Thrift 0.9.0. Changes in Version 1.7.2 This release fixes a minor bug and upgrades the bundled Cassandra Thrift client interface to 19.34.0, matching Cassandra 1.2.0-beta1. This doesn't affect any existing Thrift methods, only adds new ones (that aren't yet utilized by pycassa), so there should not be any breakage. Bug Fixes * Fix single-component composite packing * Avoid cyclic imports during installation in setup.py Other * Travis CI integration Changes in Version 1.7.1 This release has few changes, and should make for a smooth upgrade from 1.7.0. Features * Add support for DecimalType Bug Fixes * Fix bad slice ends when using xget() with composite columns and a column_finish parameter * Fix bad documentation paths in debian packaging scripts Other * Add __version__ and __version_info__ attributes to the pycassa module Changes in Version 1.7.0 This release has a few relatively large changes in it: a new connection pool stats collector, compatibility with Cassandra 0.7 through 1.1, and a change in timezone behavior for datetimes. Before upgrading, take special care to make sure datetimes that you pass to pycassa (for TimeUUIDType or DateType data) are in UTC, and make sure your code expects to get UTC datetimes back in return. Likewise, the SystemManager changes should be backwards compatible, but there may be minor differences, mostly in create_column_family() and alter_column_family(). Be sure to test any code that works programmatically with these. Features * Added StatsLogger for tracking ConnectionPool metrics * Full Cassandra 1.1 compatibility in SystemManager. To support this, all column family or keyspace attributes that have existed since Cassandra 0.7 may be used as keyword arguments for create_column_family() and alter_column_family(). It is up to the user to know which attributes are available and valid for their version of Cassandra. As part of this change, the version-specific thrift-generated cassandra modules (pycassa.cassandra.c07, pycassa.cassandra.c08, and pycassa.cassandra.c10) have been replaced by pycassa.cassandra. A minor related change is that individual connections now now longer ask for the node’s API version, and that information is no longer stored as an attribute of the ConnectionWrapper. Bug Fixes * Fix xget() paging for non-string comparators * Add batch_insert() to ColumnFamilyMap * Use setattr instead of directly updating the object’s __dict__ in * ColumnFamilyMap to avoid breaking descriptors * Fix single-column counter increments with ColumnFamily.insert() * Include AuthenticationException and AuthorizationException in the pycassa module * Support counters in xget() * Sort column families in pycassaShell for display * Raise TypeError when bad keyword arguments are used when creating a ColumnFamily object Other All datetime objects create by pycassa now use UTC as their timezone rather than the local timezone. Likewise, naive datetime objects that are passed to pycassa are now assumed to be in UTC time, but tz_info is respected if set. Specifically, the types of data that you may need to make adjustments for when upgrading are TimeUUIDType and DateType (including OldPycassaDateType and IntermediateDateType). Changes in Version 1.6.0 This release adds a few minor features and several important bug fixes. The most important change to take note of if you are using composite comparators is the change to the default inclusive/exclusive behavior for slice ends. Other than that, this should be a smooth upgrade from 1.5.x. Features * New script for easily building RPM packages * Add request and parameter information to PoolListener callback * Add ColumnFamily.xget(), a generator version of get() that automatically pages over columns in reasonably sized chunks * Add support for Int32Type, a 4-byte signed integer format * Add constants for the highest and lowest possible TimeUUID values to pycassa.util Bug Fixes * Various 2.4 syntax errors * Raise AllServersUnavailable if server_list is empty * Handle custom types inside of composites * Don’t erase comment when updating column families * Match Cassandra’s sorting of TimeUUIDType values when the timestamps tie * This could result in some columns being erroneously left off of the end of column slices when datetime objects or timestamps were used for column_start or column_finish. * Use gevent’s queue in place of the stdlib version when gevent monkeypatching has been applied. * Avoid sub-microsecond loss of precision with TimeUUID timestamps when using pycassa.util.convert_time_to_uuid() * Make default slice ends inclusive when using CompositeType comparator * Previously, the end of the slice was exclusive by default (as was the start of the slice when column_reversed was True) Changes in Version 1.5.1 This release only affects those of you using DateType data, which has been supported since pycassa 1.2.0. If you are using DateType, it is very important that you read this closely. DateType data is internally stored as an 8 byte integer timestamp. Since version 1.2.0 of pycassa, the timestamp stored has counted the number of microseconds since the unix epoch. The actual format that Cassandra standardizes on is milliseconds since the epoch. If you are only using pycassa, you probably won’t have noticed any problems with this. However, if you try to use cassandra-cli, sstable2json, Hector, or any other client that supports DateType, DateType data written by pycassa will appear to be far in the future. Similarly, DateType data written by other clients will appear to be in the past when loaded by pycassa. This release changes the default DateType behavior to comply with the standard, millisecond-based format. If you use DateType, and you upgrade to this release without making any modifications, you will have problems. Unfortunately, this is a bit of a tricky situation to resolve, but the appropriate actions to take are detailed below. To temporarily continue using the old behavior, a new class has been created: pycassa.types.OldPycassaDateType. This will read and write DateType data exactly the same as pycassa 1.2.0 to 1.5.0 did. If you want to convert your data to the new format, the other new class, pycassa.types.IntermediateDateType, may be useful. It can read either the new or old format correctly (unless you have used dates close to 1970 with the new format) and will write only the new format. The best case for using this is if you have DateType validated columns that don’t have a secondary index on them. To tell pycassa to use OldPycassaDateType or IntermediateDateType, use the ColumnFamily attributes that control types: column_name_class, key_validation_class, column_validators, and so on. Here’s an example: from pycassa.types import OldPycassaDateType, IntermediateDateType from pycassa.column_family import ColumnFamily from pycassa.pool import ConnectionPool pool = ConnectionPool('MyKeyspace', ['192.168.1.1']) # Our tweet timeline has a comparator_type of DateType tweet_timeline_cf = ColumnFamily(pool, 'tweets') tweet_timeline_cf.column_name_class = OldPycassaDateType() # Our tweet timeline has a comparator_type of DateType users_cf = ColumnFamily(pool, 'users') users_cf.column_validators['join_date'] = IntermediateDateType() If you’re using DateType for the key_validation_class, column names, column values with a secondary index on them, or are using the DateType validated column as a non-indexed part of an index clause with get_indexed_slices() (eg. “where state = ‘TX’ and join_date > 2012”), you need to be more careful about the conversion process, and IntermediateDateType probably isn’t a good choice. In most of cases, if you want to switch to the new date format, a manual migration script to convert all existing DateType data to the new format will be needed. In particular, if you convert keys, column names, or indexed columns on a live data set, be very careful how you go about it. If you need any assistance or suggestions at all with migrating your data, please feel free to send an email to tyler@datastax.com; I would be glad to help. Changes in Version 1.5.0 The main change to be aware of for this release is the new no-retry behavior for counter operations. If you have been maintaining a separate connection pool with retries disabled for usage with counters, you may discontinue that practice after upgrading. Features By default, counter operations will not be retried automatically. This makes it easier to use a single connection pool without worrying about overcounting. Bug Fixes * Don’t remove entire row when an empty list is supplied for the columns parameter of remove() or the batch remove methods. * Add python-setuptools to debian build dependencies * Batch remove() was not removing subcolumns when the specified supercolumn was 0 or other “falsey” values * Don’t request an extra row when reading fewer than buffer_size rows with get_range() or get_indexed_slices(). * Remove pool_type from logs, which showed up as None in recent versions * Logs were erroneously showing the same server for retries of failed operations even when the actual server being queried had changed Changes in Version 1.4.0 This release is primarily a bugfix release with a couple of minor features and removed deprecated items. Features * Accept column_validation_classes when creating or altering column families with SystemManager * Ignore UNREACHABLE nodes when waiting for schema version agreement Bug Fixes * Remove accidental print statement in SystemManager * Raise TypeError when unexpected types are used for comparator or validator types when creating or altering a Column Family * Fix packing of column values using column-specific validators during batch inserts when the column name is changed by packing * Always return timestamps from inserts * Fix NameError when timestamps are used where a DateType is expected * Fix NameError in python 2.4 when unpacking DateType objects * Handle reading composites with trailing components missing * Upgrade ez_setup.py to fix broken setuptools link Removed Deprecated Items The following items have been removed: * pycassa.connect() * pycassa.connect_thread_local() * ConnectionPool.status() * ConnectionPool.recreate() Changes in Version 1.3.0 This release adds full compatibility with Cassandra 1.0 and removes support for schema manipulation in Cassandra 0.7. In this release, schema manipulation should work with Cassandra 0.8 and 1.0, but not 0.7. The data API should continue to work with all three versions. Bug Fixes * Don’t ignore columns parameter in ColumnFamilyMap.insert() * Handle empty instance fields in ColumnFamilyMap.insert() * Use the same default for timeout in pycassa.connect() as ConnectionPool uses * Fix typo which caused a different exception to be thrown when an AllServersUnavailable exception was raised * IPython 0.11 compatibility in pycassaShell * Correct dependency declaration in setup.py * Add UUIDType to supported types Features The filter_empty parameter was added to get_range() with a default of True; this allows empty rows to be kept if desired Deprecated pycassa.connect() pycassa.connect_thread_local() Changes in Version 1.2.1 This is strictly a bug-fix release addressing a few issues created in 1.2.0. Bug Fixes * Correctly check for Counters in ColumnFamily when setting default_validation_class * Pass kwargs in ColumnFamilyMap to ColumnFamily * Avoid potential UnboundLocal in ConnectionPool.execute() when get() fails * Fix ez_setup dependency/bundling so that package installations using easy_install or pip don’t fail without ez_setup installed Changes in Version 1.2.0 This should be a fairly smooth upgrade from pycassa 1.1. The primary changes that may introduce minor incompatibilities are the changes to ColumnFamilyMap and the automatic skipping of "ghost ranges" in .ColumnFamily.get_range(). Features * Add ConnectionPool.fill() * Add FloatType, DoubleType, DateType, and BooleanType support. * Add CompositeType support for static composites. See "Composite Types" for more details. * Add timestamp, ttl to ColumnFamilyMap.insert() params * Support variable-length integers with IntegerType. This allows more space-efficient small integers as well as integers that exceed the size of a long. * Make ColumnFamilyMap a subclass of ColumnFamily instead of using one as a component. This allows all of the normal adjustments normally done to a ColumnFamily to be done to a ColumnFamilyMap instead. See "Class Mapping with Column Family Map" for examples of using the new version. * Expose the following ConnectionPool attributes, allowing them to be altered after creation: max_overflow, pool_timeout, recycle, max_retries, and logging_name. Previously, these were all supplied as constructor arguments. Now, the preferred way to set them is to alter the attributes after creation. (However, they may still be set in the constructor by using keyword arguments.) * Automatically skip "ghost ranges" in ColumnFamily.get_range(). Rows without any columns will not be returned by the generator, and these rows will not count towards the supplied row_count. Bug Fixes * Add connections to ConnectionPool more readily when prefill is False. Before this change, if the ConnectionPool was created with prefill=False, connections would only be added to the pool when there was concurrent demand for connections. After this change, if prefill=False and pool_size=N, the first N operations will each result in a new connection being added to the pool. * Close connection and adjust the ConnectionPool‘s connection count after a TApplicationException. This exception generally indicates programmer error, so it’s not extremely common. * Handle typed keys that evaluate to False Deprecated * ConnectionPool.recreate() * ConnectionPool.status() Miscellaneous * Better failure messages for ConnectionPool failures * More efficient packing and unpacking * More efficient multi-column inserts in ColumnFamily.insert() and ColumnFamily.batch_insert() * Prefer Python 2.7’s collections.OrderedDict over the bundled version when available Changes in Version 1.1.1 Features * Add max_count and column_reversed params to get_count() * Add max_count and column_reversed params to multiget_count() Bug Fixes * Don’t retry operations after a TApplicationException. This exception is reserved for programmatic errors (such as a bad API parameters), so retries are not needed. * If the read_consistency_level kwarg was used in a ColumnFamily constructor, it would be ignored, resulting in a default read consistency level of ONE. This did not affect the read consistency level if it was specified in any other way, including per-method or by setting the read_consistency_level attribute. Changes in Version 1.1.0 This release adds compatibility with Cassandra 0.8, including support for counters and key_validation_class. This release is backwards-compatible with Cassandra 0.7, and can support running against a mixed cluster of both Cassandra 0.7 and 0.8. Changes related to Cassandra 0.8 * Addition of COUNTER_COLUMN_TYPE to system_manager. * Several new column family attributes, including key_validation_class, replicate_on_write, merge_shards_chance, row_cache_provider, and key_alias. * The new ColumnFamily.add() and ColumnFamily.remove_counter() methods. * Support for counters in pycassa.batch and ColumnFamily.batch_insert(). * Autopacking of keys based on key_validation_class. Other Features * ColumnFamily.multiget() now has a buffer_size parameter * ColumnFamily.multiget_count() now returns rows in the order that the keys were passed in, similar to how multiget() behaves. It also uses the dict_class attribute for the containing class instead of always using a dict. * Autpacking behavior is now more transparent and configurable, allowing the user to get functionality similar to the CLI’s assume command, whereby items are packed and unpacked as though they were a certain data type, even if Cassandra does not use a matching comparator type or validation class. This behavior can be controlled through the following attributes: - ColumnFamily.column_name_class - ColumnFamily.super_column_name_class - ColumnFamily.key_validation_class - ColumnFamily.default_validation_class - ColumnFamily.column_validators * A ColumnFamily may reload its schema to handle changes in validation classes with ColumnFamily.load_schema(). Bug Fixes There were several related issues with overlow in ConnectionPool: * Connection failures when a ConnectionPool was in a state of overflow would not result in adjustment of the overflow counter, eventually leading the ConnectionPool to refuse to create new connections. * Settings of -1 for ConnectionPool.overflow erroneously caused overflow to be disabled. * If overflow was enabled in conjunction with prefill being disabled, the effective overflow limit was raised to max_overflow + pool_size. Other * Overflow is now disabled by default in ConnectionPool. * ColumnFamilyMap now sets the underlying ColumnFamily‘s autopack_names and autopack_values attributes to False upon construction. * Documentation and tests will no longer be included in the packaged tarballs. Removed Deprecated Items The following deprecated items have been removed: * ColumnFamilyMap.get_count() * The instance parameter from ColumnFamilyMap.get_indexed_slices() * The Int64 Column type. * SystemManager.get_keyspace_description() Deprecated Athough not technically deprecated, most ColumnFamily constructor arguments should instead be set by setting the corresponding attribute on the ColumnFamily after construction. However, all previous constructor arguments will continue to be supported if passed as keyword arguments. Changes in Version 1.0.8 * Pack IndexExpression values in get_indexed_slices() that are supplied through the IndexClause instead of just the instance parameter. * Column names and values which use Cassandra’s IntegerType are unpacked as though they are in a BigInteger-like format. This is (backwards) compatible with the format that pycassa uses to pack IntegerType data. This fixes an incompatibility with the format that cassandra-cli and other clients use to pack IntegerType data. * Restore Python 2.5 compatibility that was broken through out of order keyword arguments in ConnectionWrapper. * Pack column_start and column_finish arguments in ColumnFamily *get*() methods when the super_column parameter is used. * Issue a DeprecationWarning when a method, parameter, or class that has been deprecated is used. Most of these have been deprecated for several releases, but no warnings were issued until now. * Deprecations are now split into separate sections for each release in the changelog. Deprecated in Version 1.0.8 * The instance parameter of ColumnFamilyMap.get_indexed_slices() Changes in Version 1.0.7 * Catch KeyError in pycassa.columnfamily.ColumnFamily.multiget() empty row removal. If the same non-existent key was passed multiple times, a KeyError was raised when trying to remove it from the OrderedDictionary after the first removal. The KeyError is caught and ignored now. * Handle connection failures during retries. When a connection fails, it tries to create a new connection to replace itself. Exceptions during this process were not properly handled; they are now handled and count towards the retry count for the current operation. * Close connection when a MaximumRetryException is raised. Normally a connection is closed when an operation it is performing fails, but this was not happening for the final failure that triggers the MaximumRetryException. Changes in Version 1.0.6 * Add EOFError to the list of exceptions that cause a connection swap and retry * Improved autopacking efficiency for AsciiType, UTF8Type, and BytesType * Preserve sub-second timestamp precision in datetime arguments for insertion or slice bounds where a TimeUUID is expected. Previously, precision below a second was lost. * In a MaximumRetryException‘s message, include details about the last Exception that caused the MaximumRetryException to be raised * pycassa.pool.ConnectionPool.status() now always reports a non-negative overflow; 0 is now used when there is not currently any overflow * Created pycassa.types.Long as a replacement for pycassa.types.Int64. Long uses big-endian encoding, which is compatible with Cassandra’s LongType, while Int64 used little-endian encoding. Deprecated in Version 1.0.6 * pycassa.types.Int64 has been deprecated in favor of pycassa.types.Long Changes in Version 1.0.5 * Assume port 9160 if only a hostname is given * Remove super_column param from pycassa.columnfamily.ColumnFamily.get_indexed_slices() * Enable failover on functions that previously lacked it * Increase base backoff time to 0.01 seconds * Add a timeout parameter to pycassa.system_manager.SystemManger * Return timestamp on single-column inserts Changes in Version 1.0.4 * Fixed threadlocal issues that broke multithreading * Fix bug in pycassa.columnfamily.ColumnFamily.remove() when a super_column argument is supplied * Fix minor PoolLogger logging bugs * Added pycassa.system_manager.SystemManager.describe_partitioner() * Added pycassa.system_manager.SystemManager.describe_snitch() * Added pycassa.system_manager.SystemManager.get_keyspace_properties() * Moved pycassa.system_manager.SystemManager.describe_keyspace() and pycassa.system_manager.SystemManager.describe_column_family() to pycassaShell describe_keyspace() and describe_column_family() Deprecated in Version 1.0.4 * Renamed pycassa.system_manager.SystemManager.get_keyspace_description() to pycassa.system_manager.SystemManager.get_keyspace_column_families() and deprecated the previous name Changes in Version 1.0.3 * Fixed supercolumn slice bug in get() * pycassaShell now runs scripts with execfile to allow for multiline statements * 2.4 compatability fixes Changes in Version 1.0.2 * Failover handles a greater set of potential failures * pycassaShell now loads/reloads pycassa.columnfamily.ColumnFamily instances when the underlying column family is created or updated * Added an option to pycassaShell to run a script after startup * Added pycassa.system_manager.SystemManager.list_keyspaces() Changes in Version 1.0.1 * Allow pycassaShell to be run without specifying a keyspace * Added pycassa.system_manager.SystemManager.describe_schema_versions() Changes in Version 1.0.0 * Created the SystemManager class to allow for keyspace, column family, and index creation, modification, and deletion. These operations are no longer provided by a Connection class. * Updated pycassaShell to use the SystemManager class * Improved retry behavior, including exponential backoff and proper resetting of the retry attempt counter * Condensed connection pooling classes into only pycassa.pool.ConnectionPool to provide a simpler API * Changed pycassa.connection.connect() to return a connection pool * Use more performant Thrift API methods for insert() and get() where possible * Bundled OrderedDict and set it as the default dictionary class for column families * Provide better TypeError feedback when columns are the wrong type * Use Thrift API 19.4.0 Deprecated in Version 1.0.0 * ColumnFamilyMap.get_count() has been deprecated. Use ColumnFamily.get_count() instead. Changes in Version 0.5.4 * Allow for more backward and forward compatibility * Mark a server as being down more quickly in Connection Changes in Version 0.5.3 * Added PooledColumnFamily, which makes it easy to use connection pooling automatically with a ColumnFamily. Changes in Version 0.5.2 * Support for adding/updating/dropping Keyspaces and CFs in pycassa.connection.Connection * get_range() optimization and more configurable batch size * batch get_indexed_slices() similar to get_range() * Reorganized pycassa logging * More efficient packing of data types * Fix error condition that results in infinite recursion * Limit pooling retries to only appropriate exceptions * Use Thrift API 19.3.0 Changes in Version 0.5.1 * Automatically detect if a column family is a standard column family or a super column family * multiget_count() support * Allow preservation of key order in multiget() if an ordered dictionary is used * Convert timestamps to v1 UUIDs where appropriate * pycassaShell documentation * Use Thrift API 17.1.0 Changes in Version 0.5.0 * Connection Pooling support: pycassa.pool * Started moving logging to pycassa.logger * Use Thrift API 14.0.0 Changes in Version 0.4.3 * Autopack on CF’s default_validation_class * Use Thrift API 13.0.0 Changes in Version 0.4.2 * Added batch mutations interface: pycassa.batch * Made bundled thrift-gen code a subpackage of pycassa * Don’t attempt to reencode already encoded UTF8 strings Changes in Version 0.4.1 * Added batch_insert() * Redifined insert() in terms of batch_insert() * Fixed UTF8 autopacking * Convert datetime slice args to uuids when appropriate * Changed how thrift-gen code is bundled * Assert that the major version of the thrift API is the same on the client and on the server * Use Thrift API 12.0.0 Changes in Version 0.4.0 * Added pycassaShell, a simple interactive shell * Converted the test config from xml to yaml * fixed overflow error on get_count() * Only insert columns which exist in the model object * Make ColumnFamilyMap not ignore the ColumnFamily’s dict_class * Specify keyspace as argument to connect() * Add support for framed transport and default to using it * Added autopacking for column names and values * Added support for secondary indexes with get_indexed_slices() and pycassa.index * Added truncate() * Use Thrift API 11.0.0 pycassa-1.11.2.1/LICENSE000066400000000000000000000023241303744607500144150ustar00rootroot00000000000000cassandra/ - Read the cassandra/LICENSE file. It's the Apache license that comes with Cassandra Everything else: http://www.opensource.org/licenses/mit-license.php Copyright (c) 2009 Jonathan Hseu Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. pycassa-1.11.2.1/README.mkd000066400000000000000000000067541303744607500150550ustar00rootroot00000000000000pycassa ======= [![Build Status](https://secure.travis-ci.org/pycassa/pycassa.png?branch=master)](http://travis-ci.org/pycassa/pycassa) pycassa is a Thrift-based python client library for [Apache Cassandra](http://cassandra.apache.org) **pycassa does not support CQL** or Cassandra's native protocol, which are a replacement for the Thrift interface that pycassa is based on. If you are starting a new project, **it is highly recommended that you use the newer** [DataStax python driver](https://github.com/datastax/python-driver) instead of pycassa. pycassa is open source under the [MIT license](http://www.opensource.org/licenses/mit-license.php). Documentation ------------- Documentation can be found here: [http://pycassa.github.com/pycassa/](http://pycassa.github.com/pycassa/) It includes [installation instructions](http://pycassa.github.com/pycassa/installation.html), a [tutorial](http://pycassa.github.com/pycassa/tutorial.html), [API documentation](http://pycassa.github.com/pycassa/api/index.html), and a [change log](http://pycassa.github.com/pycassa/changelog.html). Getting Help ------------ IRC: * Use the #cassandra channel on irc.freenode.net. If you don't have an IRC client, you can use [freenode's web based client](http://webchat.freenode.net/?channels=#cassandra). Mailing List: * User list: [http://groups.google.com/group/pycassa-discuss](http://groups.google.com/group/pycassa-discuss) * Developer list: [http://groups.google.com/group/pycassa-devel](http://groups.google.com/group/pycassa-devel) Installation ------------ If pip is available, you can install the lastest pycassa release with: pip install pycassa If you want to install from a source checkout, make sure you have Thrift installed, and run setup.py as a superuser: pip install thrift python setup.py install Basic Usage ----------- To get a connection pool, pass a Keyspace and an optional list of servers: ~~~~~~ {python} >>> import pycassa >>> pool = pycassa.ConnectionPool('Keyspace1') # Defaults to connecting to the server at 'localhost:9160' >>> >>> # or, we can specify our servers: >>> pool = pycassa.ConnectionPool('Keyspace1', server_list=['192.168.2.10']) ~~~~~~ To use the standard interface, create a ColumnFamily instance. ~~~~~~ {python} >>> pool = pycassa.ConnectionPool('Keyspace1') >>> cf = pycassa.ColumnFamily(pool, 'Standard1') >>> cf.insert('foo', {'column1': 'val1'}) >>> cf.get('foo') {'column1': 'val1'} ~~~~~~ insert() will also update existing columns: ~~~~~~ {python} >>> cf.insert('foo', {'column1': 'val2'}) >>> cf.get('foo') {'column1': 'val2'} ~~~~~~ You may insert multiple columns at once: ~~~~~~ {python} >>> cf.insert('bar', {'column1': 'val3', 'column2': 'val4'}) >>> cf.multiget(['foo', 'bar']) {'foo': {'column1': 'val2'}, 'bar': {'column1': 'val3', 'column2': 'val4'}} >>> cf.get_count('bar') 2 ~~~~~~ get_range() returns an iterable. You can use list() to convert it to a list: ~~~~~~ {python} >>> list(cf.get_range()) [('bar', {'column1': 'val3', 'column2': 'val4'}), ('foo', {'column1': 'val2'})] >>> list(cf.get_range(row_count=1)) [('bar', {'column1': 'val3', 'column2': 'val4'})] ~~~~~~ You can remove entire keys or just a certain column: ~~~~~~ {python} >>> cf.remove('bar', columns=['column1']) >>> cf.get('bar') {'column2': 'val4'} >>> cf.remove('bar') >>> cf.get('bar') Traceback (most recent call last): ... pycassa.NotFoundException: NotFoundException() ~~~~~~ See the [tutorial](http://pycassa.github.com/pycassa/tutorial.html#connecting-to-cassandra) for more details. pycassa-1.11.2.1/doc/000077500000000000000000000000001303744607500141545ustar00rootroot00000000000000pycassa-1.11.2.1/doc/Makefile000066400000000000000000000060701303744607500156170ustar00rootroot00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/pycassa.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/pycassa.qhc" latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." pycassa-1.11.2.1/doc/__init__.py000066400000000000000000000000011303744607500162540ustar00rootroot00000000000000 pycassa-1.11.2.1/doc/_static/000077500000000000000000000000001303744607500156025ustar00rootroot00000000000000pycassa-1.11.2.1/doc/_static/favicon.ico000066400000000000000000000025761303744607500177350ustar00rootroot00000000000000h( rDB@xѿ-,.!!"WRH<<=..0333ffi+-1mmoghkz))+>>>99:vvw*)*ҵLyy/LMN#%*''' ' '''/%!(''///$$$1'0 /$$ &$+02" -$$'),*#. 0###1 #'#'/'/'#'/'7 opycassa-1.11.2.1/doc/api/000077500000000000000000000000001303744607500147255ustar00rootroot00000000000000pycassa-1.11.2.1/doc/api/index.rst000066400000000000000000000005571303744607500165750ustar00rootroot00000000000000API Documentation ================= Pycassa Modules --------------- .. toctree:: :maxdepth: 3 pycassa pycassa/pool pycassa/columnfamily pycassa/columnfamilymap pycassa/system_manager pycassa/index pycassa/batch pycassa/types pycassa/util pycassa/logging/pycassa_logger pycassa/logging/pool_stats_logger pycassa/contrib/stubs pycassa-1.11.2.1/doc/api/pycassa.rst000066400000000000000000000040711303744607500171240ustar00rootroot00000000000000:mod:`pycassa` - Exceptions and Enums ===================================== .. module:: pycassa .. exception:: AuthenticationException The credentials supplied when creating a connection did not validate, indicating a bad username or password. .. exception:: AuthorizationException The user that is currently logged in for a connection was not permitted to perform an action. .. exception:: InvalidRequestException Something about the request made was invalid or malformed. The request should not be repeated without modification. Sometimes checking the server logs may help debug what was wrong with the request. .. exception:: NotFoundException The row requested does not exist, or the slice requested was empty. .. exception:: UnavailableException Not enough replicas are up to satisfy the requested consistency level. .. exception:: TimedOutException The replica node did not respond to the coordinator node within ``rpc_timeout_in_ms`` (as configured in :file:`cassandra.yaml`), typically indicating that the replica is overloaded or just went down. .. class:: pycassa.ConsistencyLevel .. data:: ANY Only requires that one replica receives the write *or* the coordinator stores a hint to replay later. Valid only for writes. .. data:: ONE Only one replica needs to respond to consider the operation a success .. data:: QUORUM `ceil(RF/2)` replicas must respond to consider the operation a success .. data:: ALL All replicas must respond to consider the operation a success .. data:: LOCAL_QUORUM Requres a quorum of replicas in the local datacenter .. data:: LOCAL_ONE Has the same behavior as ONE, except that Only replicas in the local datacenter are sent queries .. data:: EACH_QUORUM Requres a quorum of replicas in each datacenter .. data:: TWO Two replicas must respond to consider the operation a success .. data:: THREE Three replicas must respond to consider the operation a success pycassa-1.11.2.1/doc/api/pycassa/000077500000000000000000000000001303744607500163705ustar00rootroot00000000000000pycassa-1.11.2.1/doc/api/pycassa/batch.rst000066400000000000000000000010711303744607500202020ustar00rootroot00000000000000:mod:`pycassa.batch` -- Batch Operations ======================================== .. automodule:: pycassa.batch .. autoclass:: pycassa.batch.Mutator .. automethod:: insert(column_family, key, columns[, timestamp][, ttl]) .. automethod:: remove(column_family, key[, columns][, super_column][, timestamp]) .. automethod:: send([write_consistency_level]) .. autoclass:: pycassa.batch.CfMutator .. automethod:: insert(key, cols[, timestamp][, ttl]) .. automethod:: remove(key[, columns][, super_column][, timestamp]) pycassa-1.11.2.1/doc/api/pycassa/columnfamily.rst000066400000000000000000000054571303744607500216340ustar00rootroot00000000000000:mod:`pycassa.columnfamily` -- Column Family ============================================ .. automodule:: pycassa.columnfamily .. automethod:: pycassa.columnfamily.gm_timestamp .. autoclass:: pycassa.columnfamily.ColumnFamily(pool, column_family) .. autoattribute:: read_consistency_level .. autoattribute:: write_consistency_level .. autoattribute:: autopack_names .. autoattribute:: autopack_values .. autoattribute:: autopack_keys .. autoattribute:: column_name_class .. autoattribute:: super_column_name_class .. autoattribute:: default_validation_class .. autoattribute:: column_validators .. autoattribute:: key_validation_class .. autoattribute:: dict_class .. autoattribute:: buffer_size .. autoattribute:: column_buffer_size .. autoattribute:: timestamp .. automethod:: load_schema() .. automethod:: get(key[, columns][, column_start][, column_finish][, column_reversed][, column_count][, include_timestamp][, super_column][, read_consistency_level]) .. automethod:: multiget(keys[, columns][, column_start][, column_finish][, column_reversed][, column_count][, include_timestamp][, super_column][, read_consistency_level][, buffer_size]) .. automethod:: xget(key[, column_start][, column_finish][, column_reversed][, column_count][, include_timestamp][, read_consistency_level][, buffer_size]) .. automethod:: get_count(key[, super_column][, columns][, column_start][, column_finish][, super_column][, read_consistency_level][, column_reversed][, max_count]) .. automethod:: multiget_count(key[, super_column][, columns][, column_start][, column_finish][, super_column][, read_consistency_level][, buffer_size][, column_reversed][, max_count]) .. automethod:: get_range([start][, finish][, columns][, column_start][, column_finish][, column_reversed][, column_count][, row_count][, include_timestamp][, super_column][, read_consistency_level][, buffer_size][, filter_empty]) .. automethod:: get_indexed_slices(index_clause[, columns][, column_start][, column_finish][, column_reversed][, column_count][, include_timestamp][, read_consistency_level][, buffer_size]) .. automethod:: insert(key, columns[, timestamp][, ttl][, write_consistency_level]) .. automethod:: batch_insert(rows[, timestamp][, ttl][, write_consistency_level]) .. automethod:: add(key, column[, value][, super_column][, write_consistency_level]) .. automethod:: remove(key[, columns][, super_column][, write_consistency_level]) .. automethod:: remove_counter(key, column[, super_column][, write_consistency_level]) .. automethod:: truncate() .. automethod:: batch(self[, queue_size][, write_consistency_level]) pycassa-1.11.2.1/doc/api/pycassa/columnfamilymap.rst000066400000000000000000000023401303744607500223160ustar00rootroot00000000000000:mod:`pycassa.columnfamilymap` -- Maps Classes to Column Families ================================================================= .. automodule:: pycassa.columnfamilymap .. autoclass:: pycassa.columnfamilymap.ColumnFamilyMap(cls, pool, column_family[, raw_columns]) .. automethod:: get(key[, columns][, column_start][, column_finish][, column_count][, column_reversed][, super_column][, read_consistency_level]) .. automethod:: multiget(keys[, columns][, column_start][, column_finish][, column_count][, column_reversed][, super_column][, read_consistency_level]) .. automethod:: get_range([start][, finish][, columns][, column_start][, column_finish][, column_reversed][, column_count][, row_count][, super_column][, read_consistency_level][, buffer_size]) .. automethod:: get_indexed_slices(index_clause[, columns][, column_start][, column_finish][, column_reversed][, column_count][, include_timestamp][, read_consistency_level][, buffer_size]) .. automethod:: insert(instance[, columns][, write_consistency_level]) .. automethod:: batch_insert(instances[, timestamp][, ttl][, write_consistency_level]) .. automethod:: remove(instance[, columns][, write_consistency_level]) pycassa-1.11.2.1/doc/api/pycassa/contrib/000077500000000000000000000000001303744607500200305ustar00rootroot00000000000000pycassa-1.11.2.1/doc/api/pycassa/contrib/stubs.rst000066400000000000000000000023531303744607500217250ustar00rootroot00000000000000:mod:`pycassa.contrib.stubs` -- Pycassa Stubs ============================================= .. automodule:: pycassa.contrib.stubs .. autoclass:: pycassa.contrib.stubs.ColumnFamilyStub(pool=None, column_family=None, rows=None) .. automethod:: get(key[, columns][, column_start][, column_finish][, column_reversed][, column_count][, include_timestamp]) .. automethod:: multiget(keys[, columns][, column_start][, column_finish][, column_reversed][, column_count][, include_timestamp]) .. automethod:: get_range([columns][, include_timestamp]) .. automethod:: get_indexed_slices(index_clause[, columns], include_timestamp]) .. automethod:: insert(key, columns[, timestamp]) .. automethod:: remove(key[, columns]) .. automethod:: truncate() .. automethod:: batch(self) .. autoclass:: pycassa.contrib.stubs.ConnectionPoolStub() .. autoclass:: pycassa.contrib.stubs.SystemManagerStub() .. automethod:: create_column_family(keyspace, table_name) .. automethod:: alter_column(keyspace, table_name, column_name, column_type) .. automethod:: create_index(keyspace, table_name, column_name, column_type) .. automethod:: describe_schema_versions() pycassa-1.11.2.1/doc/api/pycassa/index.rst000066400000000000000000000007331303744607500202340ustar00rootroot00000000000000:mod:`pycassa.index` -- Secondary Index Tools ============================================= .. automodule:: pycassa.index .. autodata:: pycassa.index.EQ .. autodata:: pycassa.index.GT .. autodata:: pycassa.index.GTE .. autodata:: pycassa.index.LT .. autodata:: pycassa.index.LTE .. automethod:: pycassa.index.create_index_expression(column_name, value[, op=EQ]) .. automethod:: pycassa.index.create_index_clause(expr_list[, start_key][, count]) pycassa-1.11.2.1/doc/api/pycassa/logging/000077500000000000000000000000001303744607500200165ustar00rootroot00000000000000pycassa-1.11.2.1/doc/api/pycassa/logging/pool_stats_logger.rst000066400000000000000000000003051303744607500242740ustar00rootroot00000000000000:mod:`pycassa.logging.pool_stats_logger` -- Connection Pool Stats ================================================================= .. automodule:: pycassa.logging.pool_stats_logger :members: pycassa-1.11.2.1/doc/api/pycassa/logging/pycassa_logger.rst000066400000000000000000000002601303744607500235500ustar00rootroot00000000000000:mod:`pycassa.logging.pycassa_logger` -- Pycassa Logging ======================================================== .. automodule:: pycassa.logging.pycassa_logger :members: pycassa-1.11.2.1/doc/api/pycassa/pool.rst000066400000000000000000000021061303744607500200720ustar00rootroot00000000000000:mod:`pycassa.pool` -- Connection Pooling ========================================= .. automodule:: pycassa.pool .. autoclass:: pycassa.pool.ConnectionPool .. autoattribute:: max_overflow .. autoattribute:: pool_timeout .. autoattribute:: recycle .. autoattribute:: max_retries .. autoattribute:: logging_name .. automethod:: get .. automethod:: put .. automethod:: execute .. automethod:: fill .. automethod:: dispose .. automethod:: set_server_list .. automethod:: size .. automethod:: overflow .. automethod:: checkedin .. automethod:: checkedout .. automethod:: add_listener .. autoexception:: pycassa.pool.AllServersUnavailable .. autoexception:: pycassa.pool.NoConnectionAvailable .. autoexception:: pycassa.pool.MaximumRetryException .. autoexception:: pycassa.pool.InvalidRequestError .. autoclass:: pycassa.pool.ConnectionWrapper :members: .. autoclass:: pycassa.pool.PoolListener :members: pycassa-1.11.2.1/doc/api/pycassa/system_manager.rst000066400000000000000000000003101303744607500221320ustar00rootroot00000000000000:mod:`pycassa.system_manager` -- Manage Schema Definitions ========================================================== .. automodule:: pycassa.system_manager :members: :member-order: bysource pycassa-1.11.2.1/doc/api/pycassa/types.rst000066400000000000000000000016271303744607500202740ustar00rootroot00000000000000:mod:`pycassa.types` -- Data Type Descriptions ============================================== .. automodule:: pycassa.types .. autoclass:: pycassa.types.CassandraType(reversed=False, default=None) .. autoclass:: pycassa.types.BytesType() .. autoclass:: pycassa.types.AsciiType() .. autoclass:: pycassa.types.UTF8Type() .. autoclass:: pycassa.types.LongType() .. autoclass:: pycassa.types.IntegerType() .. autoclass:: pycassa.types.DoubleType() .. autoclass:: pycassa.types.FloatType() .. autoclass:: pycassa.types.DecimalType() .. autoclass:: pycassa.types.DateType() .. autoclass:: pycassa.types.UUIDType() .. autoclass:: pycassa.types.TimeUUIDType() .. autoclass:: pycassa.types.LexicalUUIDType() .. autoclass:: pycassa.types.OldPycassaDateType() .. autoclass:: pycassa.types.IntermediateDateType() .. autoclass:: pycassa.types.CompositeType(*components) .. autoclass:: pycassa.types.DynamicCompositeType(*aliases) pycassa-1.11.2.1/doc/api/pycassa/util.rst000066400000000000000000000001561303744607500201010ustar00rootroot00000000000000:mod:`pycassa.util` -- Utilities ================================ .. automodule:: pycassa.util :members: pycassa-1.11.2.1/doc/assorted/000077500000000000000000000000001303744607500160005ustar00rootroot00000000000000pycassa-1.11.2.1/doc/assorted/column_family_map.rst000066400000000000000000000040171303744607500222270ustar00rootroot00000000000000.. _column-family-map: Class Mapping with Column Family Map ==================================== You can map existing classes to column families using :class:`~pycassa.columnfamilymap.ColumnFamilyMap`. To specify the fields to be persisted, use any of the subclasses of :class:`pycassa.types.CassandraType` available in :mod:`pycassa.types`. .. code-block:: python >>> from pycassa.types import * >>> class User(object): ... key = LexicalUUIDType() ... name = Utf8Type() ... age = IntegerType() ... height = FloatType() ... score = DoubleType(default=0.0) ... joined = DateType() The defaults will be filled in whenever you retrieve instances from the Cassandra server and the column doesn't exist. If you want to add a column in the future, you can simply add the relevant attribute to the class and the default value will be used when you get old instances. .. code-block:: python >>> from pycassa.pool import ConnectionPool >>> from pycassa.columnfamilymap import ColumnFamilyMap >>> >>> pool = ConnectionPool('Keyspace1') >>> cfmap = ColumnFamilyMap(User, pool, 'users') All the functions are exactly the same as for :class:`ColumnFamily`, except that they return instances of the supplied class when possible. .. code-block:: python >>> from datetime import datetime >>> import uuid >>> >>> key = uuid.uuid4() >>> >>> user = User() >>> user.key = key >>> user.name = 'John' >>> user.age = 18 >>> user.height = 5.9 >>> user.joined = datetime.now() >>> cfmap.insert(user) 1261395560186855 .. code-block:: python >>> user = cfmap.get(key) >>> user.name "John" >>> user.age 18 .. code-block:: python >>> users = cfmap.multiget([key1, key2]) >>> print users[0].name "John" >>> for user in cfmap.get_range(): ... print user.name "John" "Bob" "Alex" .. code-block:: python >>> cfmap.remove(user) 1261395603906864 >>> cfmap.get(user.key) Traceback (most recent call last): ... cassandra.ttypes.NotFoundException: NotFoundException() pycassa-1.11.2.1/doc/assorted/composite_types.rst000066400000000000000000000073171303744607500217700ustar00rootroot00000000000000.. _composite-types: Composite Types =============== pycassa currently supports static CompositeTypes. DynamicCompositeType support is planned. Creating a CompositeType Column Family -------------------------------------- When creating a column family, you can specify a CompositeType comparator or validator using :class:`~.CompositeType` in conjunction with the other types in :mod:`pycassa.types`. .. code-block:: python >>> from pycassa.types import * >>> from pycassa.system_manager import * >>> >>> sys = SystemManager() >>> comparator = CompositeType(LongType(reversed=True), AsciiType()) >>> sys.create_column_family("Keyspace1", "CF1", comparator_type=comparator) This example creates a column family with column names that have two components. The first component is a :class:`~.LongType`, sorted in reverse order; the second is a normally sorted :class:`~.AsciiType`. You may put an arbitrary number of components in a :class:`~.CompositeType`, and each component may be reversed or not. Insert CompositeType Data ------------------------- When inserting data, where a :class:`~.CompositeType` is expected, you should supply a tuple which includes all of the components. Continuing the example from above: .. code-block:: python >>> cf = ColumnFamily(pool, "CF1") >>> cf.insert("key", {(1234, "abc"): "colval"}) When dealing with composite keys or column values, supply tuples in exactly the same manner. Fetching CompositeType Data --------------------------- :class:`.CompositeType` data is also returned in a tuple format. .. code-block:: python >>> cf.get("key") {(1234, "abc"): "colval"} When fetching a slice of columns, slice ends are specified using tuples as well. However, you are only required to supply at least the first component of the :class:`.CompositeType`; elements may be left off of the end of the tuple in order to slice columns based on only the first or first few components. For example, suppose our `comparator_type` is ``CompositeType(LongType, AsciiType, LongType)``. Valid slice ends would include ``(1, )``, ``(1, "a")``, and ``(1, "a", 2011)``. If you supply a slice start and a slice end that only specify the first component, you will get back all columns where the first component falls in that range, regardless of what the value of the other components is. When slicing columns, the second component is only compared to the second component of the slice start if the first component of the column name matches the first component of the slice start. Likewise with the slice end, the second component will only be checked if the first components match. In essence, components after the first only serve as "tie-breakers" at the slice ends, and have no effect in the "middle" of the slice. Keep in mind the sorted order of the columns within Cassandra, and that when you get a slice of columns, you can only get a contiguous slice, not separate chunks out of the row. Inclusive or Exclusive Slice Ends ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ By default, slice ends are inclusive on the final component you supply for that slice end. This means that if you give a ``column_finish`` of ``(123, "b")``, then columns named ``(123, "a", 2011)``, ``(123, "b", 0)``, and ``(123, "b" 123098123012)`` would all be returned. With composite types, you have the option to make the slice start and finish exclusive. To do so, replace the final component in your slice end with a tuple like ``(value, False)``. (Think of the ``False`` as being short for ``inclusive=False``. You can also explicitly specify ``True``, but this is redundant.) Now, if you gave a ``column_finish`` of ``(123, ("b", False))``, you would only get back ``(123, "a", 2011)``. The same principle applies for ``column_start``. pycassa-1.11.2.1/doc/assorted/index.rst000066400000000000000000000004641303744607500176450ustar00rootroot00000000000000Assorted Cassandra and pycassa Functionality ============================================ These sections document how to make use of various features offered by either Cassandra or pycassa. .. toctree:: secondary_indexes super_columns composite_types column_family_map time_uuid pycassa_shell pycassa-1.11.2.1/doc/assorted/pycassa_shell.rst000066400000000000000000000105031303744607500213630ustar00rootroot00000000000000.. _pycassa-shell: pycassaShell ============ **pycassaShell** is an interactive Cassandra python shell. It is useful for exploring Cassandra, especially for those who are just beginning. Requirements ------------ Python 2.6 or later is required. Make sure you have **pycassa** installed as shown in :ref:`installing`. It is **strongly** recommended that you have `IPython `_, an enhanced interactive python shell, installed. This gives you tab completion, colors, and working arrow keys! On Debian based systems, this can be installed by: .. code-block:: bash apt-get install ipython Alternatively, if ``easy_install`` is available: .. code-block:: bash easy_install ipython Usage ----- .. code-block:: bash pycassaShell -k KEYSPACE [OPTIONS] The available options are: * ``-H``, ``--host`` - The hostname to connect to. Defaults to 'localhost' * ``-p``, ``--port`` - The port to connect to. Defaults to 9160. * ``-u``, ``--user`` - If authentication or authorization are enabled, this username is used. * ``-P``, ``--passwd`` - If authentication or authorization are enabled, this password is used. * ``-S``, ``--streaming`` - Use a streaming transport. Works with Cassandra 0.6.x and below. * ``-F``, ``--framed`` - Use a framed transport. Works with Cassandra 0.7.x. This is the default. When pycassaShell starts, it creates a :class:`~pycassa.columnfamily.ColumnFamily` for every existing column family and prints the names of the objects. You can use these to easily insert and retrieve data from Cassandra. .. code-block:: python >>> STANDARD1.insert('key', {'colname': 'val'}) 1286048238391943 >>> STANDARD1.get('key') {'colname': 'val'} .. _pycassa-shell-sys-man: If you are interested in the keyspace and column family definitions, **pycassa** provides several methods that can be used with ``SYSTEM_MANAGER``: * :meth:`~pycassa.system_manager.SystemManager.create_keyspace()` * :meth:`~pycassa.system_manager.SystemManager.alter_keyspace()` * :meth:`~pycassa.system_manager.SystemManager.drop_keyspace()` * :meth:`~pycassa.system_manager.SystemManager.create_column_family()` * :meth:`~pycassa.system_manager.SystemManager.alter_column_family()` * :meth:`~pycassa.system_manager.SystemManager.drop_column_family()` * :meth:`~pycassa.system_manager.SystemManager.alter_column()` * :meth:`~pycassa.system_manager.SystemManager.create_index()` * :meth:`~pycassa.system_manager.SystemManager.drop_index()` Example usage: .. code-block:: python >>> describe_keyspace('Keyspace1') Name: Keyspace1 Replication Strategy: SimpleStrategy Replication Factor: 1 Column Families: Indexed1 Standard2 Standard1 Super1 >>> describe_column_family('Keyspace1', 'Indexed1') Name: Indexed1 Description: Column Type: Standard Comparator Type: BytesType Default Validation Class: BytesType Cache Sizes Row Cache: Disabled Key Cache: 200000 keys Read Repair Chance: 100.0% GC Grace Seconds: 864000 Compaction Thresholds Min: 4 Max: 32 Memtable Flush After Thresholds Throughput: 63 MiB Operations: 295312 operations Time: 60 minutes Cache Save Periods Row Cache: Disabled Key Cache: 3600 seconds Column Metadata - Name: birthdate Value Type: LongType Index Type: KEYS Index Name: None >>> SYSTEM_MANAGER.create_keyspace('Keyspace1', strategy_options={"replication_factor": "1"}) >>> SYSTEM_MANAGER.create_column_family('Keyspace1', 'Users', comparator_type=INT_TYPE) >>> SYSTEM_MANAGER.alter_column_family('Keyspace1', 'Users', key_cache_size=100) >>> SYSTEM_MANAGER.create_index('Keyspace1', 'Users', 'birthdate', LONG_TYPE, index_name='bday_index') >>> SYSTEM_MANAGER.drop_keyspace('Keyspace1') pycassa-1.11.2.1/doc/assorted/secondary_indexes.rst000066400000000000000000000033771303744607500222520ustar00rootroot00000000000000.. _secondary-indexes: Secondary Indexes ----------------- Cassandra supports secondary indexes, which allow you to efficiently get only rows which match a certain expression. Here's a `description of secondary indexes and how to use them `_. In order to use :meth:`~pycassa.columnfamily.ColumnFamily.get_indexed_slices()` to get data from an indexed column family using the indexed column, we need to create an :class:`~pycassa.cassandra.ttypes.IndexClause` which contains a list of :class:`~pycassa.cassandra.ttypes.IndexExpression` objects. The :class:`IndexExpression` objects inside the clause are ANDed together, meaning every expression must match for a row to be returned. Suppose we have a 'Users' column family with one row per user, and we want to get all of the users from Utah with a birthdate after 1970. We can make use of the :mod:`pycassa.index` module to make this easier: .. code-block:: python >>> from pycassa.pool import ConnectionPool >>> from pycassa.columnfamily import ColumnFamily >>> from pycassa.index import * >>> pool = ConnectionPool('Keyspace1') >>> users = ColumnFamily(pool, 'Users') >>> state_expr = create_index_expression('state', 'Utah') >>> bday_expr = create_index_expression('birthdate', 1970, GT) >>> clause = create_index_clause([state_expr, bday_expr], count=20) >>> for key, user in users.get_indexed_slices(clause): ... print user['name'] + ",", user['state'], user['birthdate'] John Smith, Utah 1971 Mike Scott, Utah 1980 Jeff Bird, Utah 1973 Although at least one :class:`~pycassa.cassandra.ttypes.IndexExpression` in the clause must be on an indexed column, you may also have other expressions which are on non-indexed columns. pycassa-1.11.2.1/doc/assorted/super_columns.rst000066400000000000000000000027371303744607500214410ustar00rootroot00000000000000Super Columns ============= Cassandra allows you to group columns in "super columns". You would create a super column family through :file:`cassandra-cli` in the following way: :: [default@keyspace1] create column family Super1 with column_type=Super; 632cf985-645e-11e0-ad9e-e700f669bcfc Waiting for schema agreement... ... schemas agree across the cluster To use a super column in **pycassa**, you only need to add an extra level to the dictionary: .. code-block:: python >>> col_fam = pycassa.ColumnFamily(pool, 'Super1') >>> col_fam.insert('row_key', {'supercol_name': {'col_name': 'col_val'}}) 1354491238721345 >>> col_fam.get('row_key') {'supercol_name': {'col_name': 'col_val'}} The `super_column` parameter for :meth:`get()`-like methods allows you to be selective about what subcolumns you get from a single super column. .. code-block:: python >>> col_fam = pycassa.ColumnFamily(pool, 'Letters') >>> col_fam.insert('row_key', {'lowercase': {'a': '1', 'b': '2', 'c': '3'}}) 1354491239132744 >>> col_fam.get('row_key', super_column='lowercase') {'supercol1': {'a': '1': 'b': '2', 'c': '3'}} >>> col_fam.get('row_key', super_column='lowercase', columns=['a', 'b']) {'supercol1': {'a': '1': 'b': '2'}} >>> col_fam.get('row_key', super_column='lowercase', column_start='b') {'supercol1': {'b': '1': 'c': '2'}} >>> col_fam.get('row_key', super_column='lowercase', column_finish='b', column_reversed=True) {'supercol1': {'c': '2', 'b': '1'}} pycassa-1.11.2.1/doc/assorted/time_uuid.rst000066400000000000000000000047301303744607500205220ustar00rootroot00000000000000Version 1 UUIDs (TimeUUIDType) ============================== Version 1 UUIDs are `frequently used for timelines `_ instead of timestamps. Normally, this makes it difficult to get a slice of columns for some time range or to create a column name or value for some specific time. To make this easier, if a :class:`datetime` object or a timestamp with the same precision as the output of ``time.time()`` is passed where a TimeUUID is expected, **pycassa** will convert that into a :class:`uuid.UUID` with an equivalent timestamp component. Suppose we have something like Twissandra's public timeline but with TimeUUIDs for column names. If we want to get all tweets that happened yesterday, we can do: .. code-block:: python >>> import datetime >>> line = pycassa.ColumnFamily(pool, 'Userline') >>> today = datetime.datetime.utcnow() >>> yesterday = today - datetime.timedelta(days=1) >>> tweets = line.get('__PUBLIC__', column_start=yesterday, column_finish=today) Now, suppose there was a tweet that was supposed to be posted on December 11th at 8:02:15, but it was dropped and now we need to put it in the public timeline. There's no need to generate a UUID, we can just pass another datetime object instead: .. code-block:: python >>> from datetime import datetime >>> line = pycassa.ColumnFamily(pool, 'Userline') >>> time = datetime(2010, 12, 11, 8, 2, 15) >>> line.insert('__PUBLIC__', {time: 'some tweet stuff here'}) One limitation of this is that you can't ask for one specific column with a TimeUUID name by passing a :class:`datetime` through something like the `columns` parameter for :meth:`get()`; this is because there is no way to know the non-timestamp components of the UUID ahead of time. Instead, simply pass the same :class:`datetime` object for both `column_start` and `column_finish` and you'll get one or more columns for that exact moment in time. Note that Python does not sort UUIDs the same way that Cassandra does. When Cassandra sorts V1 UUIDs it first compares the time component, and then the raw bytes of the UUID. Python on the other hand just sorts the raw bytes. If you need to sort UUIDs in Python the same way Cassandra does you will want to use something like this: .. code-block:: python >>> import uuid, random >>> uuids = [uuid.uuid1() for _ xrange(10)] >>> random.shuffle(uuids) >>> improperly_sorted = sorted(uuids) >>> properly_sorted = sorted(uuids, key=lambda k: (k.time, k.bytes)) pycassa-1.11.2.1/doc/changelog.rst000066400000000000000000001020221303744607500166320ustar00rootroot00000000000000Changelog ========= Changes in Version 1.11.1 ------------------------- Features ~~~~~~~~ - Add describe_token_map() to SystemManager Miscellaneous ~~~~~~~~~~~~~ - Add TimestampType alias for DateType - Match Cassandra's sorting of TimeUUIDs in stubs Changes in Version 1.11.0 ------------------------- Features ~~~~~~~~ - Upgrade Thrift interface to 19.36.1, which adds support for the ``LOCAL_ONE consistency`` level and the ``populate_io_cache_on_flush`` column family attribute. Bug Fixes ~~~~~~~~~ - Return timestamp from ``remove()`` in stub ColumnFamily Miscellaneous ~~~~~~~~~~~~~ - Upgrade bundled ``ez_setup.py`` Changes in Version 1.10.0 ------------------------- This release only adds one feature: support for Cassandra 1.2's atomic batches. Features ~~~~~~~~ - Add support for Cassandra 1.2+ atomic batches through a new ``atomic`` parameter for :class:`.batch.Mutator`, :class:`.batch.CfMutator`, and :meth:`.ColumnFamily.batch()`. Changes in Version 1.9.1 ------------------------ This release fixes a few edge cases around connection pooling that can affect long-running applications. It also adds token range support to :meth:`.ColumnFamily.get_range()`, which can be useful for parallelizing full-ring scans. Features ~~~~~~~~ - Add token range support to :meth:`.ColumnFamily.get_range()` Bug Fixes ~~~~~~~~~ - Prevent possible double connection disposal when recycling connections - Handle empty strings for IntegerType values - Prevent closed connections from being returned to the pool. - Ensure connection count is decremented when pool is disposed Changes in Version 1.9.0 ------------------------ This release adds a couple of minor new features and improves multithreaded locking behavior in :class:`~.ConnectionPool`. There should be no backwards-compatibility concerns. Features ~~~~~~~~ - Full support for ``column_start``, ``column_finish``, ``column_count``, and ``column_reversed`` parameters in :mod:`~.contrib.stubs` - Addition of an ``include_ttl`` parameter to :class:`~.ColumnFamily` fetching methods which works like the existing ``include_timestamp`` parameter. Bug Fixes ~~~~~~~~~ - Reduce the locked critical section in :class:`~.ConnectionPool`, primarily to make sure lock aquisition time is not ignored outside of the pool's ``timeout`` setting. Changes in Version 1.8.0 ------------------------ This release requires either Python 2.6 or 2.7. Python 2.4 and 2.5 are no longer supported. There are no concrete plans for Python 3 compatibility yet. Features ~~~~~~~~ - Add configurable ``socket_factory`` attribute and constructor parameter to :class:`~.ConnectionPool` and :class:`~.SystemManager`. - Add SSL support via the new ``socket_factory`` attribute. - Add support for :class:`~.DynamicCompositeType` - Add mock support through a new :mod:`pycassa.contrib.stubs` module Bug Fixes ~~~~~~~~~ - Don't return closed connections to the pool. This was primarily a problem when operations failed after retrying up to the limit, resulting in a :exc:`~.MaximumRetryException` or :exc:`~.AllServersUnavailable`. - Set keyspace for connection after logging in instead of before. This fixes authentication against Cassandra 1.2, which requires logging in prior to setting a keyspace. - Specify correct UUID variant when creating v1 :class:`uuid.UUID` objects from datetimes or timestamps - Add 900ns to v1 :class:`uuid.UUID` timestamps when the "max" TimeUUID for a specific datetime or timestamp is requested, such as a column slice end - Also look at attributes of parent classes when creating columns from attributes in :class:`~.ColumnFamilyMap` Other ~~~~~ - Upgrade bundled Thrift-generated python to 19.35.0, generated with Thrift 0.9.0. Changes in Version 1.7.2 ------------------------ This release fixes a minor bug and upgrades the bundled Cassandra Thrift client interface to 19.34.0, matching Cassandra 1.2.0-beta1. This doesn't affect any existing Thrift methods, only adds new ones (that aren't yet utilized by pycassa), so there should not be any breakage. Bug Fixes ~~~~~~~~~ - Fix single-component composite packing - Avoid cyclic imports during installation in setup.py Other ~~~~~ - Travis CI integration Changes in Version 1.7.1 ------------------------ This release has few changes, and should make for a smooth upgrade from 1.7.0. Features ~~~~~~~~ - Add support for DecimalType: :class:`~.types.DecimalType` Bug Fixes ~~~~~~~~~ - Fix bad slice ends when using :meth:`~.ColumnFamily.xget()` with composite columns and a `column_finish` parameter - Fix bad documentation paths in debian packaging scripts Other ~~~~~ - Add ``__version__`` and ``__version_info__`` attributes to the :mod:`pycassa` module Changes in Version 1.7.0 ------------------------ This release has a few relatively large changes in it: a new connection pool stats collector, compatibility with Cassandra 0.7 through 1.1, and a change in timezone behavior for datetimes. Before upgrading, take special care to make sure datetimes that you pass to pycassa (for TimeUUIDType or DateType data) are in UTC, and make sure your code expects to get UTC datetimes back in return. Likewise, the SystemManager changes *should* be backwards compatible, but there may be minor differences, mostly in :meth:`~.SystemManager.create_column_family` and :meth:`~.SystemManager.alter_column_family`. Be sure to test any code that works programmatically with these. Features ~~~~~~~~ - Added :class:`~.StatsLogger` for tracking :class:`~.ConnectionPool` metrics - Full Cassandra 1.1 compatibility in :class:`.SystemManager`. To support this, all column family or keyspace attributes that have existed since Cassandra 0.7 may be used as keyword arguments for :meth:`~.SystemManager.create_column_family` and :meth:`~.SystemManager.alter_column_family`. It is up to the user to know which attributes are available and valid for their version of Cassandra. As part of this change, the version-specific thrift-generated cassandra modules (``pycassa.cassandra.c07``, ``pycassa.cassandra.c08``, and ``pycassa.cassandra.c10``) have been replaced by ``pycassa.cassandra``. A minor related change is that individual connections now now longer ask for the node's API version, and that information is no longer stored as an attribute of the :class:`.ConnectionWrapper`. Bug Fixes ~~~~~~~~~ - Fix :meth:`~.ColumnFamily.xget()` paging for non-string comparators - Add :meth:`~.ColumnFamilyMap.batch_insert()` to :class:`.ColumnFamilyMap` - Use `setattr` instead of directly updating the object's ``__dict__`` in :class:`.ColumnFamilyMap` to avoid breaking descriptors - Fix single-column counter increments with :meth:`.ColumnFamily.insert()` - Include `AuthenticationException` and `AuthorizationException` in the ``pycassa`` module - Support counters in :meth:`~.ColumnFamily.xget()` - Sort column families in pycassaShell for display - Raise ``TypeError`` when bad keyword arguments are used when creating a :class:`.ColumnFamily` object Other ~~~~~ All ``datetime`` objects create by pycassa now use UTC as their timezone rather than the local timezone. Likewise, naive ``datetime`` objects that are passed to pycassa are now assumed to be in UTC time, but ``tz_info`` is respected if set. Specifically, the types of data that you may need to make adjustments for when upgrading are TimeUUIDType and DateType (including OldPycassaDateType and IntermediateDateType). Changes in Version 1.6.0 ------------------------ This release adds a few minor features and several important bug fixes. The most important change to take note of if you are using composite comparators is the change to the default inclusive/exclusive behavior for slice ends. Other than that, this should be a smooth upgrade from 1.5.x. Features ~~~~~~~~ - New script for easily building RPM packages - Add request and parameter information to PoolListener callback - Add :meth:`.ColumnFamily.xget()`, a generator version of :meth:`~.ColumnFamily.get()` that automatically pages over columns in reasonably sized chunks - Add support for Int32Type, a 4-byte signed integer format - Add constants for the highest and lowest possible TimeUUID values to :mod:`pycassa.util` Bug Fixes ~~~~~~~~~ - Various 2.4 syntax errors - Raise :exc:`~.AllServersUnavailable` if ``server_list`` is empty - Handle custom types inside of composites - Don't erase ``comment`` when updating column families - Match Cassandra's sorting of TimeUUIDType values when the timestamps tie. This could result in some columns being erroneously left off of the end of column slices when datetime objects or timestamps were used for ``column_start`` or ``column_finish`` - Use gevent's queue in place of the stdlib version when gevent monkeypatching has been applied - Avoid sub-microsecond loss of precision with TimeUUID timestamps when using :func:`pycassa.util.convert_time_to_uuid` - Make default slice ends inclusive when using ``CompositeType`` comparator Previously, the end of the slice was exclusive by default (as was the start of the slice when ``column_reversed`` was ``True``) Changes in Version 1.5.1 ------------------------ This release only affects those of you using DateType data, which has been supported since pycassa 1.2.0. If you are using DateType, it is **very** important that you read this closely. DateType data is internally stored as an 8 byte integer timestamp. Since version 1.2.0 of pycassa, the timestamp stored has counted the number of *microseconds* since the unix epoch. The actual format that Cassandra standardizes on is *milliseconds* since the epoch. If you are only using pycassa, you probably won't have noticed any problems with this. However, if you try to use cassandra-cli, sstable2json, Hector, or any other client that supports DateType, DateType data written by pycassa will appear to be far in the future. Similarly, DateType data written by other clients will appear to be in the past when loaded by pycassa. This release changes the default DateType behavior to comply with the standard, millisecond-based format. **If you use DateType, and you upgrade to this release without making any modifications, you will have problems.** Unfortunately, this is a bit of a tricky situation to resolve, but the appropriate actions to take are detailed below. To temporarily continue using the old behavior, a new class has been created: :class:`pycassa.types.OldPycassaDateType`. This will read and write DateType data exactly the same as pycassa 1.2.0 to 1.5.0 did. If you want to convert your data to the new format, the other new class, :class:`pycassa.types.IntermediateDateType`, may be useful. It can read either the new or old format correctly (unless you have used dates close to 1970 with the new format) and will write only the new format. The best case for using this is if you have DateType validated columns that don't have a secondary index on them. To tell pycassa to use :class:`~.types.OldPycassaDateType` or :class:`~.types.IntermediateDateType`, use the :class:`~.ColumnFamily` attributes that control types: :attr:`~.ColumnFamily.column_name_class`, :attr:`~.ColumnFamily.key_validation_class`, :attr:`~.ColumnFamily.column_validators`, and so on. Here's an example: .. code-block:: python from pycassa.types import OldPycassaDateType, IntermediateDateType from pycassa.column_family import ColumnFamily from pycassa.pool import ConnectionPool pool = ConnectionPool('MyKeyspace', ['192.168.1.1']) # Our tweet timeline has a comparator_type of DateType tweet_timeline_cf = ColumnFamily(pool, 'tweets') tweet_timeline_cf.column_name_class = OldPycassaDateType() # Our tweet timeline has a comparator_type of DateType users_cf = ColumnFamily(pool, 'users') users_cf.column_validators['join_date'] = IntermediateDateType() If you're using DateType for the `key_validation_class`, column names, column values with a secondary index on them, or are using the DateType validated column as a non-indexed part of an index clause with `get_indexed_slices()` (eg. "where state = 'TX' and join_date > 2012"), you need to be more careful about the conversion process, and :class:`~.types.IntermediateDateType` probably isn't a good choice. In most of cases, if you want to switch to the new date format, a manual migration script to convert all existing DateType data to the new format will be needed. In particular, if you convert keys, column names, or indexed columns on a live data set, be very careful how you go about it. If you need any assistance or suggestions at all with migrating your data, please feel free to send an email to tyler@datastax.com; I would be glad to help. Changes in Version 1.5.0 ------------------------ The main change to be aware of for this release is the new no-retry behavior for counter operations. If you have been maintaining a separate connection pool with retries disabled for usage with counters, you may discontinue that practice after upgrading. Features ~~~~~~~~ - By default, counter operations will not be retried automatically. This makes it easier to use a single connection pool without worrying about overcounting. Bug Fixes ~~~~~~~~~ - Don't remove entire row when an empty list is supplied for the `columns` parameter of :meth:`~ColumnFamily.remove()` or the batch remove methods. - Add python-setuptools to debian build dependencies - Batch :meth:`~.Mutator.remove()` was not removing subcolumns when the specified supercolumn was 0 or other "falsey" values - Don't request an extra row when reading fewer than `buffer_size` rows with :meth:`~.ColumnFamily.get_range()` or :meth:`~.ColumnFamily.get_indexed_slices()`. - Remove `pool_type` from logs, which showed up as ``None`` in recent versions - Logs were erroneously showing the same server for retries of failed operations even when the actual server being queried had changed Changes in Version 1.4.0 ------------------------ This release is primarily a bugfix release with a couple of minor features and removed deprecated items. Features ~~~~~~~~ - Accept column_validation_classes when creating or altering column families with SystemManager - Ignore UNREACHABLE nodes when waiting for schema version agreement Bug Fixes ~~~~~~~~~ - Remove accidental print statement in SystemManager - Raise TypeError when unexpected types are used for comparator or validator types when creating or altering a Column Family - Fix packing of column values using column-specific validators during batch inserts when the column name is changed by packing - Always return timestamps from inserts - Fix NameError when timestamps are used where a DateType is expected - Fix NameError in python 2.4 when unpacking DateType objects - Handle reading composites with trailing components missing - Upgrade ez_setup.py to fix broken setuptools link Removed Deprecated Items ~~~~~~~~~~~~~~~~~~~~~~~~ - :meth:`pycassa.connect()` - :meth:`pycassa.connect_thread_local()` - :meth:`.ConnectionPool.status()` - :meth:`.ConnectionPool.recreate()` Changes in Version 1.3.0 ------------------------ This release adds full compatibility with Cassandra 1.0 and removes support for schema manipulation in Cassandra 0.7. In this release, schema manipulation should work with Cassandra 0.8 and 1.0, but not 0.7. The data API should continue to work with all three versions. Bug Fixes ~~~~~~~~~ - Don't ignore `columns` parameter in :meth:`.ColumnFamilyMap.insert()` - Handle empty instance fields in :meth:`.ColumnFamilyMap.insert()` - Use the same default for `timeout` in :meth:`pycassa.connect()` as :class:`~.ConnectionPool` uses - Fix typo which caused a different exception to be thrown when an :exc:`.AllServersUnavailable` exception was raised - IPython 0.11 compatibility in pycassaShell - Correct dependency declaration in :file:`setup.py` - Add UUIDType to supported types Features ~~~~~~~~ - The `filter_empty` parameter was added to :meth:`~.ColumnFamily.get_range()` with a default of ``True``; this allows empty rows to be kept if desired Deprecated ~~~~~~~~~~ - :meth:`pycassa.connect()` - :meth:`pycassa.connect_thread_local()` Changes in Version 1.2.1 ------------------------ This is strictly a bug-fix release addressing a few issues created in 1.2.0. Bug Fixes ~~~~~~~~~ - Correctly check for Counters in :class:`.ColumnFamily` when setting `default_validation_class` - Pass kwargs in :class:`.ColumnFamilyMap` to :class:`.ColumnFamily` - Avoid potential UnboundLocal in :meth:`.ConnectionPool.execute` when :meth:`~.ConnectionPool.get` fails - Fix ez_setup dependency/bundling so that package installations using easy_install or pip don't fail without ez_setup installed Changes in Version 1.2.0 ------------------------ This should be a fairly smooth upgrade from pycassa 1.1. The primary changes that may introduce minor incompatibilities are the changes to :class:`.ColumnFamilyMap` and the automatic skipping of "ghost ranges" in :meth:`.ColumnFamily.get_range()`. Features ~~~~~~~~ - Add :meth:`.ConnectionPool.fill()` - Add :class:`~.FloatType`, :class:`~.DoubleType`, :class:`~.DateType`, and :class:`~.BooleanType` support. - Add :class:`~.CompositeType` support for static composites. See :ref:`composite-types` for more details. - Add `timestamp`, `ttl` to :meth:`.ColumnFamilyMap.insert()` params - Support variable-length integers with :class:`~.IntegerType`. This allows more space-efficient small integers as well as integers that exceed the size of a long. - Make :class:`~.ColumnFamilyMap` a subclass of :class:`~.ColumnFamily` instead of using one as a component. This allows all of the normal adjustments normally done to a :class:`~.ColumnFamily` to be done to a :class:`~.ColumnFamilyMap` instead. See :ref:`column-family-map` for examples of using the new version. - Expose the following :class:`~.ConnectionPool` attributes, allowing them to be altered after creation: :attr:`~.ConnectionPool.max_overflow`, :attr:`~.ConnectionPool.pool_timeout`, :attr:`~.ConnectionPool.recycle`, :attr:`~.ConnectionPool.max_retries`, and :attr:`~.ConnectionPool.logging_name`. Previously, these were all supplied as constructor arguments. Now, the preferred way to set them is to alter the attributes after creation. (However, they may still be set in the constructor by using keyword arguments.) - Automatically skip "ghost ranges" in :meth:`ColumnFamily.get_range()`. Rows without any columns will not be returned by the generator, and these rows will not count towards the supplied `row_count`. Bug Fixes ~~~~~~~~~ - Add connections to :class:`~.ConnectionPool` more readily when `prefill` is ``False``. Before this change, if the ConnectionPool was created with ``prefill=False``, connections would only be added to the pool when there was concurrent demand for connections. After this change, if ``prefill=False`` and ``pool_size=N``, the first `N` operations will each result in a new connection being added to the pool. - Close connection and adjust the :class:`~.ConnectionPool`'s connection count after a :exc:`.TApplicationException`. This exception generally indicates programmer error, so it's not extremely common. - Handle typed keys that evaluate to ``False`` Deprecated ~~~~~~~~~~ - :meth:`.ConnectionPool.recreate()` - :meth:`.ConnectionPool.status()` Miscellaneous ~~~~~~~~~~~~~ - Better failure messages for :class:`~.ConnectionPool` failures - More efficient packing and unpacking - More efficient multi-column inserts in :meth:`.ColumnFamily.insert()` and :meth:`.ColumnFamily.batch_insert()` - Prefer Python 2.7's :class:`collections.OrderedDict` over the bundled version when available Changes in Version 1.1.1 ------------------------ Features ~~~~~~~~ - Add ``max_count`` and ``column_reversed`` params to :meth:`~.ColumnFamily.get_count()` - Add ``max_count`` and ``column_reversed`` params to :meth:`~.ColumnFamily.multiget_count()` Bug Fixes ~~~~~~~~~ - Don't retry operations after a ``TApplicationException``. This exception is reserved for programmatic errors (such as a bad API parameters), so retries are not needed. - If the read_consistency_level kwarg was used in a :class:`~.ColumnFamily` constructor, it would be ignored, resulting in a default read consistency level of :const:`ONE`. This did not affect the read consistency level if it was specified in any other way, including per-method or by setting the :attr:`~.ColumnFamily.read_consistency_level` attribute. Changes in Version 1.1.0 ------------------------ This release adds compatibility with Cassandra 0.8, including support for counters and key_validation_class. This release is backwards-compatible with Cassandra 0.7, and can support running against a mixed cluster of both Cassandra 0.7 and 0.8. Changes related to Cassandra 0.8 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Addition of :data:`~.system_manager.COUNTER_COLUMN_TYPE` to :mod:`~.system_manager`. - Several new column family attributes, including ``key_validation_class``, ``replicate_on_write``, ``merge_shards_chance``, ``row_cache_provider``, and ``key_alias``. - The new :meth:`.ColumnFamily.add()` and :meth:`.ColumnFamily.remove_counter()` methods. - Support for counters in :mod:`pycassa.batch` and :meth:`.ColumnFamily.batch_insert()`. - Autopacking of keys based on ``key_validation_class``. Other Features ~~~~~~~~~~~~~~ - :meth:`.ColumnFamily.multiget()` now has a `buffer_size` parameter - :meth:`.ColumnFamily.multiget_count()` now returns rows in the order that the keys were passed in, similar to how :meth:`~.ColumnFamily.multiget()` behaves. It also uses the :attr:`~.ColumnFamily.dict_class` attribute for the containing class instead of always using a :class:`dict`. - Autpacking behavior is now more transparent and configurable, allowing the user to get functionality similar to the CLI's ``assume`` command, whereby items are packed and unpacked as though they were a certain data type, even if Cassandra does not use a matching comparator type or validation class. This behavior can be controlled through the following attributes: - :attr:`.ColumnFamily.column_name_class` - :attr:`.ColumnFamily.super_column_name_class` - :attr:`.ColumnFamily.key_validation_class` - :attr:`.ColumnFamily.default_validation_class` - :attr:`.ColumnFamily.column_validators` - A :class:`.ColumnFamily` may reload its schema to handle changes in validation classes with :meth:`.ColumnFamily.load_schema()`. Bug Fixes ~~~~~~~~~ There were several related issues with overlow in :class:`.ConnectionPool`: - Connection failures when a :class:`.ConnectionPool` was in a state of overflow would not result in adjustment of the overflow counter, eventually leading the :class:`.ConnectionPool` to refuse to create new connections. - Settings of -1 for :attr:`.ConnectionPool.overflow` erroneously caused overflow to be disabled. - If overflow was enabled in conjunction with `prefill` being disabled, the effective overflow limit was raised to ``max_overflow + pool_size``. Other ~~~~~ - Overflow is now disabled by default in :class:`.ConnectionPool`. - :class:`.ColumnFamilyMap` now sets the underlying :class:`.ColumnFamily`'s :attr:`~.ColumnFamily.autopack_names` and :attr:`~.ColumnFamily.autopack_values` attributes to ``False`` upon construction. - Documentation and tests will no longer be included in the packaged tarballs. Removed Deprecated Items ~~~~~~~~~~~~~~~~~~~~~~~~ The following deprecated items have been removed: - :meth:`.ColumnFamilyMap.get_count()` - The `instance` parameter from :meth:`.ColumnFamilyMap.get_indexed_slices()` - The :class:`~.types.Int64` Column type. - :meth:`.SystemManager.get_keyspace_description()` Deprecated ~~~~~~~~~~ Athough not technically deprecated, most :class:`.ColumnFamily` constructor arguments should instead be set by setting the corresponding attribute on the :class:`.ColumnFamily` after construction. However, all previous constructor arguments will continue to be supported if passed as keyword arguments. Changes in Version 1.0.8 ------------------------ - Pack :class:`.IndexExpression` values in :meth:`~.ColumnFamilyMap.get_indexed_slices()` that are supplied through the :class:`.IndexClause` instead of just the `instance` parameter. - Column names and values which use Cassandra's IntegerType are unpacked as though they are in a BigInteger-like format. This is (backwards) compatible with the format that pycassa uses to pack IntegerType data. This fixes an incompatibility with the format that cassandra-cli and other clients use to pack IntegerType data. - Restore Python 2.5 compatibility that was broken through out of order keyword arguments in :class:`.ConnectionWrapper`. - Pack `column_start` and `column_finish` arguments in :class:`.ColumnFamily` ``*get*()`` methods when the `super_column` parameter is used. - Issue a :class:`DeprecationWarning` when a method, parameter, or class that has been deprecated is used. Most of these have been deprecated for several releases, but no warnings were issued until now. - Deprecations are now split into separate sections for each release in the changelog. Deprecated ~~~~~~~~~~ - The `instance` parameter of :meth:`ColumnFamilyMap.get_indexed_slices()` Changes in Version 1.0.7 ------------------------ - Catch KeyError in :meth:`pycassa.columnfamily.ColumnFamily.multiget()` empty row removal. If the same non-existent key was passed multiple times, a :exc:`KeyError` was raised when trying to remove it from the OrderedDictionary after the first removal. The :exc:`KeyError` is caught and ignored now. - Handle connection failures during retries. When a connection fails, it tries to create a new connection to replace itself. Exceptions during this process were not properly handled; they are now handled and count towards the retry count for the current operation. - Close connection when a :exc:`MaximumRetryException` is raised. Normally a connection is closed when an operation it is performing fails, but this was not happening for the final failure that triggers the :exc:`MaximumRetryException`. Changes in Version 1.0.6 ------------------------ - Add :exc:`EOFError` to the list of exceptions that cause a connection swap and retry - Improved autopacking efficiency for AsciiType, UTF8Type, and BytesType - Preserve sub-second timestamp precision in datetime arguments for insertion or slice bounds where a TimeUUID is expected. Previously, precision below a second was lost. - In a :exc:`MaximumRetryException`'s message, include details about the last :exc:`Exception` that caused the :exc:`MaximumRetryException` to be raised - :meth:`pycassa.pool.ConnectionPool.status()` now always reports a non-negative overflow; 0 is now used when there is not currently any overflow - Created :class:`pycassa.types.Long` as a replacement for :class:`pycassa.types.Int64`. :class:`Long` uses big-endian encoding, which is compatible with Cassandra's LongType, while :class:`Int64` used little-endian encoding. Deprecated ~~~~~~~~~~ - :class:`pycassa.types.Int64` has been deprecated in favor of :class:`pycassa.types.Long` Changes in Version 1.0.5 ------------------------ - Assume port 9160 if only a hostname is given - Remove super_column param from :meth:`pycassa.columnfamily.ColumnFamily.get_indexed_slices()` - Enable failover on functions that previously lacked it - Increase base backoff time to 0.01 seconds - Add a timeout paremeter to :class:`pycassa.system_manager.SystemManger` - Return timestamp on single-column inserts Changes in Version 1.0.4 ------------------------ - Fixed threadlocal issues that broke multithreading - Fix bug in :meth:`pycassa.columnfamily.ColumnFamily.remove()` when a super_column argument is supplied - Fix minor PoolLogger logging bugs - Added :meth:`pycassa.system_manager.SystemManager.describe_partitioner()` - Added :meth:`pycassa.system_manager.SystemManager.describe_snitch()` - Added :meth:`pycassa.system_manager.SystemManager.get_keyspace_properties()` - Moved :meth:`pycassa.system_manager.SystemManager.describe_keyspace()` and :meth:`pycassa.system_manager.SystemManager.describe_column_family()` to pycassaShell describe_keyspace() and describe_column_family() Deprecated ~~~~~~~~~~ - Renamed :meth:`pycassa.system_manager.SystemManager.get_keyspace_description()` to :meth:`pycassa.system_manager.SystemManager.get_keyspace_column_families()` and deprecated the previous name Changes in Version 1.0.3 ------------------------ - Fixed supercolumn slice bug in get() - pycassaShell now runs scripts with execfile to allow for multiline statements - 2.4 compatability fixes Changes in Version 1.0.2 ------------------------ - Failover handles a greater set of potential failures - pycassaShell now loads/reloads :class:`pycassa.columnfamily.ColumnFamily` instances when the underlying column family is created or updated - Added an option to pycassaShell to run a script after startup - Added :meth:`pycassa.system_manager.SystemManager.list_keyspaces()` Changes in Version 1.0.1 ------------------------ - Allow pycassaShell to be run without specifying a keyspace - Added :meth:`pycassa.system_manager.SystemManager.describe_schema_versions()` Changes in Version 1.0.0 ------------------------ - Created the :class:`~pycassa.system_manager.SystemManager` class to allow for keyspace, column family, and index creation, modification, and deletion. These operations are no longer provided by a Connection class. - Updated pycassaShell to use the SystemManager class - Improved retry behavior, including exponential backoff and proper resetting of the retry attempt counter - Condensed connection pooling classes into only :class:`pycassa.pool.ConnectionPool` to provide a simpler API - Changed :meth:`pycassa.connection.connect()` to return a connection pool - Use more performant Thrift API methods for :meth:`insert()` and :meth:`get()` where possible - Bundled :class:`~pycassa.util.OrderedDict` and set it as the default dictionary class for column families - Provide better :exc:`TypeError` feedback when columns are the wrong type - Use Thrift API 19.4.0 Deprecated ~~~~~~~~~~ - :meth:`ColumnFamilyMap.get_count()` has been deprecated. Use :meth:`ColumnFamily.get_count()` instead. Changes in Version 0.5.4 ------------------------ - Allow for more backward and forward compatibility - Mark a server as being down more quickly in :class:`~pycassa.connection.Connection` Changes in Version 0.5.3 ------------------------ - Added :class:`~pycassa.columnfamily.PooledColumnFamily`, which makes it easy to use connection pooling automatically with a ColumnFamily. Changes in Version 0.5.2 ------------------------ - Support for adding/updating/dropping Keyspaces and CFs in :class:`pycassa.connection.Connection` - :meth:`~pycassa.columnfamily.ColumnFamily.get_range()` optimization and more configurable batch size - batch :meth:`~pycassa.columnfamily.ColumnFamily.get_indexed_slices()` similar to :meth:`.ColumnFamily.get_range()` - Reorganized pycassa logging - More efficient packing of data types - Fix error condition that results in infinite recursion - Limit pooling retries to only appropriate exceptions - Use Thrift API 19.3.0 Changes in Version 0.5.1 ------------------------ - Automatically detect if a column family is a standard column family or a super column family - :meth:`~pycassa.columnfamily.ColumnFamily.multiget_count()` support - Allow preservation of key order in :meth:`~pycassa.columnfamily.ColumnFamily.multiget()` if an ordered dictionary is used - Convert timestamps to v1 UUIDs where appropriate - pycassaShell documentation - Use Thrift API 17.1.0 Changes in Version 0.5.0 ------------------------ - Connection Pooling support: :mod:`pycassa.pool` - Started moving logging to :mod:`pycassa.logger` - Use Thrift API 14.0.0 Changes in Version 0.4.3 ------------------------ - Autopack on CF's default_validation_class - Use Thrift API 13.0.0 Changes in Version 0.4.2 ------------------------ - Added batch mutations interface: :mod:`pycassa.batch` - Made bundled thrift-gen code a subpackage of pycassa - Don't attempt to reencode already encoded UTF8 strings Changes in Version 0.4.1 ------------------------ - Added :meth:`~pycassa.columnfamily.ColumnFamily.batch_insert()` - Redifined :meth:`~pycassa.columnfamily.ColumnFamily.insert()` in terms of :meth:`~pycassa.columnfamily.ColumnFamily.batch_insert()` - Fixed UTF8 autopacking - Convert datetime slice args to uuids when appropriate - Changed how thrift-gen code is bundled - Assert that the major version of the thrift API is the same on the client and on the server - Use Thrift API 12.0.0 Changes in Version 0.4.0 ------------------------ - Added pycassaShell, a simple interactive shell - Converted the test config from xml to yaml - Fixed overflow error on :meth:`~pycassa.columnfamily.ColumnFamily.get_count()` - Only insert columns which exist in the model object - Make ColumnFamilyMap not ignore the ColumnFamily's dict_class - Specify keyspace as argument to :meth:`~pycassa.connection.connect()` - Add support for framed transport and default to using it - Added autopacking for column names and values - Added support for secondary indexes with :meth:`~pycassa.columnfamily.ColumnFamily.get_indexed_slices()` and :mod:`pycassa.index` - Added :meth:`~pycassa.columnfamily.ColumnFamily.truncate()` - Use Thrift API 11.0.0 pycassa-1.11.2.1/doc/conf.py000066400000000000000000000114341303744607500154560ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # PyMongo documentation build configuration file # # This file is execfile()d with the current directory set to its containing dir. import sys import os sys.path.append(os.path.abspath('..')) import pycassa # -- General configuration ----------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. project = 'pycassa' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = pycassa.__version__ # The full version, including alpha/beta/rc tags. release = pycassa.__version__ # List of documents that shouldn't be included in the build. unused_docs = [] # List of directories, relative to source directory, that shouldn't be searched # for source files. exclude_trees = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = True # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for extensions ---------------------------------------------------- autoclass_content = 'both' # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. html_theme = 'default' # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. html_favicon = 'favicon.ico' # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. html_show_copyright = False # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'pycassa' + release.replace('.', '_') # -- Options for LaTeX output -------------------------------------------------- # The paper size ('letter' or 'a4'). #latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). #latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'pycassa.tex', 'pycassa Documentation', 'Jonathan Hseu', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # Additional stuff for the LaTeX preamble. #latex_preamble = '' # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_use_modindex = True pycassa-1.11.2.1/doc/development.rst000066400000000000000000000056251303744607500172400ustar00rootroot00000000000000Development =========== New thrift API -------------- pycassa includes Cassandra's Python Thrift API in `pycassa.cassandra`. Since Cassandra 1.1.0, the generated Thrift definitions are fully backwards compatible, allowing you to use attributes that have been deprecated or removed in recent versions of Cassandra. So, even though the code is generated from a Cassandra 1.1.0 definition, you can use the resulting code with 0.7 and still have full access to attributes that were removed after 0.7, such as the memtable flush thresholds. The following explains the procedure using Mac OS Lion as an example. Other Linux and BSD versions should work in similar ways. Of course you need to have a supported Java JDK installed, the Apple provided JDK is fine. This approach doesn't install any of the tools globally but keeps them isolated. As such we are using `virtualenv `. First you need some prerequisites, installed via macports or some other package management system:: sudo port install boost libevent Download and unpack thrift (http://thrift.apache.org/download/):: wget http://apache.osuosl.org/thrift/0.8.0/thrift-0.8.0.tar.gz tar xzf thrift-0.8.0.tar.gz Create a virtualenv and tell thrift to install into it:: cd thrift-0.8.0 virtualenv-2.7 . export PY_PREFIX=$PWD Configure and build thrift with the minimal functionality we need:: ./configure --prefix=$PWD --disable-static --with-boost=/opt/local \ --with-libevent=/opt/local --without-csharp --without-cpp \ --without-java --without-erlang --without-perl --without-php \ --without-php_extension --without-ruby --without-haskell --without-go make make install You can test the successful install:: bin/thrift -version bin/python -c "from thrift.protocol import fastbinary; print(fastbinary)" Next up is Cassandra. Clone the Git repository:: cd .. git clone http://git-wip-us.apache.org/repos/asf/cassandra.git cd cassandra We will build the Thrift API for the 1.1.1 release, so checkout the tag (instructions simplified from build.xml `gen-thrift-py`):: git checkout cassandra-1.1.1 cd interface ../../thrift-0.8.0/bin/thrift --gen py:new_style -o thrift/ \ cassandra.thrift We are only interested in the generated Python modules:: ls thrift/gen-py/cassandra/*.py These should replace the python files in `pycassa/cassandra`, allowing you to use the latest Thrift methods and object definitions, such as CfDef (which controls what attributes you may set when creating or updating a column family). Don't forget to review the documentation. Make sure you run the tests, especially if adjusting the default protocol version or introducing backwards incompatible API changes. References ---------- * http://thrift.apache.org/docs/install/ * http://wiki.apache.org/cassandra/HowToContribute * http://wiki.apache.org/cassandra/InstallThrift pycassa-1.11.2.1/doc/example/000077500000000000000000000000001303744607500156075ustar00rootroot00000000000000pycassa-1.11.2.1/doc/example/index.rst000066400000000000000000000002521303744607500174470ustar00rootroot00000000000000Twissandra Example ================== This example shows you how to work with Twissandra, a Twitter-like example Cassandra application. Setup ----- To be completed... pycassa-1.11.2.1/doc/index.rst000066400000000000000000000054151303744607500160220ustar00rootroot00000000000000pycassa |release| Documentation =============================== pycassa is a Thrift-based Python client for `Apache Cassandra `_. pycassa does not support CQL or Cassandra's native protocol, which are a replacement for the Thrift interface that pycassa is based on. If you are starting a new project, *it is highly recommended that you use the newer* `DataStax python driver `_ instead of pycassa. pycassa is open source under the `MIT license `_. The source code repository for pycassa can be found on `Github `_. Contents -------- :doc:`installation` How to install pycassa. :doc:`tutorial` A short overview of pycassa usage. :doc:`api/index` The pycassa API documentation. :doc:`Assorted Functionality ` How to work with various Cassandra and pycassa features. :doc:`using_with/index` How to use pycassa with other projects, including eventlet and Celery. :doc:`changelog` The changelog for every version of pycassa. :doc:`development` Notes for developing pycassa itself. Help ------------ Mailing Lists * User list: mail to `pycassa-discuss@googlegroups.com `_ or `view online `_. * Developer list: mail to `pycassa-devel@googlegroups.com `_ or `view online `_. IRC * Use #cassandra on `irc.freenode.net `_. If you don't have an IRC client, you can use `freenode's web based client `_. Issues ------ Bugs and feature requests for pycassa are currently tracked through the `github issue tracker `_. Contributing ------------ You are encouraged to offer any contributions or ideas you have. Contributing to the documentation or examples, reporting bugs, requesting features, and (of course) improving the code are all equally welcome. To contribute, fork the project on `github `_ and make a `pull request `_. About This Documentation ------------------------ This documentation is generated using the `Sphinx `_ documentation generator. The source files for the documentation are located in the *doc/* directory of pycassa. To generate the documentation, run the following command from the root directory of pycassa: .. code-block:: bash $ python setup.py doc .. toctree:: :hidden: installation tutorial example/index api/index changelog assorted/index using_with/index development pycassa-1.11.2.1/doc/installation.rst000066400000000000000000000015051303744607500174100ustar00rootroot00000000000000.. _installing: Installing ========== Requirements ------------ You need to have either Python 2.6 or 2.7 installed. Installing from PyPi -------------------- If you have :file:`pip` installed, you can simply do: .. code-block:: bash $ pip install pycassa This will also install the Thrift python bindings automatically. Manual Installation ------------------- Make sure that you have Thrift's python bindings installed: .. code-block:: bash $ pip install thrift You can download a release from `github `_ or check out the latest source from github:: $ git clone git://github.com/pycassa/pycassa.git You can simply copy the pycassa directory into your project, or you can install pycassa system-wide: .. code-block:: bash $ cd pycassa/ $ sudo python setup.py install pycassa-1.11.2.1/doc/tutorial.rst000066400000000000000000000311271303744607500165550ustar00rootroot00000000000000Tutorial ======== This tutorial is intended as an introduction to working with Cassandra and **pycassa**. .. toctree:: :maxdepth: 2 Prerequisites ------------- Before we start, make sure that you have **pycassa** :doc:`installed `. In the Python shell, the following should run without raising an exception: .. code-block:: python >>> import pycassa This tutorial also assumes that a Cassandra instance is running on the default host and port. Read the `instructions for getting started with Cassandra `_ if you need help with this. You can start Cassandra like so: .. code-block:: bash $ pwd ~/cassandra $ bin/cassandra -f Creating a Keyspace and Column Families --------------------------------------- We need to create a keyspace and some column families to work with. There are two good ways to do this: using cassandra-cli, or using pycassaShell. Both are documented below. Using cassandra-cli ^^^^^^^^^^^^^^^^^^^ The cassandra-cli utility is included with Cassandra. It allows you to create and modify the schema, explore or modify data, and examine a few things about your cluster. Here's how to create the keyspace and column family we need for this tutorial: .. code-block:: none user@~ $ cassandra-cli Welcome to cassandra CLI. Type 'help;' or '?' for help. Type 'quit;' or 'exit;' to quit. [default@unknown] connect localhost/9160; Connected to: "Test Cluster" on localhost/9160 [default@unknown] create keyspace Keyspace1; 4f9e42c4-645e-11e0-ad9e-e700f669bcfc Waiting for schema agreement... ... schemas agree across the cluster [default@unknown] use Keyspace1; Authenticated to keyspace: Keyspace1 [default@Keyspace1] create column family ColumnFamily1; 632cf985-645e-11e0-ad9e-e700f669bcfc Waiting for schema agreement... ... schemas agree across the cluster [default@Keyspace1] quit; user@~ $ This connects to a local instance of Cassandra and creates a keyspace named 'Keyspace1' with a column family named 'ColumnFamily1'. You can find further `documentation for the CLI online `_. Using pycassaShell ^^^^^^^^^^^^^^^^^^ :ref:`pycassa-shell` is an interactive Python shell that is included with **pycassa**. Upon starting, it sets up many of the objects that you typically work with when using **pycassa**. It provides most of the functionality that cassandra-cli does, but also gives you a full Python environment to work with. Here's how to create the keyspace and column family: .. code-block:: none user@~ $ pycassaShell ---------------------------------- Cassandra Interactive Python Shell ---------------------------------- Keyspace: None Host: localhost:9160 ColumnFamily instances are only available if a keyspace is specified with -k/--keyspace Schema definition tools and cluster information are available through SYSTEM_MANAGER. .. code-block:: python >>> SYSTEM_MANAGER.create_keyspace('Keyspace1', strategy_options={"replication_factor": "1"}) >>> SYSTEM_MANAGER.create_column_family('Keyspace1', 'ColumnFamily1') Connecting to Cassandra ----------------------- The first step when working with **pycassa** is to connect to the running cassandra instance: .. code-block:: python >>> from pycassa.pool import ConnectionPool >>> pool = ConnectionPool('Keyspace1') The above code will connect by default to ``localhost:9160``. We can also specify the host (or hosts) and port explicitly as follows: .. code-block:: python >>> pool = ConnectionPool('Keyspace1', ['localhost:9160']) This creates a small connection pool for use with a :class:`~pycassa.columnfamily.ColumnFamily` . See `Connection Pooling`_ for more details. Getting a ColumnFamily ---------------------- A column family is a collection of rows and columns in Cassandra, and can be thought of as roughly the equivalent of a table in a relational database. We'll use one of the column families that are included in the default schema file: .. code-block:: python >>> from pycassa.pool import ConnectionPool >>> from pycassa.columnfamily import ColumnFamily >>> >>> pool = ConnectionPool('Keyspace1') >>> col_fam = ColumnFamily(pool, 'ColumnFamily1') If you get an error about the keyspace or column family not existing, make sure you created the keyspace and column family as shown above. Inserting Data -------------- To insert a row into a column family we can use the :meth:`~pycassa.columnfamily.ColumnFamily.insert` method: .. code-block:: python >>> col_fam.insert('row_key', {'col_name': 'col_val'}) 1354459123410932 We can also insert more than one column at a time: .. code-block:: python >>> col_fam.insert('row_key', {'col_name':'col_val', 'col_name2':'col_val2'}) 1354459123410932 And we can insert more than one row at a time: .. code-block:: python >>> col_fam.batch_insert({'row1': {'name1': 'val1', 'name2': 'val2'}, ... 'row2': {'foo': 'bar'}}) 1354491238721387 Getting Data ------------ There are many more ways to get data out of Cassandra than there are to insert data. The simplest way to get data is to use :meth:`~pycassa.columnfamily.ColumnFamily.get()`: .. code-block:: python >>> col_fam.get('row_key') {'col_name': 'col_val', 'col_name2': 'col_val2'} Without any other arguments, :meth:`~pycassa.columnfamily.ColumnFamily.get()` returns every column in the row (up to `column_count`, which defaults to 100). If you only want a few of the columns and you know them by name, you can specify them using a `columns` argument: .. code-block:: python >>> col_fam.get('row_key', columns=['col_name', 'col_name2']) {'col_name': 'col_val', 'col_name2': 'col_val2'} We may also get a slice (or subrange) of the columns in a row. To do this, use the `column_start` and `column_finish` parameters. One or both of these may be left empty to allow the slice to extend to one or both ends. Note that `column_finish` is inclusive. .. code-block:: python >>> for i in range(1, 10): ... col_fam.insert('row_key', {str(i): 'val'}) ... 1302542571215334 1302542571218485 1302542571220599 1302542571221991 1302542571223388 1302542571224629 1302542571225859 1302542571227029 1302542571228472 >>> col_fam.get('row_key', column_start='5', column_finish='7') {'5': 'val', '6': 'val', '7': 'val'} Sometimes you want to get columns in reverse sorted order. A common example of this is getting the last N columns from a row that represents a timeline. To do this, set `column_reversed` to ``True``. If you think of the columns as being sorted from left to right, when `column_reversed` is ``True``, `column_start` will determine the right end of the range while `column_finish` will determine the left. Here's an example of getting the last three columns in a row: .. code-block:: python >>> col_fam.get('row_key', column_reversed=True, column_count=3) {'9': 'val', '8': 'val', '7': 'val'} There are a few ways to get multiple rows at the same time. The first is to specify them by name using :meth:`~pycassa.columnfamily.ColumnFamily.multiget()`: .. code-block:: python >>> col_fam.multiget(['row1', 'row2']) {'row1': {'name1': 'val1', 'name2': 'val2'}, 'row_key2': {'foo': 'bar'}} Another way is to get a range of keys at once by using :meth:`~pycassa.columnfamily.ColumnFamily.get_range()`. The parameter `finish` is also inclusive here, too. Assuming we've inserted some rows with keys 'row_key1' through 'row_key9', we can do this: .. code-block:: python >>> result = col_fam.get_range(start='row_key5', finish='row_key7') >>> for key, columns in result: ... print key, '=>', columns ... 'row_key5' => {'name':'val'} 'row_key6' => {'name':'val'} 'row_key7' => {'name':'val'} .. note:: Cassandra must be using an OrderPreservingPartitioner for you to be able to get a meaningful range of rows; the default, RandomPartitioner, stores rows in the order of the MD5 hash of their keys. See http://www.datastax.com/docs/1.1/cluster_architecture/partitioning. The last way to get multiple rows at a time is to take advantage of secondary indexes by using :meth:`~pycassa.columnfamily.ColumnFamily.get_indexed_slices()`, which is described in the :ref:`secondary-indexes` section. It's also possible to specify a set of columns or a slice for :meth:`~pycassa.columnfamily.ColumnFamily.multiget()` and :meth:`~pycassa.columnfamily.ColumnFamily.get_range()` just like we did for :meth:`~pycassa.columnfamily.ColumnFamily.get()`. Counting -------- If you just want to know how many columns are in a row, you can use :meth:`~pycassa.columnfamily.ColumnFamily.get_count()`: .. code-block:: python >>> col_fam.get_count('row_key') 3 If you only want to get a count of the number of columns that are inside of a slice or have particular names, you can do that as well: .. code-block:: python >>> col_fam.get_count('row_key', columns=['foo', 'bar']) 2 >>> col_fam.get_count('row_key', column_start='foo') 3 You can also do this in parallel for multiple rows using :meth:`~pycassa.columnfamily.ColumnFamily.multiget_count()`: .. code-block:: python >>> col_fam.multiget_count(['fib0', 'fib1', 'fib2', 'fib3', 'fib4']) {'fib0': 1, 'fib1': 1, 'fib2': 2, 'fib3': 3, 'fib4': 5'} .. code-block:: python >>> col_fam.multiget_count(['fib0', 'fib1', 'fib2', 'fib3', 'fib4'], ... columns=['col1', 'col2', 'col3']) {'fib0': 1, 'fib1': 1, 'fib2': 2, 'fib3': 3, 'fib4': 3'} .. code-block:: python >>> col_fam.multiget_count(['fib0', 'fib1', 'fib2', 'fib3', 'fib4'], ... column_start='col1', column_finish='col3') {'fib0': 1, 'fib1': 1, 'fib2': 2, 'fib3': 3, 'fib4': 3'} Typed Column Names and Values ----------------------------- Within a column family, column names have a specified `comparator type` which controls how they are sorted. Column values and row keys may also have a `validation class`, which validates that inserted values are the correct type. The different types available include ASCII strings, integers, dates, UTF8, raw bytes, UUIDs, and more. See :mod:`pycassa.types` for a full list. Cassandra requires you to pack column names and values into a format it can understand by using something like :meth:`struct.pack()`. Fortunately, when **pycassa** sees that a column family has a particular comparator type or validation class, it knows to pack and unpack these data types automatically for you. So, if we want to write to the StandardInt column family, which has an IntegerType comparator, we can do the following: .. code-block:: python >>> col_fam = pycassa.ColumnFamily(pool, 'StandardInt') >>> col_fam.insert('row_key', {42: 'some_val'}) 1354491238721387 >>> col_fam.get('row_key') {42: 'some_val'} Notice that 42 is an integer here, not a string. As mentioned above, Cassandra also offers validators on column values and keys with the same set of types. Column value validators can be set for an entire column family, for individual columns, or both. **pycassa** knows to pack these column values automatically too. Suppose we have a `Users` column family with two columns, ``name`` and ``age``, with types UTF8Type and IntegerType: .. code-block:: python >>> col_fam = pycassa.ColumnFamily(pool, 'Users') >>> col_fam.insert('thobbs', {'name': 'Tyler', 'age': 24}) 1354491238782746 >>> col_fam.get('thobbs') {'name': 'Tyler', 'age': 24} Of course, if **pycassa**'s automatic behavior isn't working for you, you can turn it off or change it using :attr:`~.ColumnFamily.autopack_names`, :attr:`~.ColumnFamily.autopack_values`, :attr:`~.ColumnFamily.column_name_class`, :attr:`~.ColumnFamily.default_validation_class`, and so on. Connection Pooling ------------------ Pycassa uses connection pools to maintain connections to Cassandra servers. The :class:`~pycassa.pool.ConnectionPool` class is used to create the connection pool. After creating the pool, it may be used to create multiple :class:`~pycassa.columnfamily.ColumnFamily` objects. .. code-block:: python >>> pool = pycassa.ConnectionPool('Keyspace1', pool_size=20) >>> standard_cf = pycassa.ColumnFamily(pool, 'Standard1') >>> standard_cf.insert('key', {'col': 'val'}) 1354491238782746 >>> super_cf = pycassa.ColumnFamily(pool, 'Super1') >>> super_cf.insert('key2', {'column' : {'col': 'val'}}) 1354491239779182 >>> standard_cf.get('key') {'col': 'val'} >>> pool.dispose() Automatic retries (or "failover") happen by default with ConectionPools. This means that if any operation fails, it will be transparently retried on other servers until it succeeds or a maximum number of failures is reached. pycassa-1.11.2.1/doc/using_with/000077500000000000000000000000001303744607500163345ustar00rootroot00000000000000pycassa-1.11.2.1/doc/using_with/celery.rst000066400000000000000000000035211303744607500203520ustar00rootroot00000000000000.. _using_with_celery: Using with Celery ================= `Celery `_ is an asynchronous task queue/job queue based on distributed message passing. Usage in a Worker ----------------- Workers in celery may be created by spawning new processes or threads from the celeryd process. The `multiprocessing `_ module is used to spawn new worker processes, while `eventlet `_ is used to spawn new worker green threads. :mod:`multiprocessing` ^^^^^^^^^^^^^^^^^^^^^^ The :class:`~.ConnectionPool` class is not :mod:`multiprocessing`-safe. Because celery evaluates globals prior to spawning new worker processes, a global :class:`~.ConnectionPool` will be shared among multiple processes. This is inherently unsafe and will result in race conditions. Instead of having celery spawn multiple child processes, it is recommended that you set `CELERYD_CONCURRENCY `_ to 1 and start multiple separate celery processes. The process argument ``--pool=solo`` may also be used when starting the celery processes. .. seealso:: :ref:`using_with_multiprocessing` :mod:`eventlet` ^^^^^^^^^^^^^^^ Because the :class:`~.ConnectionPool` class uses concurrency primitives from the :mod:`threading` module, you can use :mod:`eventlet` worker threads after `monkey patching `_ the standard library. Specifically, the :mod:`threading` and :mod:`socket` modules must monkey-patched. Be aware that you may need to install `dnspython `_ in order to connect to your nodes. .. seealso:: :ref:`using_with_eventlet` Usage as a Broker Backend ------------------------- pycassa is not currently a broker backend option. pycassa-1.11.2.1/doc/using_with/eventlet.rst000066400000000000000000000010251303744607500207120ustar00rootroot00000000000000.. _using_with_eventlet: Using with Eventlet =================== Because the :class:`~.ConnectionPool` class uses concurrency primitives from the :mod:`threading` module, you can use :mod:`eventlet` green threads after `monkey patching `_ the standard library. Specifically, the :mod:`threading` and :mod:`socket` modules must monkey-patched. Be aware that you may need to install `dnspython `_ in order to connect to your nodes. pycassa-1.11.2.1/doc/using_with/index.rst000066400000000000000000000002021303744607500201670ustar00rootroot00000000000000Using pycassa with Other Tools ============================== .. toctree:: :maxdepth: 2 celery eventlet multiprocessing pycassa-1.11.2.1/doc/using_with/multiprocessing.rst000066400000000000000000000006441303744607500223210ustar00rootroot00000000000000.. _using_with_multiprocessing: Using with :mod:`multiprocessing` --------------------------------- The :class:`~.ConnectionPool` class is not :mod:`multiprocessing`-safe. If you're using pycassa with multiprocessing, be sure to create one :class:`~.ConnectionPool` per process. Creating a :class:`~.ConnectionPool` before forking and sharing it among processes is inherently unsafe and will result in race conditions. pycassa-1.11.2.1/ez_setup.py000066400000000000000000000206241303744607500156230ustar00rootroot00000000000000#!python """Bootstrap setuptools installation If you want to use setuptools in your package's setup.py, just include this file in the same directory with it, and add this to the top of your setup.py:: from ez_setup import use_setuptools use_setuptools() If you want to require a specific version of setuptools, set a download mirror, or use an alternate download directory, you can do so by supplying the appropriate options to ``use_setuptools()``. This file can also be run as a script to install or upgrade setuptools. """ import os import shutil import sys import tempfile import tarfile import optparse import subprocess from distutils import log try: from site import USER_SITE except ImportError: USER_SITE = None DEFAULT_VERSION = "0.9.6" DEFAULT_URL = "https://pypi.python.org/packages/source/s/setuptools/" def _python_cmd(*args): args = (sys.executable,) + args return subprocess.call(args) == 0 def _install(tarball, install_args=()): # extracting the tarball tmpdir = tempfile.mkdtemp() log.warn('Extracting in %s', tmpdir) old_wd = os.getcwd() try: os.chdir(tmpdir) tar = tarfile.open(tarball) _extractall(tar) tar.close() # going in the directory subdir = os.path.join(tmpdir, os.listdir(tmpdir)[0]) os.chdir(subdir) log.warn('Now working in %s', subdir) # installing log.warn('Installing Setuptools') if not _python_cmd('setup.py', 'install', *install_args): log.warn('Something went wrong during the installation.') log.warn('See the error message above.') # exitcode will be 2 return 2 finally: os.chdir(old_wd) shutil.rmtree(tmpdir) def _build_egg(egg, tarball, to_dir): # extracting the tarball tmpdir = tempfile.mkdtemp() log.warn('Extracting in %s', tmpdir) old_wd = os.getcwd() try: os.chdir(tmpdir) tar = tarfile.open(tarball) _extractall(tar) tar.close() # going in the directory subdir = os.path.join(tmpdir, os.listdir(tmpdir)[0]) os.chdir(subdir) log.warn('Now working in %s', subdir) # building an egg log.warn('Building a Setuptools egg in %s', to_dir) _python_cmd('setup.py', '-q', 'bdist_egg', '--dist-dir', to_dir) finally: os.chdir(old_wd) shutil.rmtree(tmpdir) # returning the result log.warn(egg) if not os.path.exists(egg): raise IOError('Could not build the egg.') def _do_download(version, download_base, to_dir, download_delay): egg = os.path.join(to_dir, 'setuptools-%s-py%d.%d.egg' % (version, sys.version_info[0], sys.version_info[1])) if not os.path.exists(egg): tarball = download_setuptools(version, download_base, to_dir, download_delay) _build_egg(egg, tarball, to_dir) sys.path.insert(0, egg) import setuptools setuptools.bootstrap_install_from = egg def use_setuptools(version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir, download_delay=15): # making sure we use the absolute path to_dir = os.path.abspath(to_dir) was_imported = 'pkg_resources' in sys.modules or \ 'setuptools' in sys.modules try: import pkg_resources except ImportError: return _do_download(version, download_base, to_dir, download_delay) try: pkg_resources.require("setuptools>=" + version) return except pkg_resources.VersionConflict: e = sys.exc_info()[1] if was_imported: sys.stderr.write( "The required version of setuptools (>=%s) is not available,\n" "and can't be installed while this script is running. Please\n" "install a more recent version first, using\n" "'easy_install -U setuptools'." "\n\n(Currently using %r)\n" % (version, e.args[0])) sys.exit(2) else: del pkg_resources, sys.modules['pkg_resources'] # reload ok return _do_download(version, download_base, to_dir, download_delay) except pkg_resources.DistributionNotFound: return _do_download(version, download_base, to_dir, download_delay) def download_setuptools(version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir, delay=15): """Download setuptools from a specified location and return its filename `version` should be a valid setuptools version number that is available as an egg for download under the `download_base` URL (which should end with a '/'). `to_dir` is the directory where the egg will be downloaded. `delay` is the number of seconds to pause before an actual download attempt. """ # making sure we use the absolute path to_dir = os.path.abspath(to_dir) try: from urllib.request import urlopen except ImportError: from urllib2 import urlopen tgz_name = "setuptools-%s.tar.gz" % version url = download_base + tgz_name saveto = os.path.join(to_dir, tgz_name) src = dst = None if not os.path.exists(saveto): # Avoid repeated downloads try: log.warn("Downloading %s", url) src = urlopen(url) # Read/write all in one block, so we don't create a corrupt file # if the download is interrupted. data = src.read() dst = open(saveto, "wb") dst.write(data) finally: if src: src.close() if dst: dst.close() return os.path.realpath(saveto) def _extractall(self, path=".", members=None): """Extract all members from the archive to the current working directory and set owner, modification time and permissions on directories afterwards. `path' specifies a different directory to extract to. `members' is optional and must be a subset of the list returned by getmembers(). """ import copy import operator from tarfile import ExtractError directories = [] if members is None: members = self for tarinfo in members: if tarinfo.isdir(): # Extract directories with a safe mode. directories.append(tarinfo) tarinfo = copy.copy(tarinfo) tarinfo.mode = 448 # decimal for oct 0700 self.extract(tarinfo, path) # Reverse sort directories. if sys.version_info < (2, 4): def sorter(dir1, dir2): return cmp(dir1.name, dir2.name) directories.sort(sorter) directories.reverse() else: directories.sort(key=operator.attrgetter('name'), reverse=True) # Set correct owner, mtime and filemode on directories. for tarinfo in directories: dirpath = os.path.join(path, tarinfo.name) try: self.chown(tarinfo, dirpath) self.utime(tarinfo, dirpath) self.chmod(tarinfo, dirpath) except ExtractError: e = sys.exc_info()[1] if self.errorlevel > 1: raise else: self._dbg(1, "tarfile: %s" % e) def _build_install_args(options): """ Build the arguments to 'python setup.py install' on the setuptools package """ install_args = [] if options.user_install: if sys.version_info < (2, 6): log.warn("--user requires Python 2.6 or later") raise SystemExit(1) install_args.append('--user') return install_args def _parse_args(): """ Parse the command line for options """ parser = optparse.OptionParser() parser.add_option( '--user', dest='user_install', action='store_true', default=False, help='install in user site package (requires Python 2.6 or later)') parser.add_option( '--download-base', dest='download_base', metavar="URL", default=DEFAULT_URL, help='alternative URL from where to download the setuptools package') options, args = parser.parse_args() # positional arguments are ignored return options def main(version=DEFAULT_VERSION): """Install or upgrade setuptools and EasyInstall""" options = _parse_args() tarball = download_setuptools(download_base=options.download_base) return _install(tarball, _build_install_args(options)) if __name__ == '__main__': sys.exit(main()) pycassa-1.11.2.1/pycassa/000077500000000000000000000000001303744607500150525ustar00rootroot00000000000000pycassa-1.11.2.1/pycassa/__init__.py000066400000000000000000000007641303744607500171720ustar00rootroot00000000000000from pycassa.columnfamily import * from pycassa.columnfamilymap import * from pycassa.index import * from pycassa.pool import * from pycassa.system_manager import * from pycassa.cassandra.ttypes import AuthenticationException,\ AuthorizationException, ConsistencyLevel, InvalidRequestException,\ NotFoundException, UnavailableException, TimedOutException from pycassa.logging.pycassa_logger import * __version_info__ = (1, 11, 1, 'post') __version__ = '.'.join(map(str, __version_info__)) pycassa-1.11.2.1/pycassa/batch.py000066400000000000000000000174121303744607500165120ustar00rootroot00000000000000""" The batch interface allows insert, update, and remove operations to be performed in batches. This allows a convenient mechanism for streaming updates or doing a large number of operations while reducing number of RPC roundtrips. Batch mutator objects are synchronized and can be safely passed around threads. .. code-block:: python >>> b = cf.batch(queue_size=10) >>> b.insert('key1', {'col1':'value11', 'col2':'value21'}) >>> b.insert('key2', {'col1':'value12', 'col2':'value22'}, ttl=15) >>> b.remove('key1', ['col2']) >>> b.remove('key2') >>> b.send() One can use the `queue_size` argument to control how many mutations will be queued before an automatic :meth:`send` is performed. This allows simple streaming of updates. If set to ``None``, automatic checkpoints are disabled. Default is 100. Supercolumns are supported: .. code-block:: python >>> b = scf.batch() >>> b.insert('key1', {'supercol1': {'colA':'value1a', 'colB':'value1b'} ... {'supercol2': {'colA':'value2a', 'colB':'value2b'}}) >>> b.remove('key1', ['colA'], 'supercol1') >>> b.send() You may also create a :class:`.Mutator` directly, allowing operations on multiple column families: .. code-block:: python >>> b = Mutator(pool) >>> b.insert(cf, 'key1', {'col1':'value1', 'col2':'value2'}) >>> b.insert(supercf, 'key1', {'subkey1': {'col1':'value1', 'col2':'value2'}}) >>> b.send() .. note:: This interface does not implement atomic operations across column families. All the limitations of the `batch_mutate` Thrift API call applies. Remember, a mutation in Cassandra is always atomic per key per column family only. .. note:: If a single operation in a batch fails, the whole batch fails. In addition mutators can be used as context managers, where an implicit :meth:`send` will be called upon exit. .. code-block:: python >>> with cf.batch() as b: ... b.insert('key1', {'col1':'value11', 'col2':'value21'}) ... b.insert('key2', {'col1':'value12', 'col2':'value22'}) Calls to :meth:`insert` and :meth:`remove` can also be chained: .. code-block:: python >>> cf.batch().remove('foo').remove('bar').send() To use atomic batches (supported in Cassandra 1.2 and later), pass the atomic option in when creating the batch: .. code-block:: python >>> cf.batch(atomic=True) or when sending it: .. code-block:: python >>> b = cf.batch() >>> b.insert('key1', {'col1':'val2'}) >>> b.insert('key2', {'col1':'val2'}) >>> b.send(atomic=True) """ import threading from pycassa.cassandra.ttypes import (ConsistencyLevel, Deletion, Mutation, SlicePredicate) __all__ = ['Mutator', 'CfMutator'] class Mutator(object): """ Batch update convenience mechanism. Queues insert/update/remove operations and executes them when the queue is full or `send` is called explicitly. """ def __init__(self, pool, queue_size=100, write_consistency_level=None, allow_retries=True, atomic=False): """ `pool` is the :class:`~pycassa.pool.ConnectionPool` that will be used for operations. After `queue_size` operations, :meth:`send()` will be executed automatically. Use 0 to disable automatic sends. """ self._buffer = [] self._lock = threading.RLock() self.pool = pool self.limit = queue_size self.allow_retries = allow_retries self.atomic = atomic if write_consistency_level is None: self.write_consistency_level = ConsistencyLevel.ONE else: self.write_consistency_level = write_consistency_level def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): self.send() def _enqueue(self, key, column_family, mutations): self._lock.acquire() try: mutation = (key, column_family.column_family, mutations) self._buffer.append(mutation) if self.limit and len(self._buffer) >= self.limit: self.send() finally: self._lock.release() return self def send(self, write_consistency_level=None, atomic=None): """ Sends all operations currently in the batch and clears the batch. """ if write_consistency_level is None: write_consistency_level = self.write_consistency_level if atomic is None: atomic = self.atomic mutations = {} conn = None self._lock.acquire() try: for key, column_family, cols in self._buffer: mutations.setdefault(key, {}).setdefault(column_family, []).extend(cols) if mutations: conn = self.pool.get() mutatefn = conn.atomic_batch_mutate if atomic else conn.batch_mutate mutatefn(mutations, write_consistency_level, allow_retries=self.allow_retries) self._buffer = [] finally: if conn: conn.return_to_pool() self._lock.release() def insert(self, column_family, key, columns, timestamp=None, ttl=None): """ Adds a single row insert to the batch. `column_family` is the :class:`~pycassa.columnfamily.ColumnFamily` that the insert will be executed on. If this is used on a counter column family, integers may be used for column values, and they will be taken as counter adjustments. """ if columns: if timestamp is None: timestamp = column_family.timestamp() packed_key = column_family._pack_key(key) mut_list = column_family._make_mutation_list(columns, timestamp, ttl) self._enqueue(packed_key, column_family, mut_list) return self def remove(self, column_family, key, columns=None, super_column=None, timestamp=None): """ Adds a single row remove to the batch. `column_family` is the :class:`~pycassa.columnfamily.ColumnFamily` that the remove will be executed on. """ if timestamp is None: timestamp = column_family.timestamp() deletion = Deletion(timestamp=timestamp) _pack_name = column_family._pack_name if super_column is not None: deletion.super_column = _pack_name(super_column, True) if columns is not None: is_super = column_family.super and super_column is None packed_cols = [_pack_name(col, is_super) for col in columns] deletion.predicate = SlicePredicate(column_names=packed_cols) mutation = Mutation(deletion=deletion) packed_key = column_family._pack_key(key) self._enqueue(packed_key, column_family, (mutation,)) return self class CfMutator(Mutator): """ A :class:`~pycassa.batch.Mutator` that deals only with one column family. """ def __init__(self, column_family, queue_size=100, write_consistency_level=None, allow_retries=True, atomic=False): """ `column_family` is the :class:`~pycassa.columnfamily.ColumnFamily` that all operations will be executed on. """ wcl = write_consistency_level or column_family.write_consistency_level Mutator.__init__(self, column_family.pool, queue_size, wcl, allow_retries, atomic) self._column_family = column_family def insert(self, key, cols, timestamp=None, ttl=None): """ Adds a single row insert to the batch. """ return Mutator.insert(self, self._column_family, key, cols, timestamp, ttl) def remove(self, key, columns=None, super_column=None, timestamp=None): """ Adds a single row remove to the batch. """ return Mutator.remove(self, self._column_family, key, columns, super_column, timestamp) pycassa-1.11.2.1/pycassa/cassandra/000077500000000000000000000000001303744607500170115ustar00rootroot00000000000000pycassa-1.11.2.1/pycassa/cassandra/Cassandra.py000066400000000000000000011574301303744607500212750ustar00rootroot00000000000000# # Autogenerated by Thrift Compiler (0.9.0) # # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING # # options string: py:new_style # from thrift.Thrift import TType, TMessageType, TException, TApplicationException from ttypes import * from thrift.Thrift import TProcessor from thrift.transport import TTransport from thrift.protocol import TBinaryProtocol, TProtocol try: from thrift.protocol import fastbinary except: fastbinary = None class Iface(object): def login(self, auth_request): """ Parameters: - auth_request """ pass def set_keyspace(self, keyspace): """ Parameters: - keyspace """ pass def get(self, key, column_path, consistency_level): """ Get the Column or SuperColumn at the given column_path. If no value is present, NotFoundException is thrown. (This is the only method that can throw an exception under non-failure conditions.) Parameters: - key - column_path - consistency_level """ pass def get_slice(self, key, column_parent, predicate, consistency_level): """ Get the group of columns contained by column_parent (either a ColumnFamily name or a ColumnFamily/SuperColumn name pair) specified by the given SlicePredicate. If no matching values are found, an empty list is returned. Parameters: - key - column_parent - predicate - consistency_level """ pass def get_count(self, key, column_parent, predicate, consistency_level): """ returns the number of columns matching predicate for a particular key, ColumnFamily and optionally SuperColumn. Parameters: - key - column_parent - predicate - consistency_level """ pass def multiget_slice(self, keys, column_parent, predicate, consistency_level): """ Performs a get_slice for column_parent and predicate for the given keys in parallel. Parameters: - keys - column_parent - predicate - consistency_level """ pass def multiget_count(self, keys, column_parent, predicate, consistency_level): """ Perform a get_count in parallel on the given list keys. The return value maps keys to the count found. Parameters: - keys - column_parent - predicate - consistency_level """ pass def get_range_slices(self, column_parent, predicate, range, consistency_level): """ returns a subset of columns for a contiguous range of keys. Parameters: - column_parent - predicate - range - consistency_level """ pass def get_paged_slice(self, column_family, range, start_column, consistency_level): """ returns a range of columns, wrapping to the next rows if necessary to collect max_results. Parameters: - column_family - range - start_column - consistency_level """ pass def get_indexed_slices(self, column_parent, index_clause, column_predicate, consistency_level): """ Returns the subset of columns specified in SlicePredicate for the rows matching the IndexClause @deprecated use get_range_slices instead with range.row_filter specified Parameters: - column_parent - index_clause - column_predicate - consistency_level """ pass def insert(self, key, column_parent, column, consistency_level): """ Insert a Column at the given column_parent.column_family and optional column_parent.super_column. Parameters: - key - column_parent - column - consistency_level """ pass def add(self, key, column_parent, column, consistency_level): """ Increment or decrement a counter. Parameters: - key - column_parent - column - consistency_level """ pass def remove(self, key, column_path, timestamp, consistency_level): """ Remove data from the row specified by key at the granularity specified by column_path, and the given timestamp. Note that all the values in column_path besides column_path.column_family are truly optional: you can remove the entire row by just specifying the ColumnFamily, or you can remove a SuperColumn or a single Column by specifying those levels too. Parameters: - key - column_path - timestamp - consistency_level """ pass def remove_counter(self, key, path, consistency_level): """ Remove a counter at the specified location. Note that counters have limited support for deletes: if you remove a counter, you must wait to issue any following update until the delete has reached all the nodes and all of them have been fully compacted. Parameters: - key - path - consistency_level """ pass def batch_mutate(self, mutation_map, consistency_level): """ Mutate many columns or super columns for many row keys. See also: Mutation. mutation_map maps key to column family to a list of Mutation objects to take place at that scope. * Parameters: - mutation_map - consistency_level """ pass def atomic_batch_mutate(self, mutation_map, consistency_level): """ Atomically mutate many columns or super columns for many row keys. See also: Mutation. mutation_map maps key to column family to a list of Mutation objects to take place at that scope. * Parameters: - mutation_map - consistency_level """ pass def truncate(self, cfname): """ Truncate will mark and entire column family as deleted. From the user's perspective a successful call to truncate will result complete data deletion from cfname. Internally, however, disk space will not be immediatily released, as with all deletes in cassandra, this one only marks the data as deleted. The operation succeeds only if all hosts in the cluster at available and will throw an UnavailableException if some hosts are down. Parameters: - cfname """ pass def describe_schema_versions(self, ): """ for each schema version present in the cluster, returns a list of nodes at that version. hosts that do not respond will be under the key DatabaseDescriptor.INITIAL_VERSION. the cluster is all on the same version if the size of the map is 1. """ pass def describe_keyspaces(self, ): """ list the defined keyspaces in this cluster """ pass def describe_cluster_name(self, ): """ get the cluster name """ pass def describe_version(self, ): """ get the thrift api version """ pass def describe_ring(self, keyspace): """ get the token ring: a map of ranges to host addresses, represented as a set of TokenRange instead of a map from range to list of endpoints, because you can't use Thrift structs as map keys: https://issues.apache.org/jira/browse/THRIFT-162 for the same reason, we can't return a set here, even though order is neither important nor predictable. Parameters: - keyspace """ pass def describe_token_map(self, ): """ get the mapping between token->node ip without taking replication into consideration https://issues.apache.org/jira/browse/CASSANDRA-4092 """ pass def describe_partitioner(self, ): """ returns the partitioner used by this cluster """ pass def describe_snitch(self, ): """ returns the snitch used by this cluster """ pass def describe_keyspace(self, keyspace): """ describe specified keyspace Parameters: - keyspace """ pass def describe_splits(self, cfName, start_token, end_token, keys_per_split): """ experimental API for hadoop/parallel query support. may change violently and without warning. returns list of token strings such that first subrange is (list[0], list[1]], next is (list[1], list[2]], etc. Parameters: - cfName - start_token - end_token - keys_per_split """ pass def trace_next_query(self, ): """ Enables tracing for the next query in this connection and returns the UUID for that trace session The next query will be traced idependently of trace probability and the returned UUID can be used to query the trace keyspace """ pass def describe_splits_ex(self, cfName, start_token, end_token, keys_per_split): """ Parameters: - cfName - start_token - end_token - keys_per_split """ pass def system_add_column_family(self, cf_def): """ adds a column family. returns the new schema id. Parameters: - cf_def """ pass def system_drop_column_family(self, column_family): """ drops a column family. returns the new schema id. Parameters: - column_family """ pass def system_add_keyspace(self, ks_def): """ adds a keyspace and any column families that are part of it. returns the new schema id. Parameters: - ks_def """ pass def system_drop_keyspace(self, keyspace): """ drops a keyspace and any column families that are part of it. returns the new schema id. Parameters: - keyspace """ pass def system_update_keyspace(self, ks_def): """ updates properties of a keyspace. returns the new schema id. Parameters: - ks_def """ pass def system_update_column_family(self, cf_def): """ updates properties of a column family. returns the new schema id. Parameters: - cf_def """ pass def execute_cql_query(self, query, compression): """ Executes a CQL (Cassandra Query Language) statement and returns a CqlResult containing the results. Parameters: - query - compression """ pass def execute_cql3_query(self, query, compression, consistency): """ Parameters: - query - compression - consistency """ pass def prepare_cql_query(self, query, compression): """ Prepare a CQL (Cassandra Query Language) statement by compiling and returning - the type of CQL statement - an id token of the compiled CQL stored on the server side. - a count of the discovered bound markers in the statement Parameters: - query - compression """ pass def prepare_cql3_query(self, query, compression): """ Parameters: - query - compression """ pass def execute_prepared_cql_query(self, itemId, values): """ Executes a prepared CQL (Cassandra Query Language) statement by passing an id token and a list of variables to bind and returns a CqlResult containing the results. Parameters: - itemId - values """ pass def execute_prepared_cql3_query(self, itemId, values, consistency): """ Parameters: - itemId - values - consistency """ pass def set_cql_version(self, version): """ @deprecated This is now a no-op. Please use the CQL3 specific methods instead. Parameters: - version """ pass class Client(Iface): def __init__(self, iprot, oprot=None): self._iprot = self._oprot = iprot if oprot is not None: self._oprot = oprot self._seqid = 0 def login(self, auth_request): """ Parameters: - auth_request """ self.send_login(auth_request) self.recv_login() def send_login(self, auth_request): self._oprot.writeMessageBegin('login', TMessageType.CALL, self._seqid) args = login_args() args.auth_request = auth_request args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_login(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = login_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.authnx is not None: raise result.authnx if result.authzx is not None: raise result.authzx return def set_keyspace(self, keyspace): """ Parameters: - keyspace """ self.send_set_keyspace(keyspace) self.recv_set_keyspace() def send_set_keyspace(self, keyspace): self._oprot.writeMessageBegin('set_keyspace', TMessageType.CALL, self._seqid) args = set_keyspace_args() args.keyspace = keyspace args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_set_keyspace(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = set_keyspace_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.ire is not None: raise result.ire return def get(self, key, column_path, consistency_level): """ Get the Column or SuperColumn at the given column_path. If no value is present, NotFoundException is thrown. (This is the only method that can throw an exception under non-failure conditions.) Parameters: - key - column_path - consistency_level """ self.send_get(key, column_path, consistency_level) return self.recv_get() def send_get(self, key, column_path, consistency_level): self._oprot.writeMessageBegin('get', TMessageType.CALL, self._seqid) args = get_args() args.key = key args.column_path = column_path args.consistency_level = consistency_level args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_get(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = get_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire if result.nfe is not None: raise result.nfe if result.ue is not None: raise result.ue if result.te is not None: raise result.te raise TApplicationException(TApplicationException.MISSING_RESULT, "get failed: unknown result"); def get_slice(self, key, column_parent, predicate, consistency_level): """ Get the group of columns contained by column_parent (either a ColumnFamily name or a ColumnFamily/SuperColumn name pair) specified by the given SlicePredicate. If no matching values are found, an empty list is returned. Parameters: - key - column_parent - predicate - consistency_level """ self.send_get_slice(key, column_parent, predicate, consistency_level) return self.recv_get_slice() def send_get_slice(self, key, column_parent, predicate, consistency_level): self._oprot.writeMessageBegin('get_slice', TMessageType.CALL, self._seqid) args = get_slice_args() args.key = key args.column_parent = column_parent args.predicate = predicate args.consistency_level = consistency_level args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_get_slice(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = get_slice_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire if result.ue is not None: raise result.ue if result.te is not None: raise result.te raise TApplicationException(TApplicationException.MISSING_RESULT, "get_slice failed: unknown result"); def get_count(self, key, column_parent, predicate, consistency_level): """ returns the number of columns matching predicate for a particular key, ColumnFamily and optionally SuperColumn. Parameters: - key - column_parent - predicate - consistency_level """ self.send_get_count(key, column_parent, predicate, consistency_level) return self.recv_get_count() def send_get_count(self, key, column_parent, predicate, consistency_level): self._oprot.writeMessageBegin('get_count', TMessageType.CALL, self._seqid) args = get_count_args() args.key = key args.column_parent = column_parent args.predicate = predicate args.consistency_level = consistency_level args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_get_count(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = get_count_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire if result.ue is not None: raise result.ue if result.te is not None: raise result.te raise TApplicationException(TApplicationException.MISSING_RESULT, "get_count failed: unknown result"); def multiget_slice(self, keys, column_parent, predicate, consistency_level): """ Performs a get_slice for column_parent and predicate for the given keys in parallel. Parameters: - keys - column_parent - predicate - consistency_level """ self.send_multiget_slice(keys, column_parent, predicate, consistency_level) return self.recv_multiget_slice() def send_multiget_slice(self, keys, column_parent, predicate, consistency_level): self._oprot.writeMessageBegin('multiget_slice', TMessageType.CALL, self._seqid) args = multiget_slice_args() args.keys = keys args.column_parent = column_parent args.predicate = predicate args.consistency_level = consistency_level args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_multiget_slice(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = multiget_slice_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire if result.ue is not None: raise result.ue if result.te is not None: raise result.te raise TApplicationException(TApplicationException.MISSING_RESULT, "multiget_slice failed: unknown result"); def multiget_count(self, keys, column_parent, predicate, consistency_level): """ Perform a get_count in parallel on the given list keys. The return value maps keys to the count found. Parameters: - keys - column_parent - predicate - consistency_level """ self.send_multiget_count(keys, column_parent, predicate, consistency_level) return self.recv_multiget_count() def send_multiget_count(self, keys, column_parent, predicate, consistency_level): self._oprot.writeMessageBegin('multiget_count', TMessageType.CALL, self._seqid) args = multiget_count_args() args.keys = keys args.column_parent = column_parent args.predicate = predicate args.consistency_level = consistency_level args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_multiget_count(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = multiget_count_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire if result.ue is not None: raise result.ue if result.te is not None: raise result.te raise TApplicationException(TApplicationException.MISSING_RESULT, "multiget_count failed: unknown result"); def get_range_slices(self, column_parent, predicate, range, consistency_level): """ returns a subset of columns for a contiguous range of keys. Parameters: - column_parent - predicate - range - consistency_level """ self.send_get_range_slices(column_parent, predicate, range, consistency_level) return self.recv_get_range_slices() def send_get_range_slices(self, column_parent, predicate, range, consistency_level): self._oprot.writeMessageBegin('get_range_slices', TMessageType.CALL, self._seqid) args = get_range_slices_args() args.column_parent = column_parent args.predicate = predicate args.range = range args.consistency_level = consistency_level args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_get_range_slices(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = get_range_slices_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire if result.ue is not None: raise result.ue if result.te is not None: raise result.te raise TApplicationException(TApplicationException.MISSING_RESULT, "get_range_slices failed: unknown result"); def get_paged_slice(self, column_family, range, start_column, consistency_level): """ returns a range of columns, wrapping to the next rows if necessary to collect max_results. Parameters: - column_family - range - start_column - consistency_level """ self.send_get_paged_slice(column_family, range, start_column, consistency_level) return self.recv_get_paged_slice() def send_get_paged_slice(self, column_family, range, start_column, consistency_level): self._oprot.writeMessageBegin('get_paged_slice', TMessageType.CALL, self._seqid) args = get_paged_slice_args() args.column_family = column_family args.range = range args.start_column = start_column args.consistency_level = consistency_level args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_get_paged_slice(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = get_paged_slice_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire if result.ue is not None: raise result.ue if result.te is not None: raise result.te raise TApplicationException(TApplicationException.MISSING_RESULT, "get_paged_slice failed: unknown result"); def get_indexed_slices(self, column_parent, index_clause, column_predicate, consistency_level): """ Returns the subset of columns specified in SlicePredicate for the rows matching the IndexClause @deprecated use get_range_slices instead with range.row_filter specified Parameters: - column_parent - index_clause - column_predicate - consistency_level """ self.send_get_indexed_slices(column_parent, index_clause, column_predicate, consistency_level) return self.recv_get_indexed_slices() def send_get_indexed_slices(self, column_parent, index_clause, column_predicate, consistency_level): self._oprot.writeMessageBegin('get_indexed_slices', TMessageType.CALL, self._seqid) args = get_indexed_slices_args() args.column_parent = column_parent args.index_clause = index_clause args.column_predicate = column_predicate args.consistency_level = consistency_level args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_get_indexed_slices(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = get_indexed_slices_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire if result.ue is not None: raise result.ue if result.te is not None: raise result.te raise TApplicationException(TApplicationException.MISSING_RESULT, "get_indexed_slices failed: unknown result"); def insert(self, key, column_parent, column, consistency_level): """ Insert a Column at the given column_parent.column_family and optional column_parent.super_column. Parameters: - key - column_parent - column - consistency_level """ self.send_insert(key, column_parent, column, consistency_level) self.recv_insert() def send_insert(self, key, column_parent, column, consistency_level): self._oprot.writeMessageBegin('insert', TMessageType.CALL, self._seqid) args = insert_args() args.key = key args.column_parent = column_parent args.column = column args.consistency_level = consistency_level args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_insert(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = insert_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.ire is not None: raise result.ire if result.ue is not None: raise result.ue if result.te is not None: raise result.te return def add(self, key, column_parent, column, consistency_level): """ Increment or decrement a counter. Parameters: - key - column_parent - column - consistency_level """ self.send_add(key, column_parent, column, consistency_level) self.recv_add() def send_add(self, key, column_parent, column, consistency_level): self._oprot.writeMessageBegin('add', TMessageType.CALL, self._seqid) args = add_args() args.key = key args.column_parent = column_parent args.column = column args.consistency_level = consistency_level args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_add(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = add_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.ire is not None: raise result.ire if result.ue is not None: raise result.ue if result.te is not None: raise result.te return def remove(self, key, column_path, timestamp, consistency_level): """ Remove data from the row specified by key at the granularity specified by column_path, and the given timestamp. Note that all the values in column_path besides column_path.column_family are truly optional: you can remove the entire row by just specifying the ColumnFamily, or you can remove a SuperColumn or a single Column by specifying those levels too. Parameters: - key - column_path - timestamp - consistency_level """ self.send_remove(key, column_path, timestamp, consistency_level) self.recv_remove() def send_remove(self, key, column_path, timestamp, consistency_level): self._oprot.writeMessageBegin('remove', TMessageType.CALL, self._seqid) args = remove_args() args.key = key args.column_path = column_path args.timestamp = timestamp args.consistency_level = consistency_level args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_remove(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = remove_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.ire is not None: raise result.ire if result.ue is not None: raise result.ue if result.te is not None: raise result.te return def remove_counter(self, key, path, consistency_level): """ Remove a counter at the specified location. Note that counters have limited support for deletes: if you remove a counter, you must wait to issue any following update until the delete has reached all the nodes and all of them have been fully compacted. Parameters: - key - path - consistency_level """ self.send_remove_counter(key, path, consistency_level) self.recv_remove_counter() def send_remove_counter(self, key, path, consistency_level): self._oprot.writeMessageBegin('remove_counter', TMessageType.CALL, self._seqid) args = remove_counter_args() args.key = key args.path = path args.consistency_level = consistency_level args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_remove_counter(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = remove_counter_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.ire is not None: raise result.ire if result.ue is not None: raise result.ue if result.te is not None: raise result.te return def batch_mutate(self, mutation_map, consistency_level): """ Mutate many columns or super columns for many row keys. See also: Mutation. mutation_map maps key to column family to a list of Mutation objects to take place at that scope. * Parameters: - mutation_map - consistency_level """ self.send_batch_mutate(mutation_map, consistency_level) self.recv_batch_mutate() def send_batch_mutate(self, mutation_map, consistency_level): self._oprot.writeMessageBegin('batch_mutate', TMessageType.CALL, self._seqid) args = batch_mutate_args() args.mutation_map = mutation_map args.consistency_level = consistency_level args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_batch_mutate(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = batch_mutate_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.ire is not None: raise result.ire if result.ue is not None: raise result.ue if result.te is not None: raise result.te return def atomic_batch_mutate(self, mutation_map, consistency_level): """ Atomically mutate many columns or super columns for many row keys. See also: Mutation. mutation_map maps key to column family to a list of Mutation objects to take place at that scope. * Parameters: - mutation_map - consistency_level """ self.send_atomic_batch_mutate(mutation_map, consistency_level) self.recv_atomic_batch_mutate() def send_atomic_batch_mutate(self, mutation_map, consistency_level): self._oprot.writeMessageBegin('atomic_batch_mutate', TMessageType.CALL, self._seqid) args = atomic_batch_mutate_args() args.mutation_map = mutation_map args.consistency_level = consistency_level args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_atomic_batch_mutate(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = atomic_batch_mutate_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.ire is not None: raise result.ire if result.ue is not None: raise result.ue if result.te is not None: raise result.te return def truncate(self, cfname): """ Truncate will mark and entire column family as deleted. From the user's perspective a successful call to truncate will result complete data deletion from cfname. Internally, however, disk space will not be immediatily released, as with all deletes in cassandra, this one only marks the data as deleted. The operation succeeds only if all hosts in the cluster at available and will throw an UnavailableException if some hosts are down. Parameters: - cfname """ self.send_truncate(cfname) self.recv_truncate() def send_truncate(self, cfname): self._oprot.writeMessageBegin('truncate', TMessageType.CALL, self._seqid) args = truncate_args() args.cfname = cfname args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_truncate(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = truncate_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.ire is not None: raise result.ire if result.ue is not None: raise result.ue if result.te is not None: raise result.te return def describe_schema_versions(self, ): """ for each schema version present in the cluster, returns a list of nodes at that version. hosts that do not respond will be under the key DatabaseDescriptor.INITIAL_VERSION. the cluster is all on the same version if the size of the map is 1. """ self.send_describe_schema_versions() return self.recv_describe_schema_versions() def send_describe_schema_versions(self, ): self._oprot.writeMessageBegin('describe_schema_versions', TMessageType.CALL, self._seqid) args = describe_schema_versions_args() args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_describe_schema_versions(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = describe_schema_versions_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire raise TApplicationException(TApplicationException.MISSING_RESULT, "describe_schema_versions failed: unknown result"); def describe_keyspaces(self, ): """ list the defined keyspaces in this cluster """ self.send_describe_keyspaces() return self.recv_describe_keyspaces() def send_describe_keyspaces(self, ): self._oprot.writeMessageBegin('describe_keyspaces', TMessageType.CALL, self._seqid) args = describe_keyspaces_args() args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_describe_keyspaces(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = describe_keyspaces_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire raise TApplicationException(TApplicationException.MISSING_RESULT, "describe_keyspaces failed: unknown result"); def describe_cluster_name(self, ): """ get the cluster name """ self.send_describe_cluster_name() return self.recv_describe_cluster_name() def send_describe_cluster_name(self, ): self._oprot.writeMessageBegin('describe_cluster_name', TMessageType.CALL, self._seqid) args = describe_cluster_name_args() args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_describe_cluster_name(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = describe_cluster_name_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success raise TApplicationException(TApplicationException.MISSING_RESULT, "describe_cluster_name failed: unknown result"); def describe_version(self, ): """ get the thrift api version """ self.send_describe_version() return self.recv_describe_version() def send_describe_version(self, ): self._oprot.writeMessageBegin('describe_version', TMessageType.CALL, self._seqid) args = describe_version_args() args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_describe_version(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = describe_version_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success raise TApplicationException(TApplicationException.MISSING_RESULT, "describe_version failed: unknown result"); def describe_ring(self, keyspace): """ get the token ring: a map of ranges to host addresses, represented as a set of TokenRange instead of a map from range to list of endpoints, because you can't use Thrift structs as map keys: https://issues.apache.org/jira/browse/THRIFT-162 for the same reason, we can't return a set here, even though order is neither important nor predictable. Parameters: - keyspace """ self.send_describe_ring(keyspace) return self.recv_describe_ring() def send_describe_ring(self, keyspace): self._oprot.writeMessageBegin('describe_ring', TMessageType.CALL, self._seqid) args = describe_ring_args() args.keyspace = keyspace args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_describe_ring(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = describe_ring_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire raise TApplicationException(TApplicationException.MISSING_RESULT, "describe_ring failed: unknown result"); def describe_token_map(self, ): """ get the mapping between token->node ip without taking replication into consideration https://issues.apache.org/jira/browse/CASSANDRA-4092 """ self.send_describe_token_map() return self.recv_describe_token_map() def send_describe_token_map(self, ): self._oprot.writeMessageBegin('describe_token_map', TMessageType.CALL, self._seqid) args = describe_token_map_args() args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_describe_token_map(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = describe_token_map_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire raise TApplicationException(TApplicationException.MISSING_RESULT, "describe_token_map failed: unknown result"); def describe_partitioner(self, ): """ returns the partitioner used by this cluster """ self.send_describe_partitioner() return self.recv_describe_partitioner() def send_describe_partitioner(self, ): self._oprot.writeMessageBegin('describe_partitioner', TMessageType.CALL, self._seqid) args = describe_partitioner_args() args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_describe_partitioner(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = describe_partitioner_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success raise TApplicationException(TApplicationException.MISSING_RESULT, "describe_partitioner failed: unknown result"); def describe_snitch(self, ): """ returns the snitch used by this cluster """ self.send_describe_snitch() return self.recv_describe_snitch() def send_describe_snitch(self, ): self._oprot.writeMessageBegin('describe_snitch', TMessageType.CALL, self._seqid) args = describe_snitch_args() args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_describe_snitch(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = describe_snitch_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success raise TApplicationException(TApplicationException.MISSING_RESULT, "describe_snitch failed: unknown result"); def describe_keyspace(self, keyspace): """ describe specified keyspace Parameters: - keyspace """ self.send_describe_keyspace(keyspace) return self.recv_describe_keyspace() def send_describe_keyspace(self, keyspace): self._oprot.writeMessageBegin('describe_keyspace', TMessageType.CALL, self._seqid) args = describe_keyspace_args() args.keyspace = keyspace args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_describe_keyspace(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = describe_keyspace_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.nfe is not None: raise result.nfe if result.ire is not None: raise result.ire raise TApplicationException(TApplicationException.MISSING_RESULT, "describe_keyspace failed: unknown result"); def describe_splits(self, cfName, start_token, end_token, keys_per_split): """ experimental API for hadoop/parallel query support. may change violently and without warning. returns list of token strings such that first subrange is (list[0], list[1]], next is (list[1], list[2]], etc. Parameters: - cfName - start_token - end_token - keys_per_split """ self.send_describe_splits(cfName, start_token, end_token, keys_per_split) return self.recv_describe_splits() def send_describe_splits(self, cfName, start_token, end_token, keys_per_split): self._oprot.writeMessageBegin('describe_splits', TMessageType.CALL, self._seqid) args = describe_splits_args() args.cfName = cfName args.start_token = start_token args.end_token = end_token args.keys_per_split = keys_per_split args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_describe_splits(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = describe_splits_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire raise TApplicationException(TApplicationException.MISSING_RESULT, "describe_splits failed: unknown result"); def trace_next_query(self, ): """ Enables tracing for the next query in this connection and returns the UUID for that trace session The next query will be traced idependently of trace probability and the returned UUID can be used to query the trace keyspace """ self.send_trace_next_query() return self.recv_trace_next_query() def send_trace_next_query(self, ): self._oprot.writeMessageBegin('trace_next_query', TMessageType.CALL, self._seqid) args = trace_next_query_args() args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_trace_next_query(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = trace_next_query_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success raise TApplicationException(TApplicationException.MISSING_RESULT, "trace_next_query failed: unknown result"); def describe_splits_ex(self, cfName, start_token, end_token, keys_per_split): """ Parameters: - cfName - start_token - end_token - keys_per_split """ self.send_describe_splits_ex(cfName, start_token, end_token, keys_per_split) return self.recv_describe_splits_ex() def send_describe_splits_ex(self, cfName, start_token, end_token, keys_per_split): self._oprot.writeMessageBegin('describe_splits_ex', TMessageType.CALL, self._seqid) args = describe_splits_ex_args() args.cfName = cfName args.start_token = start_token args.end_token = end_token args.keys_per_split = keys_per_split args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_describe_splits_ex(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = describe_splits_ex_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire raise TApplicationException(TApplicationException.MISSING_RESULT, "describe_splits_ex failed: unknown result"); def system_add_column_family(self, cf_def): """ adds a column family. returns the new schema id. Parameters: - cf_def """ self.send_system_add_column_family(cf_def) return self.recv_system_add_column_family() def send_system_add_column_family(self, cf_def): self._oprot.writeMessageBegin('system_add_column_family', TMessageType.CALL, self._seqid) args = system_add_column_family_args() args.cf_def = cf_def args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_system_add_column_family(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = system_add_column_family_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire if result.sde is not None: raise result.sde raise TApplicationException(TApplicationException.MISSING_RESULT, "system_add_column_family failed: unknown result"); def system_drop_column_family(self, column_family): """ drops a column family. returns the new schema id. Parameters: - column_family """ self.send_system_drop_column_family(column_family) return self.recv_system_drop_column_family() def send_system_drop_column_family(self, column_family): self._oprot.writeMessageBegin('system_drop_column_family', TMessageType.CALL, self._seqid) args = system_drop_column_family_args() args.column_family = column_family args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_system_drop_column_family(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = system_drop_column_family_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire if result.sde is not None: raise result.sde raise TApplicationException(TApplicationException.MISSING_RESULT, "system_drop_column_family failed: unknown result"); def system_add_keyspace(self, ks_def): """ adds a keyspace and any column families that are part of it. returns the new schema id. Parameters: - ks_def """ self.send_system_add_keyspace(ks_def) return self.recv_system_add_keyspace() def send_system_add_keyspace(self, ks_def): self._oprot.writeMessageBegin('system_add_keyspace', TMessageType.CALL, self._seqid) args = system_add_keyspace_args() args.ks_def = ks_def args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_system_add_keyspace(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = system_add_keyspace_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire if result.sde is not None: raise result.sde raise TApplicationException(TApplicationException.MISSING_RESULT, "system_add_keyspace failed: unknown result"); def system_drop_keyspace(self, keyspace): """ drops a keyspace and any column families that are part of it. returns the new schema id. Parameters: - keyspace """ self.send_system_drop_keyspace(keyspace) return self.recv_system_drop_keyspace() def send_system_drop_keyspace(self, keyspace): self._oprot.writeMessageBegin('system_drop_keyspace', TMessageType.CALL, self._seqid) args = system_drop_keyspace_args() args.keyspace = keyspace args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_system_drop_keyspace(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = system_drop_keyspace_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire if result.sde is not None: raise result.sde raise TApplicationException(TApplicationException.MISSING_RESULT, "system_drop_keyspace failed: unknown result"); def system_update_keyspace(self, ks_def): """ updates properties of a keyspace. returns the new schema id. Parameters: - ks_def """ self.send_system_update_keyspace(ks_def) return self.recv_system_update_keyspace() def send_system_update_keyspace(self, ks_def): self._oprot.writeMessageBegin('system_update_keyspace', TMessageType.CALL, self._seqid) args = system_update_keyspace_args() args.ks_def = ks_def args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_system_update_keyspace(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = system_update_keyspace_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire if result.sde is not None: raise result.sde raise TApplicationException(TApplicationException.MISSING_RESULT, "system_update_keyspace failed: unknown result"); def system_update_column_family(self, cf_def): """ updates properties of a column family. returns the new schema id. Parameters: - cf_def """ self.send_system_update_column_family(cf_def) return self.recv_system_update_column_family() def send_system_update_column_family(self, cf_def): self._oprot.writeMessageBegin('system_update_column_family', TMessageType.CALL, self._seqid) args = system_update_column_family_args() args.cf_def = cf_def args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_system_update_column_family(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = system_update_column_family_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire if result.sde is not None: raise result.sde raise TApplicationException(TApplicationException.MISSING_RESULT, "system_update_column_family failed: unknown result"); def execute_cql_query(self, query, compression): """ Executes a CQL (Cassandra Query Language) statement and returns a CqlResult containing the results. Parameters: - query - compression """ self.send_execute_cql_query(query, compression) return self.recv_execute_cql_query() def send_execute_cql_query(self, query, compression): self._oprot.writeMessageBegin('execute_cql_query', TMessageType.CALL, self._seqid) args = execute_cql_query_args() args.query = query args.compression = compression args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_execute_cql_query(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = execute_cql_query_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire if result.ue is not None: raise result.ue if result.te is not None: raise result.te if result.sde is not None: raise result.sde raise TApplicationException(TApplicationException.MISSING_RESULT, "execute_cql_query failed: unknown result"); def execute_cql3_query(self, query, compression, consistency): """ Parameters: - query - compression - consistency """ self.send_execute_cql3_query(query, compression, consistency) return self.recv_execute_cql3_query() def send_execute_cql3_query(self, query, compression, consistency): self._oprot.writeMessageBegin('execute_cql3_query', TMessageType.CALL, self._seqid) args = execute_cql3_query_args() args.query = query args.compression = compression args.consistency = consistency args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_execute_cql3_query(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = execute_cql3_query_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire if result.ue is not None: raise result.ue if result.te is not None: raise result.te if result.sde is not None: raise result.sde raise TApplicationException(TApplicationException.MISSING_RESULT, "execute_cql3_query failed: unknown result"); def prepare_cql_query(self, query, compression): """ Prepare a CQL (Cassandra Query Language) statement by compiling and returning - the type of CQL statement - an id token of the compiled CQL stored on the server side. - a count of the discovered bound markers in the statement Parameters: - query - compression """ self.send_prepare_cql_query(query, compression) return self.recv_prepare_cql_query() def send_prepare_cql_query(self, query, compression): self._oprot.writeMessageBegin('prepare_cql_query', TMessageType.CALL, self._seqid) args = prepare_cql_query_args() args.query = query args.compression = compression args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_prepare_cql_query(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = prepare_cql_query_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire raise TApplicationException(TApplicationException.MISSING_RESULT, "prepare_cql_query failed: unknown result"); def prepare_cql3_query(self, query, compression): """ Parameters: - query - compression """ self.send_prepare_cql3_query(query, compression) return self.recv_prepare_cql3_query() def send_prepare_cql3_query(self, query, compression): self._oprot.writeMessageBegin('prepare_cql3_query', TMessageType.CALL, self._seqid) args = prepare_cql3_query_args() args.query = query args.compression = compression args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_prepare_cql3_query(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = prepare_cql3_query_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire raise TApplicationException(TApplicationException.MISSING_RESULT, "prepare_cql3_query failed: unknown result"); def execute_prepared_cql_query(self, itemId, values): """ Executes a prepared CQL (Cassandra Query Language) statement by passing an id token and a list of variables to bind and returns a CqlResult containing the results. Parameters: - itemId - values """ self.send_execute_prepared_cql_query(itemId, values) return self.recv_execute_prepared_cql_query() def send_execute_prepared_cql_query(self, itemId, values): self._oprot.writeMessageBegin('execute_prepared_cql_query', TMessageType.CALL, self._seqid) args = execute_prepared_cql_query_args() args.itemId = itemId args.values = values args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_execute_prepared_cql_query(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = execute_prepared_cql_query_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire if result.ue is not None: raise result.ue if result.te is not None: raise result.te if result.sde is not None: raise result.sde raise TApplicationException(TApplicationException.MISSING_RESULT, "execute_prepared_cql_query failed: unknown result"); def execute_prepared_cql3_query(self, itemId, values, consistency): """ Parameters: - itemId - values - consistency """ self.send_execute_prepared_cql3_query(itemId, values, consistency) return self.recv_execute_prepared_cql3_query() def send_execute_prepared_cql3_query(self, itemId, values, consistency): self._oprot.writeMessageBegin('execute_prepared_cql3_query', TMessageType.CALL, self._seqid) args = execute_prepared_cql3_query_args() args.itemId = itemId args.values = values args.consistency = consistency args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_execute_prepared_cql3_query(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = execute_prepared_cql3_query_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.success is not None: return result.success if result.ire is not None: raise result.ire if result.ue is not None: raise result.ue if result.te is not None: raise result.te if result.sde is not None: raise result.sde raise TApplicationException(TApplicationException.MISSING_RESULT, "execute_prepared_cql3_query failed: unknown result"); def set_cql_version(self, version): """ @deprecated This is now a no-op. Please use the CQL3 specific methods instead. Parameters: - version """ self.send_set_cql_version(version) self.recv_set_cql_version() def send_set_cql_version(self, version): self._oprot.writeMessageBegin('set_cql_version', TMessageType.CALL, self._seqid) args = set_cql_version_args() args.version = version args.write(self._oprot) self._oprot.writeMessageEnd() self._oprot.trans.flush() def recv_set_cql_version(self, ): (fname, mtype, rseqid) = self._iprot.readMessageBegin() if mtype == TMessageType.EXCEPTION: x = TApplicationException() x.read(self._iprot) self._iprot.readMessageEnd() raise x result = set_cql_version_result() result.read(self._iprot) self._iprot.readMessageEnd() if result.ire is not None: raise result.ire return class Processor(Iface, TProcessor): def __init__(self, handler): self._handler = handler self._processMap = {} self._processMap["login"] = Processor.process_login self._processMap["set_keyspace"] = Processor.process_set_keyspace self._processMap["get"] = Processor.process_get self._processMap["get_slice"] = Processor.process_get_slice self._processMap["get_count"] = Processor.process_get_count self._processMap["multiget_slice"] = Processor.process_multiget_slice self._processMap["multiget_count"] = Processor.process_multiget_count self._processMap["get_range_slices"] = Processor.process_get_range_slices self._processMap["get_paged_slice"] = Processor.process_get_paged_slice self._processMap["get_indexed_slices"] = Processor.process_get_indexed_slices self._processMap["insert"] = Processor.process_insert self._processMap["add"] = Processor.process_add self._processMap["remove"] = Processor.process_remove self._processMap["remove_counter"] = Processor.process_remove_counter self._processMap["batch_mutate"] = Processor.process_batch_mutate self._processMap["atomic_batch_mutate"] = Processor.process_atomic_batch_mutate self._processMap["truncate"] = Processor.process_truncate self._processMap["describe_schema_versions"] = Processor.process_describe_schema_versions self._processMap["describe_keyspaces"] = Processor.process_describe_keyspaces self._processMap["describe_cluster_name"] = Processor.process_describe_cluster_name self._processMap["describe_version"] = Processor.process_describe_version self._processMap["describe_ring"] = Processor.process_describe_ring self._processMap["describe_token_map"] = Processor.process_describe_token_map self._processMap["describe_partitioner"] = Processor.process_describe_partitioner self._processMap["describe_snitch"] = Processor.process_describe_snitch self._processMap["describe_keyspace"] = Processor.process_describe_keyspace self._processMap["describe_splits"] = Processor.process_describe_splits self._processMap["trace_next_query"] = Processor.process_trace_next_query self._processMap["describe_splits_ex"] = Processor.process_describe_splits_ex self._processMap["system_add_column_family"] = Processor.process_system_add_column_family self._processMap["system_drop_column_family"] = Processor.process_system_drop_column_family self._processMap["system_add_keyspace"] = Processor.process_system_add_keyspace self._processMap["system_drop_keyspace"] = Processor.process_system_drop_keyspace self._processMap["system_update_keyspace"] = Processor.process_system_update_keyspace self._processMap["system_update_column_family"] = Processor.process_system_update_column_family self._processMap["execute_cql_query"] = Processor.process_execute_cql_query self._processMap["execute_cql3_query"] = Processor.process_execute_cql3_query self._processMap["prepare_cql_query"] = Processor.process_prepare_cql_query self._processMap["prepare_cql3_query"] = Processor.process_prepare_cql3_query self._processMap["execute_prepared_cql_query"] = Processor.process_execute_prepared_cql_query self._processMap["execute_prepared_cql3_query"] = Processor.process_execute_prepared_cql3_query self._processMap["set_cql_version"] = Processor.process_set_cql_version def process(self, iprot, oprot): (name, type, seqid) = iprot.readMessageBegin() if name not in self._processMap: iprot.skip(TType.STRUCT) iprot.readMessageEnd() x = TApplicationException(TApplicationException.UNKNOWN_METHOD, 'Unknown function %s' % (name)) oprot.writeMessageBegin(name, TMessageType.EXCEPTION, seqid) x.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() return else: self._processMap[name](self, seqid, iprot, oprot) return True def process_login(self, seqid, iprot, oprot): args = login_args() args.read(iprot) iprot.readMessageEnd() result = login_result() try: self._handler.login(args.auth_request) except AuthenticationException as authnx: result.authnx = authnx except AuthorizationException as authzx: result.authzx = authzx oprot.writeMessageBegin("login", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_set_keyspace(self, seqid, iprot, oprot): args = set_keyspace_args() args.read(iprot) iprot.readMessageEnd() result = set_keyspace_result() try: self._handler.set_keyspace(args.keyspace) except InvalidRequestException as ire: result.ire = ire oprot.writeMessageBegin("set_keyspace", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_get(self, seqid, iprot, oprot): args = get_args() args.read(iprot) iprot.readMessageEnd() result = get_result() try: result.success = self._handler.get(args.key, args.column_path, args.consistency_level) except InvalidRequestException as ire: result.ire = ire except NotFoundException as nfe: result.nfe = nfe except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te oprot.writeMessageBegin("get", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_get_slice(self, seqid, iprot, oprot): args = get_slice_args() args.read(iprot) iprot.readMessageEnd() result = get_slice_result() try: result.success = self._handler.get_slice(args.key, args.column_parent, args.predicate, args.consistency_level) except InvalidRequestException as ire: result.ire = ire except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te oprot.writeMessageBegin("get_slice", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_get_count(self, seqid, iprot, oprot): args = get_count_args() args.read(iprot) iprot.readMessageEnd() result = get_count_result() try: result.success = self._handler.get_count(args.key, args.column_parent, args.predicate, args.consistency_level) except InvalidRequestException as ire: result.ire = ire except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te oprot.writeMessageBegin("get_count", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_multiget_slice(self, seqid, iprot, oprot): args = multiget_slice_args() args.read(iprot) iprot.readMessageEnd() result = multiget_slice_result() try: result.success = self._handler.multiget_slice(args.keys, args.column_parent, args.predicate, args.consistency_level) except InvalidRequestException as ire: result.ire = ire except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te oprot.writeMessageBegin("multiget_slice", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_multiget_count(self, seqid, iprot, oprot): args = multiget_count_args() args.read(iprot) iprot.readMessageEnd() result = multiget_count_result() try: result.success = self._handler.multiget_count(args.keys, args.column_parent, args.predicate, args.consistency_level) except InvalidRequestException as ire: result.ire = ire except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te oprot.writeMessageBegin("multiget_count", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_get_range_slices(self, seqid, iprot, oprot): args = get_range_slices_args() args.read(iprot) iprot.readMessageEnd() result = get_range_slices_result() try: result.success = self._handler.get_range_slices(args.column_parent, args.predicate, args.range, args.consistency_level) except InvalidRequestException as ire: result.ire = ire except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te oprot.writeMessageBegin("get_range_slices", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_get_paged_slice(self, seqid, iprot, oprot): args = get_paged_slice_args() args.read(iprot) iprot.readMessageEnd() result = get_paged_slice_result() try: result.success = self._handler.get_paged_slice(args.column_family, args.range, args.start_column, args.consistency_level) except InvalidRequestException as ire: result.ire = ire except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te oprot.writeMessageBegin("get_paged_slice", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_get_indexed_slices(self, seqid, iprot, oprot): args = get_indexed_slices_args() args.read(iprot) iprot.readMessageEnd() result = get_indexed_slices_result() try: result.success = self._handler.get_indexed_slices(args.column_parent, args.index_clause, args.column_predicate, args.consistency_level) except InvalidRequestException as ire: result.ire = ire except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te oprot.writeMessageBegin("get_indexed_slices", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_insert(self, seqid, iprot, oprot): args = insert_args() args.read(iprot) iprot.readMessageEnd() result = insert_result() try: self._handler.insert(args.key, args.column_parent, args.column, args.consistency_level) except InvalidRequestException as ire: result.ire = ire except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te oprot.writeMessageBegin("insert", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_add(self, seqid, iprot, oprot): args = add_args() args.read(iprot) iprot.readMessageEnd() result = add_result() try: self._handler.add(args.key, args.column_parent, args.column, args.consistency_level) except InvalidRequestException as ire: result.ire = ire except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te oprot.writeMessageBegin("add", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_remove(self, seqid, iprot, oprot): args = remove_args() args.read(iprot) iprot.readMessageEnd() result = remove_result() try: self._handler.remove(args.key, args.column_path, args.timestamp, args.consistency_level) except InvalidRequestException as ire: result.ire = ire except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te oprot.writeMessageBegin("remove", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_remove_counter(self, seqid, iprot, oprot): args = remove_counter_args() args.read(iprot) iprot.readMessageEnd() result = remove_counter_result() try: self._handler.remove_counter(args.key, args.path, args.consistency_level) except InvalidRequestException as ire: result.ire = ire except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te oprot.writeMessageBegin("remove_counter", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_batch_mutate(self, seqid, iprot, oprot): args = batch_mutate_args() args.read(iprot) iprot.readMessageEnd() result = batch_mutate_result() try: self._handler.batch_mutate(args.mutation_map, args.consistency_level) except InvalidRequestException as ire: result.ire = ire except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te oprot.writeMessageBegin("batch_mutate", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_atomic_batch_mutate(self, seqid, iprot, oprot): args = atomic_batch_mutate_args() args.read(iprot) iprot.readMessageEnd() result = atomic_batch_mutate_result() try: self._handler.atomic_batch_mutate(args.mutation_map, args.consistency_level) except InvalidRequestException as ire: result.ire = ire except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te oprot.writeMessageBegin("atomic_batch_mutate", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_truncate(self, seqid, iprot, oprot): args = truncate_args() args.read(iprot) iprot.readMessageEnd() result = truncate_result() try: self._handler.truncate(args.cfname) except InvalidRequestException as ire: result.ire = ire except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te oprot.writeMessageBegin("truncate", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_describe_schema_versions(self, seqid, iprot, oprot): args = describe_schema_versions_args() args.read(iprot) iprot.readMessageEnd() result = describe_schema_versions_result() try: result.success = self._handler.describe_schema_versions() except InvalidRequestException as ire: result.ire = ire oprot.writeMessageBegin("describe_schema_versions", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_describe_keyspaces(self, seqid, iprot, oprot): args = describe_keyspaces_args() args.read(iprot) iprot.readMessageEnd() result = describe_keyspaces_result() try: result.success = self._handler.describe_keyspaces() except InvalidRequestException as ire: result.ire = ire oprot.writeMessageBegin("describe_keyspaces", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_describe_cluster_name(self, seqid, iprot, oprot): args = describe_cluster_name_args() args.read(iprot) iprot.readMessageEnd() result = describe_cluster_name_result() result.success = self._handler.describe_cluster_name() oprot.writeMessageBegin("describe_cluster_name", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_describe_version(self, seqid, iprot, oprot): args = describe_version_args() args.read(iprot) iprot.readMessageEnd() result = describe_version_result() result.success = self._handler.describe_version() oprot.writeMessageBegin("describe_version", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_describe_ring(self, seqid, iprot, oprot): args = describe_ring_args() args.read(iprot) iprot.readMessageEnd() result = describe_ring_result() try: result.success = self._handler.describe_ring(args.keyspace) except InvalidRequestException as ire: result.ire = ire oprot.writeMessageBegin("describe_ring", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_describe_token_map(self, seqid, iprot, oprot): args = describe_token_map_args() args.read(iprot) iprot.readMessageEnd() result = describe_token_map_result() try: result.success = self._handler.describe_token_map() except InvalidRequestException as ire: result.ire = ire oprot.writeMessageBegin("describe_token_map", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_describe_partitioner(self, seqid, iprot, oprot): args = describe_partitioner_args() args.read(iprot) iprot.readMessageEnd() result = describe_partitioner_result() result.success = self._handler.describe_partitioner() oprot.writeMessageBegin("describe_partitioner", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_describe_snitch(self, seqid, iprot, oprot): args = describe_snitch_args() args.read(iprot) iprot.readMessageEnd() result = describe_snitch_result() result.success = self._handler.describe_snitch() oprot.writeMessageBegin("describe_snitch", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_describe_keyspace(self, seqid, iprot, oprot): args = describe_keyspace_args() args.read(iprot) iprot.readMessageEnd() result = describe_keyspace_result() try: result.success = self._handler.describe_keyspace(args.keyspace) except NotFoundException as nfe: result.nfe = nfe except InvalidRequestException as ire: result.ire = ire oprot.writeMessageBegin("describe_keyspace", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_describe_splits(self, seqid, iprot, oprot): args = describe_splits_args() args.read(iprot) iprot.readMessageEnd() result = describe_splits_result() try: result.success = self._handler.describe_splits(args.cfName, args.start_token, args.end_token, args.keys_per_split) except InvalidRequestException as ire: result.ire = ire oprot.writeMessageBegin("describe_splits", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_trace_next_query(self, seqid, iprot, oprot): args = trace_next_query_args() args.read(iprot) iprot.readMessageEnd() result = trace_next_query_result() result.success = self._handler.trace_next_query() oprot.writeMessageBegin("trace_next_query", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_describe_splits_ex(self, seqid, iprot, oprot): args = describe_splits_ex_args() args.read(iprot) iprot.readMessageEnd() result = describe_splits_ex_result() try: result.success = self._handler.describe_splits_ex(args.cfName, args.start_token, args.end_token, args.keys_per_split) except InvalidRequestException as ire: result.ire = ire oprot.writeMessageBegin("describe_splits_ex", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_system_add_column_family(self, seqid, iprot, oprot): args = system_add_column_family_args() args.read(iprot) iprot.readMessageEnd() result = system_add_column_family_result() try: result.success = self._handler.system_add_column_family(args.cf_def) except InvalidRequestException as ire: result.ire = ire except SchemaDisagreementException as sde: result.sde = sde oprot.writeMessageBegin("system_add_column_family", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_system_drop_column_family(self, seqid, iprot, oprot): args = system_drop_column_family_args() args.read(iprot) iprot.readMessageEnd() result = system_drop_column_family_result() try: result.success = self._handler.system_drop_column_family(args.column_family) except InvalidRequestException as ire: result.ire = ire except SchemaDisagreementException as sde: result.sde = sde oprot.writeMessageBegin("system_drop_column_family", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_system_add_keyspace(self, seqid, iprot, oprot): args = system_add_keyspace_args() args.read(iprot) iprot.readMessageEnd() result = system_add_keyspace_result() try: result.success = self._handler.system_add_keyspace(args.ks_def) except InvalidRequestException as ire: result.ire = ire except SchemaDisagreementException as sde: result.sde = sde oprot.writeMessageBegin("system_add_keyspace", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_system_drop_keyspace(self, seqid, iprot, oprot): args = system_drop_keyspace_args() args.read(iprot) iprot.readMessageEnd() result = system_drop_keyspace_result() try: result.success = self._handler.system_drop_keyspace(args.keyspace) except InvalidRequestException as ire: result.ire = ire except SchemaDisagreementException as sde: result.sde = sde oprot.writeMessageBegin("system_drop_keyspace", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_system_update_keyspace(self, seqid, iprot, oprot): args = system_update_keyspace_args() args.read(iprot) iprot.readMessageEnd() result = system_update_keyspace_result() try: result.success = self._handler.system_update_keyspace(args.ks_def) except InvalidRequestException as ire: result.ire = ire except SchemaDisagreementException as sde: result.sde = sde oprot.writeMessageBegin("system_update_keyspace", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_system_update_column_family(self, seqid, iprot, oprot): args = system_update_column_family_args() args.read(iprot) iprot.readMessageEnd() result = system_update_column_family_result() try: result.success = self._handler.system_update_column_family(args.cf_def) except InvalidRequestException as ire: result.ire = ire except SchemaDisagreementException as sde: result.sde = sde oprot.writeMessageBegin("system_update_column_family", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_execute_cql_query(self, seqid, iprot, oprot): args = execute_cql_query_args() args.read(iprot) iprot.readMessageEnd() result = execute_cql_query_result() try: result.success = self._handler.execute_cql_query(args.query, args.compression) except InvalidRequestException as ire: result.ire = ire except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te except SchemaDisagreementException as sde: result.sde = sde oprot.writeMessageBegin("execute_cql_query", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_execute_cql3_query(self, seqid, iprot, oprot): args = execute_cql3_query_args() args.read(iprot) iprot.readMessageEnd() result = execute_cql3_query_result() try: result.success = self._handler.execute_cql3_query(args.query, args.compression, args.consistency) except InvalidRequestException as ire: result.ire = ire except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te except SchemaDisagreementException as sde: result.sde = sde oprot.writeMessageBegin("execute_cql3_query", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_prepare_cql_query(self, seqid, iprot, oprot): args = prepare_cql_query_args() args.read(iprot) iprot.readMessageEnd() result = prepare_cql_query_result() try: result.success = self._handler.prepare_cql_query(args.query, args.compression) except InvalidRequestException as ire: result.ire = ire oprot.writeMessageBegin("prepare_cql_query", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_prepare_cql3_query(self, seqid, iprot, oprot): args = prepare_cql3_query_args() args.read(iprot) iprot.readMessageEnd() result = prepare_cql3_query_result() try: result.success = self._handler.prepare_cql3_query(args.query, args.compression) except InvalidRequestException as ire: result.ire = ire oprot.writeMessageBegin("prepare_cql3_query", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_execute_prepared_cql_query(self, seqid, iprot, oprot): args = execute_prepared_cql_query_args() args.read(iprot) iprot.readMessageEnd() result = execute_prepared_cql_query_result() try: result.success = self._handler.execute_prepared_cql_query(args.itemId, args.values) except InvalidRequestException as ire: result.ire = ire except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te except SchemaDisagreementException as sde: result.sde = sde oprot.writeMessageBegin("execute_prepared_cql_query", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_execute_prepared_cql3_query(self, seqid, iprot, oprot): args = execute_prepared_cql3_query_args() args.read(iprot) iprot.readMessageEnd() result = execute_prepared_cql3_query_result() try: result.success = self._handler.execute_prepared_cql3_query(args.itemId, args.values, args.consistency) except InvalidRequestException as ire: result.ire = ire except UnavailableException as ue: result.ue = ue except TimedOutException as te: result.te = te except SchemaDisagreementException as sde: result.sde = sde oprot.writeMessageBegin("execute_prepared_cql3_query", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() def process_set_cql_version(self, seqid, iprot, oprot): args = set_cql_version_args() args.read(iprot) iprot.readMessageEnd() result = set_cql_version_result() try: self._handler.set_cql_version(args.version) except InvalidRequestException as ire: result.ire = ire oprot.writeMessageBegin("set_cql_version", TMessageType.REPLY, seqid) result.write(oprot) oprot.writeMessageEnd() oprot.trans.flush() # HELPER FUNCTIONS AND STRUCTURES class login_args(object): """ Attributes: - auth_request """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'auth_request', (AuthenticationRequest, AuthenticationRequest.thrift_spec), None, ), # 1 ) def __init__(self, auth_request=None,): self.auth_request = auth_request def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.auth_request = AuthenticationRequest() self.auth_request.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('login_args') if self.auth_request is not None: oprot.writeFieldBegin('auth_request', TType.STRUCT, 1) self.auth_request.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.auth_request is None: raise TProtocol.TProtocolException(message='Required field auth_request is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class login_result(object): """ Attributes: - authnx - authzx """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'authnx', (AuthenticationException, AuthenticationException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'authzx', (AuthorizationException, AuthorizationException.thrift_spec), None, ), # 2 ) def __init__(self, authnx=None, authzx=None,): self.authnx = authnx self.authzx = authzx def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.authnx = AuthenticationException() self.authnx.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.authzx = AuthorizationException() self.authzx.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('login_result') if self.authnx is not None: oprot.writeFieldBegin('authnx', TType.STRUCT, 1) self.authnx.write(oprot) oprot.writeFieldEnd() if self.authzx is not None: oprot.writeFieldBegin('authzx', TType.STRUCT, 2) self.authzx.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class set_keyspace_args(object): """ Attributes: - keyspace """ thrift_spec = ( None, # 0 (1, TType.STRING, 'keyspace', None, None, ), # 1 ) def __init__(self, keyspace=None,): self.keyspace = keyspace def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.keyspace = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('set_keyspace_args') if self.keyspace is not None: oprot.writeFieldBegin('keyspace', TType.STRING, 1) oprot.writeString(self.keyspace) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.keyspace is None: raise TProtocol.TProtocolException(message='Required field keyspace is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class set_keyspace_result(object): """ Attributes: - ire """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 ) def __init__(self, ire=None,): self.ire = ire def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('set_keyspace_result') if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class get_args(object): """ Attributes: - key - column_path - consistency_level """ thrift_spec = ( None, # 0 (1, TType.STRING, 'key', None, None, ), # 1 (2, TType.STRUCT, 'column_path', (ColumnPath, ColumnPath.thrift_spec), None, ), # 2 (3, TType.I32, 'consistency_level', None, 1, ), # 3 ) def __init__(self, key=None, column_path=None, consistency_level=thrift_spec[3][4],): self.key = key self.column_path = column_path self.consistency_level = consistency_level def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.key = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.column_path = ColumnPath() self.column_path.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.I32: self.consistency_level = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('get_args') if self.key is not None: oprot.writeFieldBegin('key', TType.STRING, 1) oprot.writeString(self.key) oprot.writeFieldEnd() if self.column_path is not None: oprot.writeFieldBegin('column_path', TType.STRUCT, 2) self.column_path.write(oprot) oprot.writeFieldEnd() if self.consistency_level is not None: oprot.writeFieldBegin('consistency_level', TType.I32, 3) oprot.writeI32(self.consistency_level) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.key is None: raise TProtocol.TProtocolException(message='Required field key is unset!') if self.column_path is None: raise TProtocol.TProtocolException(message='Required field column_path is unset!') if self.consistency_level is None: raise TProtocol.TProtocolException(message='Required field consistency_level is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class get_result(object): """ Attributes: - success - ire - nfe - ue - te """ thrift_spec = ( (0, TType.STRUCT, 'success', (ColumnOrSuperColumn, ColumnOrSuperColumn.thrift_spec), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'nfe', (NotFoundException, NotFoundException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 3 (4, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 4 ) def __init__(self, success=None, ire=None, nfe=None, ue=None, te=None,): self.success = success self.ire = ire self.nfe = nfe self.ue = ue self.te = te def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRUCT: self.success = ColumnOrSuperColumn() self.success.read(iprot) else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.nfe = NotFoundException() self.nfe.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 4: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('get_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRUCT, 0) self.success.write(oprot) oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.nfe is not None: oprot.writeFieldBegin('nfe', TType.STRUCT, 2) self.nfe.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 3) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 4) self.te.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class get_slice_args(object): """ Attributes: - key - column_parent - predicate - consistency_level """ thrift_spec = ( None, # 0 (1, TType.STRING, 'key', None, None, ), # 1 (2, TType.STRUCT, 'column_parent', (ColumnParent, ColumnParent.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'predicate', (SlicePredicate, SlicePredicate.thrift_spec), None, ), # 3 (4, TType.I32, 'consistency_level', None, 1, ), # 4 ) def __init__(self, key=None, column_parent=None, predicate=None, consistency_level=thrift_spec[4][4],): self.key = key self.column_parent = column_parent self.predicate = predicate self.consistency_level = consistency_level def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.key = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.column_parent = ColumnParent() self.column_parent.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.predicate = SlicePredicate() self.predicate.read(iprot) else: iprot.skip(ftype) elif fid == 4: if ftype == TType.I32: self.consistency_level = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('get_slice_args') if self.key is not None: oprot.writeFieldBegin('key', TType.STRING, 1) oprot.writeString(self.key) oprot.writeFieldEnd() if self.column_parent is not None: oprot.writeFieldBegin('column_parent', TType.STRUCT, 2) self.column_parent.write(oprot) oprot.writeFieldEnd() if self.predicate is not None: oprot.writeFieldBegin('predicate', TType.STRUCT, 3) self.predicate.write(oprot) oprot.writeFieldEnd() if self.consistency_level is not None: oprot.writeFieldBegin('consistency_level', TType.I32, 4) oprot.writeI32(self.consistency_level) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.key is None: raise TProtocol.TProtocolException(message='Required field key is unset!') if self.column_parent is None: raise TProtocol.TProtocolException(message='Required field column_parent is unset!') if self.predicate is None: raise TProtocol.TProtocolException(message='Required field predicate is unset!') if self.consistency_level is None: raise TProtocol.TProtocolException(message='Required field consistency_level is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class get_slice_result(object): """ Attributes: - success - ire - ue - te """ thrift_spec = ( (0, TType.LIST, 'success', (TType.STRUCT,(ColumnOrSuperColumn, ColumnOrSuperColumn.thrift_spec)), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 3 ) def __init__(self, success=None, ire=None, ue=None, te=None,): self.success = success self.ire = ire self.ue = ue self.te = te def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.LIST: self.success = [] (_etype171, _size168) = iprot.readListBegin() for _i172 in xrange(_size168): _elem173 = ColumnOrSuperColumn() _elem173.read(iprot) self.success.append(_elem173) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('get_slice_result') if self.success is not None: oprot.writeFieldBegin('success', TType.LIST, 0) oprot.writeListBegin(TType.STRUCT, len(self.success)) for iter174 in self.success: iter174.write(oprot) oprot.writeListEnd() oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 2) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 3) self.te.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class get_count_args(object): """ Attributes: - key - column_parent - predicate - consistency_level """ thrift_spec = ( None, # 0 (1, TType.STRING, 'key', None, None, ), # 1 (2, TType.STRUCT, 'column_parent', (ColumnParent, ColumnParent.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'predicate', (SlicePredicate, SlicePredicate.thrift_spec), None, ), # 3 (4, TType.I32, 'consistency_level', None, 1, ), # 4 ) def __init__(self, key=None, column_parent=None, predicate=None, consistency_level=thrift_spec[4][4],): self.key = key self.column_parent = column_parent self.predicate = predicate self.consistency_level = consistency_level def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.key = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.column_parent = ColumnParent() self.column_parent.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.predicate = SlicePredicate() self.predicate.read(iprot) else: iprot.skip(ftype) elif fid == 4: if ftype == TType.I32: self.consistency_level = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('get_count_args') if self.key is not None: oprot.writeFieldBegin('key', TType.STRING, 1) oprot.writeString(self.key) oprot.writeFieldEnd() if self.column_parent is not None: oprot.writeFieldBegin('column_parent', TType.STRUCT, 2) self.column_parent.write(oprot) oprot.writeFieldEnd() if self.predicate is not None: oprot.writeFieldBegin('predicate', TType.STRUCT, 3) self.predicate.write(oprot) oprot.writeFieldEnd() if self.consistency_level is not None: oprot.writeFieldBegin('consistency_level', TType.I32, 4) oprot.writeI32(self.consistency_level) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.key is None: raise TProtocol.TProtocolException(message='Required field key is unset!') if self.column_parent is None: raise TProtocol.TProtocolException(message='Required field column_parent is unset!') if self.predicate is None: raise TProtocol.TProtocolException(message='Required field predicate is unset!') if self.consistency_level is None: raise TProtocol.TProtocolException(message='Required field consistency_level is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class get_count_result(object): """ Attributes: - success - ire - ue - te """ thrift_spec = ( (0, TType.I32, 'success', None, None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 3 ) def __init__(self, success=None, ire=None, ue=None, te=None,): self.success = success self.ire = ire self.ue = ue self.te = te def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.I32: self.success = iprot.readI32(); else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('get_count_result') if self.success is not None: oprot.writeFieldBegin('success', TType.I32, 0) oprot.writeI32(self.success) oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 2) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 3) self.te.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class multiget_slice_args(object): """ Attributes: - keys - column_parent - predicate - consistency_level """ thrift_spec = ( None, # 0 (1, TType.LIST, 'keys', (TType.STRING,None), None, ), # 1 (2, TType.STRUCT, 'column_parent', (ColumnParent, ColumnParent.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'predicate', (SlicePredicate, SlicePredicate.thrift_spec), None, ), # 3 (4, TType.I32, 'consistency_level', None, 1, ), # 4 ) def __init__(self, keys=None, column_parent=None, predicate=None, consistency_level=thrift_spec[4][4],): self.keys = keys self.column_parent = column_parent self.predicate = predicate self.consistency_level = consistency_level def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.LIST: self.keys = [] (_etype178, _size175) = iprot.readListBegin() for _i179 in xrange(_size175): _elem180 = iprot.readString(); self.keys.append(_elem180) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.column_parent = ColumnParent() self.column_parent.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.predicate = SlicePredicate() self.predicate.read(iprot) else: iprot.skip(ftype) elif fid == 4: if ftype == TType.I32: self.consistency_level = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('multiget_slice_args') if self.keys is not None: oprot.writeFieldBegin('keys', TType.LIST, 1) oprot.writeListBegin(TType.STRING, len(self.keys)) for iter181 in self.keys: oprot.writeString(iter181) oprot.writeListEnd() oprot.writeFieldEnd() if self.column_parent is not None: oprot.writeFieldBegin('column_parent', TType.STRUCT, 2) self.column_parent.write(oprot) oprot.writeFieldEnd() if self.predicate is not None: oprot.writeFieldBegin('predicate', TType.STRUCT, 3) self.predicate.write(oprot) oprot.writeFieldEnd() if self.consistency_level is not None: oprot.writeFieldBegin('consistency_level', TType.I32, 4) oprot.writeI32(self.consistency_level) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.keys is None: raise TProtocol.TProtocolException(message='Required field keys is unset!') if self.column_parent is None: raise TProtocol.TProtocolException(message='Required field column_parent is unset!') if self.predicate is None: raise TProtocol.TProtocolException(message='Required field predicate is unset!') if self.consistency_level is None: raise TProtocol.TProtocolException(message='Required field consistency_level is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class multiget_slice_result(object): """ Attributes: - success - ire - ue - te """ thrift_spec = ( (0, TType.MAP, 'success', (TType.STRING,None,TType.LIST,(TType.STRUCT,(ColumnOrSuperColumn, ColumnOrSuperColumn.thrift_spec))), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 3 ) def __init__(self, success=None, ire=None, ue=None, te=None,): self.success = success self.ire = ire self.ue = ue self.te = te def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.MAP: self.success = {} (_ktype183, _vtype184, _size182 ) = iprot.readMapBegin() for _i186 in xrange(_size182): _key187 = iprot.readString(); _val188 = [] (_etype192, _size189) = iprot.readListBegin() for _i193 in xrange(_size189): _elem194 = ColumnOrSuperColumn() _elem194.read(iprot) _val188.append(_elem194) iprot.readListEnd() self.success[_key187] = _val188 iprot.readMapEnd() else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('multiget_slice_result') if self.success is not None: oprot.writeFieldBegin('success', TType.MAP, 0) oprot.writeMapBegin(TType.STRING, TType.LIST, len(self.success)) for kiter195,viter196 in self.success.items(): oprot.writeString(kiter195) oprot.writeListBegin(TType.STRUCT, len(viter196)) for iter197 in viter196: iter197.write(oprot) oprot.writeListEnd() oprot.writeMapEnd() oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 2) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 3) self.te.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class multiget_count_args(object): """ Attributes: - keys - column_parent - predicate - consistency_level """ thrift_spec = ( None, # 0 (1, TType.LIST, 'keys', (TType.STRING,None), None, ), # 1 (2, TType.STRUCT, 'column_parent', (ColumnParent, ColumnParent.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'predicate', (SlicePredicate, SlicePredicate.thrift_spec), None, ), # 3 (4, TType.I32, 'consistency_level', None, 1, ), # 4 ) def __init__(self, keys=None, column_parent=None, predicate=None, consistency_level=thrift_spec[4][4],): self.keys = keys self.column_parent = column_parent self.predicate = predicate self.consistency_level = consistency_level def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.LIST: self.keys = [] (_etype201, _size198) = iprot.readListBegin() for _i202 in xrange(_size198): _elem203 = iprot.readString(); self.keys.append(_elem203) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.column_parent = ColumnParent() self.column_parent.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.predicate = SlicePredicate() self.predicate.read(iprot) else: iprot.skip(ftype) elif fid == 4: if ftype == TType.I32: self.consistency_level = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('multiget_count_args') if self.keys is not None: oprot.writeFieldBegin('keys', TType.LIST, 1) oprot.writeListBegin(TType.STRING, len(self.keys)) for iter204 in self.keys: oprot.writeString(iter204) oprot.writeListEnd() oprot.writeFieldEnd() if self.column_parent is not None: oprot.writeFieldBegin('column_parent', TType.STRUCT, 2) self.column_parent.write(oprot) oprot.writeFieldEnd() if self.predicate is not None: oprot.writeFieldBegin('predicate', TType.STRUCT, 3) self.predicate.write(oprot) oprot.writeFieldEnd() if self.consistency_level is not None: oprot.writeFieldBegin('consistency_level', TType.I32, 4) oprot.writeI32(self.consistency_level) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.keys is None: raise TProtocol.TProtocolException(message='Required field keys is unset!') if self.column_parent is None: raise TProtocol.TProtocolException(message='Required field column_parent is unset!') if self.predicate is None: raise TProtocol.TProtocolException(message='Required field predicate is unset!') if self.consistency_level is None: raise TProtocol.TProtocolException(message='Required field consistency_level is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class multiget_count_result(object): """ Attributes: - success - ire - ue - te """ thrift_spec = ( (0, TType.MAP, 'success', (TType.STRING,None,TType.I32,None), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 3 ) def __init__(self, success=None, ire=None, ue=None, te=None,): self.success = success self.ire = ire self.ue = ue self.te = te def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.MAP: self.success = {} (_ktype206, _vtype207, _size205 ) = iprot.readMapBegin() for _i209 in xrange(_size205): _key210 = iprot.readString(); _val211 = iprot.readI32(); self.success[_key210] = _val211 iprot.readMapEnd() else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('multiget_count_result') if self.success is not None: oprot.writeFieldBegin('success', TType.MAP, 0) oprot.writeMapBegin(TType.STRING, TType.I32, len(self.success)) for kiter212,viter213 in self.success.items(): oprot.writeString(kiter212) oprot.writeI32(viter213) oprot.writeMapEnd() oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 2) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 3) self.te.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class get_range_slices_args(object): """ Attributes: - column_parent - predicate - range - consistency_level """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'column_parent', (ColumnParent, ColumnParent.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'predicate', (SlicePredicate, SlicePredicate.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'range', (KeyRange, KeyRange.thrift_spec), None, ), # 3 (4, TType.I32, 'consistency_level', None, 1, ), # 4 ) def __init__(self, column_parent=None, predicate=None, range=None, consistency_level=thrift_spec[4][4],): self.column_parent = column_parent self.predicate = predicate self.range = range self.consistency_level = consistency_level def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.column_parent = ColumnParent() self.column_parent.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.predicate = SlicePredicate() self.predicate.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.range = KeyRange() self.range.read(iprot) else: iprot.skip(ftype) elif fid == 4: if ftype == TType.I32: self.consistency_level = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('get_range_slices_args') if self.column_parent is not None: oprot.writeFieldBegin('column_parent', TType.STRUCT, 1) self.column_parent.write(oprot) oprot.writeFieldEnd() if self.predicate is not None: oprot.writeFieldBegin('predicate', TType.STRUCT, 2) self.predicate.write(oprot) oprot.writeFieldEnd() if self.range is not None: oprot.writeFieldBegin('range', TType.STRUCT, 3) self.range.write(oprot) oprot.writeFieldEnd() if self.consistency_level is not None: oprot.writeFieldBegin('consistency_level', TType.I32, 4) oprot.writeI32(self.consistency_level) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.column_parent is None: raise TProtocol.TProtocolException(message='Required field column_parent is unset!') if self.predicate is None: raise TProtocol.TProtocolException(message='Required field predicate is unset!') if self.range is None: raise TProtocol.TProtocolException(message='Required field range is unset!') if self.consistency_level is None: raise TProtocol.TProtocolException(message='Required field consistency_level is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class get_range_slices_result(object): """ Attributes: - success - ire - ue - te """ thrift_spec = ( (0, TType.LIST, 'success', (TType.STRUCT,(KeySlice, KeySlice.thrift_spec)), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 3 ) def __init__(self, success=None, ire=None, ue=None, te=None,): self.success = success self.ire = ire self.ue = ue self.te = te def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.LIST: self.success = [] (_etype217, _size214) = iprot.readListBegin() for _i218 in xrange(_size214): _elem219 = KeySlice() _elem219.read(iprot) self.success.append(_elem219) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('get_range_slices_result') if self.success is not None: oprot.writeFieldBegin('success', TType.LIST, 0) oprot.writeListBegin(TType.STRUCT, len(self.success)) for iter220 in self.success: iter220.write(oprot) oprot.writeListEnd() oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 2) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 3) self.te.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class get_paged_slice_args(object): """ Attributes: - column_family - range - start_column - consistency_level """ thrift_spec = ( None, # 0 (1, TType.STRING, 'column_family', None, None, ), # 1 (2, TType.STRUCT, 'range', (KeyRange, KeyRange.thrift_spec), None, ), # 2 (3, TType.STRING, 'start_column', None, None, ), # 3 (4, TType.I32, 'consistency_level', None, 1, ), # 4 ) def __init__(self, column_family=None, range=None, start_column=None, consistency_level=thrift_spec[4][4],): self.column_family = column_family self.range = range self.start_column = start_column self.consistency_level = consistency_level def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.column_family = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.range = KeyRange() self.range.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRING: self.start_column = iprot.readString(); else: iprot.skip(ftype) elif fid == 4: if ftype == TType.I32: self.consistency_level = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('get_paged_slice_args') if self.column_family is not None: oprot.writeFieldBegin('column_family', TType.STRING, 1) oprot.writeString(self.column_family) oprot.writeFieldEnd() if self.range is not None: oprot.writeFieldBegin('range', TType.STRUCT, 2) self.range.write(oprot) oprot.writeFieldEnd() if self.start_column is not None: oprot.writeFieldBegin('start_column', TType.STRING, 3) oprot.writeString(self.start_column) oprot.writeFieldEnd() if self.consistency_level is not None: oprot.writeFieldBegin('consistency_level', TType.I32, 4) oprot.writeI32(self.consistency_level) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.column_family is None: raise TProtocol.TProtocolException(message='Required field column_family is unset!') if self.range is None: raise TProtocol.TProtocolException(message='Required field range is unset!') if self.start_column is None: raise TProtocol.TProtocolException(message='Required field start_column is unset!') if self.consistency_level is None: raise TProtocol.TProtocolException(message='Required field consistency_level is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class get_paged_slice_result(object): """ Attributes: - success - ire - ue - te """ thrift_spec = ( (0, TType.LIST, 'success', (TType.STRUCT,(KeySlice, KeySlice.thrift_spec)), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 3 ) def __init__(self, success=None, ire=None, ue=None, te=None,): self.success = success self.ire = ire self.ue = ue self.te = te def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.LIST: self.success = [] (_etype224, _size221) = iprot.readListBegin() for _i225 in xrange(_size221): _elem226 = KeySlice() _elem226.read(iprot) self.success.append(_elem226) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('get_paged_slice_result') if self.success is not None: oprot.writeFieldBegin('success', TType.LIST, 0) oprot.writeListBegin(TType.STRUCT, len(self.success)) for iter227 in self.success: iter227.write(oprot) oprot.writeListEnd() oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 2) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 3) self.te.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class get_indexed_slices_args(object): """ Attributes: - column_parent - index_clause - column_predicate - consistency_level """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'column_parent', (ColumnParent, ColumnParent.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'index_clause', (IndexClause, IndexClause.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'column_predicate', (SlicePredicate, SlicePredicate.thrift_spec), None, ), # 3 (4, TType.I32, 'consistency_level', None, 1, ), # 4 ) def __init__(self, column_parent=None, index_clause=None, column_predicate=None, consistency_level=thrift_spec[4][4],): self.column_parent = column_parent self.index_clause = index_clause self.column_predicate = column_predicate self.consistency_level = consistency_level def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.column_parent = ColumnParent() self.column_parent.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.index_clause = IndexClause() self.index_clause.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.column_predicate = SlicePredicate() self.column_predicate.read(iprot) else: iprot.skip(ftype) elif fid == 4: if ftype == TType.I32: self.consistency_level = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('get_indexed_slices_args') if self.column_parent is not None: oprot.writeFieldBegin('column_parent', TType.STRUCT, 1) self.column_parent.write(oprot) oprot.writeFieldEnd() if self.index_clause is not None: oprot.writeFieldBegin('index_clause', TType.STRUCT, 2) self.index_clause.write(oprot) oprot.writeFieldEnd() if self.column_predicate is not None: oprot.writeFieldBegin('column_predicate', TType.STRUCT, 3) self.column_predicate.write(oprot) oprot.writeFieldEnd() if self.consistency_level is not None: oprot.writeFieldBegin('consistency_level', TType.I32, 4) oprot.writeI32(self.consistency_level) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.column_parent is None: raise TProtocol.TProtocolException(message='Required field column_parent is unset!') if self.index_clause is None: raise TProtocol.TProtocolException(message='Required field index_clause is unset!') if self.column_predicate is None: raise TProtocol.TProtocolException(message='Required field column_predicate is unset!') if self.consistency_level is None: raise TProtocol.TProtocolException(message='Required field consistency_level is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class get_indexed_slices_result(object): """ Attributes: - success - ire - ue - te """ thrift_spec = ( (0, TType.LIST, 'success', (TType.STRUCT,(KeySlice, KeySlice.thrift_spec)), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 3 ) def __init__(self, success=None, ire=None, ue=None, te=None,): self.success = success self.ire = ire self.ue = ue self.te = te def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.LIST: self.success = [] (_etype231, _size228) = iprot.readListBegin() for _i232 in xrange(_size228): _elem233 = KeySlice() _elem233.read(iprot) self.success.append(_elem233) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('get_indexed_slices_result') if self.success is not None: oprot.writeFieldBegin('success', TType.LIST, 0) oprot.writeListBegin(TType.STRUCT, len(self.success)) for iter234 in self.success: iter234.write(oprot) oprot.writeListEnd() oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 2) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 3) self.te.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class insert_args(object): """ Attributes: - key - column_parent - column - consistency_level """ thrift_spec = ( None, # 0 (1, TType.STRING, 'key', None, None, ), # 1 (2, TType.STRUCT, 'column_parent', (ColumnParent, ColumnParent.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'column', (Column, Column.thrift_spec), None, ), # 3 (4, TType.I32, 'consistency_level', None, 1, ), # 4 ) def __init__(self, key=None, column_parent=None, column=None, consistency_level=thrift_spec[4][4],): self.key = key self.column_parent = column_parent self.column = column self.consistency_level = consistency_level def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.key = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.column_parent = ColumnParent() self.column_parent.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.column = Column() self.column.read(iprot) else: iprot.skip(ftype) elif fid == 4: if ftype == TType.I32: self.consistency_level = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('insert_args') if self.key is not None: oprot.writeFieldBegin('key', TType.STRING, 1) oprot.writeString(self.key) oprot.writeFieldEnd() if self.column_parent is not None: oprot.writeFieldBegin('column_parent', TType.STRUCT, 2) self.column_parent.write(oprot) oprot.writeFieldEnd() if self.column is not None: oprot.writeFieldBegin('column', TType.STRUCT, 3) self.column.write(oprot) oprot.writeFieldEnd() if self.consistency_level is not None: oprot.writeFieldBegin('consistency_level', TType.I32, 4) oprot.writeI32(self.consistency_level) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.key is None: raise TProtocol.TProtocolException(message='Required field key is unset!') if self.column_parent is None: raise TProtocol.TProtocolException(message='Required field column_parent is unset!') if self.column is None: raise TProtocol.TProtocolException(message='Required field column is unset!') if self.consistency_level is None: raise TProtocol.TProtocolException(message='Required field consistency_level is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class insert_result(object): """ Attributes: - ire - ue - te """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 3 ) def __init__(self, ire=None, ue=None, te=None,): self.ire = ire self.ue = ue self.te = te def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('insert_result') if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 2) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 3) self.te.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class add_args(object): """ Attributes: - key - column_parent - column - consistency_level """ thrift_spec = ( None, # 0 (1, TType.STRING, 'key', None, None, ), # 1 (2, TType.STRUCT, 'column_parent', (ColumnParent, ColumnParent.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'column', (CounterColumn, CounterColumn.thrift_spec), None, ), # 3 (4, TType.I32, 'consistency_level', None, 1, ), # 4 ) def __init__(self, key=None, column_parent=None, column=None, consistency_level=thrift_spec[4][4],): self.key = key self.column_parent = column_parent self.column = column self.consistency_level = consistency_level def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.key = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.column_parent = ColumnParent() self.column_parent.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.column = CounterColumn() self.column.read(iprot) else: iprot.skip(ftype) elif fid == 4: if ftype == TType.I32: self.consistency_level = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('add_args') if self.key is not None: oprot.writeFieldBegin('key', TType.STRING, 1) oprot.writeString(self.key) oprot.writeFieldEnd() if self.column_parent is not None: oprot.writeFieldBegin('column_parent', TType.STRUCT, 2) self.column_parent.write(oprot) oprot.writeFieldEnd() if self.column is not None: oprot.writeFieldBegin('column', TType.STRUCT, 3) self.column.write(oprot) oprot.writeFieldEnd() if self.consistency_level is not None: oprot.writeFieldBegin('consistency_level', TType.I32, 4) oprot.writeI32(self.consistency_level) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.key is None: raise TProtocol.TProtocolException(message='Required field key is unset!') if self.column_parent is None: raise TProtocol.TProtocolException(message='Required field column_parent is unset!') if self.column is None: raise TProtocol.TProtocolException(message='Required field column is unset!') if self.consistency_level is None: raise TProtocol.TProtocolException(message='Required field consistency_level is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class add_result(object): """ Attributes: - ire - ue - te """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 3 ) def __init__(self, ire=None, ue=None, te=None,): self.ire = ire self.ue = ue self.te = te def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('add_result') if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 2) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 3) self.te.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class remove_args(object): """ Attributes: - key - column_path - timestamp - consistency_level """ thrift_spec = ( None, # 0 (1, TType.STRING, 'key', None, None, ), # 1 (2, TType.STRUCT, 'column_path', (ColumnPath, ColumnPath.thrift_spec), None, ), # 2 (3, TType.I64, 'timestamp', None, None, ), # 3 (4, TType.I32, 'consistency_level', None, 1, ), # 4 ) def __init__(self, key=None, column_path=None, timestamp=None, consistency_level=thrift_spec[4][4],): self.key = key self.column_path = column_path self.timestamp = timestamp self.consistency_level = consistency_level def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.key = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.column_path = ColumnPath() self.column_path.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.I64: self.timestamp = iprot.readI64(); else: iprot.skip(ftype) elif fid == 4: if ftype == TType.I32: self.consistency_level = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('remove_args') if self.key is not None: oprot.writeFieldBegin('key', TType.STRING, 1) oprot.writeString(self.key) oprot.writeFieldEnd() if self.column_path is not None: oprot.writeFieldBegin('column_path', TType.STRUCT, 2) self.column_path.write(oprot) oprot.writeFieldEnd() if self.timestamp is not None: oprot.writeFieldBegin('timestamp', TType.I64, 3) oprot.writeI64(self.timestamp) oprot.writeFieldEnd() if self.consistency_level is not None: oprot.writeFieldBegin('consistency_level', TType.I32, 4) oprot.writeI32(self.consistency_level) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.key is None: raise TProtocol.TProtocolException(message='Required field key is unset!') if self.column_path is None: raise TProtocol.TProtocolException(message='Required field column_path is unset!') if self.timestamp is None: raise TProtocol.TProtocolException(message='Required field timestamp is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class remove_result(object): """ Attributes: - ire - ue - te """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 3 ) def __init__(self, ire=None, ue=None, te=None,): self.ire = ire self.ue = ue self.te = te def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('remove_result') if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 2) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 3) self.te.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class remove_counter_args(object): """ Attributes: - key - path - consistency_level """ thrift_spec = ( None, # 0 (1, TType.STRING, 'key', None, None, ), # 1 (2, TType.STRUCT, 'path', (ColumnPath, ColumnPath.thrift_spec), None, ), # 2 (3, TType.I32, 'consistency_level', None, 1, ), # 3 ) def __init__(self, key=None, path=None, consistency_level=thrift_spec[3][4],): self.key = key self.path = path self.consistency_level = consistency_level def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.key = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.path = ColumnPath() self.path.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.I32: self.consistency_level = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('remove_counter_args') if self.key is not None: oprot.writeFieldBegin('key', TType.STRING, 1) oprot.writeString(self.key) oprot.writeFieldEnd() if self.path is not None: oprot.writeFieldBegin('path', TType.STRUCT, 2) self.path.write(oprot) oprot.writeFieldEnd() if self.consistency_level is not None: oprot.writeFieldBegin('consistency_level', TType.I32, 3) oprot.writeI32(self.consistency_level) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.key is None: raise TProtocol.TProtocolException(message='Required field key is unset!') if self.path is None: raise TProtocol.TProtocolException(message='Required field path is unset!') if self.consistency_level is None: raise TProtocol.TProtocolException(message='Required field consistency_level is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class remove_counter_result(object): """ Attributes: - ire - ue - te """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 3 ) def __init__(self, ire=None, ue=None, te=None,): self.ire = ire self.ue = ue self.te = te def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('remove_counter_result') if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 2) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 3) self.te.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class batch_mutate_args(object): """ Attributes: - mutation_map - consistency_level """ thrift_spec = ( None, # 0 (1, TType.MAP, 'mutation_map', (TType.STRING,None,TType.MAP,(TType.STRING,None,TType.LIST,(TType.STRUCT,(Mutation, Mutation.thrift_spec)))), None, ), # 1 (2, TType.I32, 'consistency_level', None, 1, ), # 2 ) def __init__(self, mutation_map=None, consistency_level=thrift_spec[2][4],): self.mutation_map = mutation_map self.consistency_level = consistency_level def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.MAP: self.mutation_map = {} (_ktype236, _vtype237, _size235 ) = iprot.readMapBegin() for _i239 in xrange(_size235): _key240 = iprot.readString(); _val241 = {} (_ktype243, _vtype244, _size242 ) = iprot.readMapBegin() for _i246 in xrange(_size242): _key247 = iprot.readString(); _val248 = [] (_etype252, _size249) = iprot.readListBegin() for _i253 in xrange(_size249): _elem254 = Mutation() _elem254.read(iprot) _val248.append(_elem254) iprot.readListEnd() _val241[_key247] = _val248 iprot.readMapEnd() self.mutation_map[_key240] = _val241 iprot.readMapEnd() else: iprot.skip(ftype) elif fid == 2: if ftype == TType.I32: self.consistency_level = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('batch_mutate_args') if self.mutation_map is not None: oprot.writeFieldBegin('mutation_map', TType.MAP, 1) oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.mutation_map)) for kiter255,viter256 in self.mutation_map.items(): oprot.writeString(kiter255) oprot.writeMapBegin(TType.STRING, TType.LIST, len(viter256)) for kiter257,viter258 in viter256.items(): oprot.writeString(kiter257) oprot.writeListBegin(TType.STRUCT, len(viter258)) for iter259 in viter258: iter259.write(oprot) oprot.writeListEnd() oprot.writeMapEnd() oprot.writeMapEnd() oprot.writeFieldEnd() if self.consistency_level is not None: oprot.writeFieldBegin('consistency_level', TType.I32, 2) oprot.writeI32(self.consistency_level) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.mutation_map is None: raise TProtocol.TProtocolException(message='Required field mutation_map is unset!') if self.consistency_level is None: raise TProtocol.TProtocolException(message='Required field consistency_level is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class batch_mutate_result(object): """ Attributes: - ire - ue - te """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 3 ) def __init__(self, ire=None, ue=None, te=None,): self.ire = ire self.ue = ue self.te = te def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('batch_mutate_result') if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 2) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 3) self.te.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class atomic_batch_mutate_args(object): """ Attributes: - mutation_map - consistency_level """ thrift_spec = ( None, # 0 (1, TType.MAP, 'mutation_map', (TType.STRING,None,TType.MAP,(TType.STRING,None,TType.LIST,(TType.STRUCT,(Mutation, Mutation.thrift_spec)))), None, ), # 1 (2, TType.I32, 'consistency_level', None, 1, ), # 2 ) def __init__(self, mutation_map=None, consistency_level=thrift_spec[2][4],): self.mutation_map = mutation_map self.consistency_level = consistency_level def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.MAP: self.mutation_map = {} (_ktype261, _vtype262, _size260 ) = iprot.readMapBegin() for _i264 in xrange(_size260): _key265 = iprot.readString(); _val266 = {} (_ktype268, _vtype269, _size267 ) = iprot.readMapBegin() for _i271 in xrange(_size267): _key272 = iprot.readString(); _val273 = [] (_etype277, _size274) = iprot.readListBegin() for _i278 in xrange(_size274): _elem279 = Mutation() _elem279.read(iprot) _val273.append(_elem279) iprot.readListEnd() _val266[_key272] = _val273 iprot.readMapEnd() self.mutation_map[_key265] = _val266 iprot.readMapEnd() else: iprot.skip(ftype) elif fid == 2: if ftype == TType.I32: self.consistency_level = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('atomic_batch_mutate_args') if self.mutation_map is not None: oprot.writeFieldBegin('mutation_map', TType.MAP, 1) oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.mutation_map)) for kiter280,viter281 in self.mutation_map.items(): oprot.writeString(kiter280) oprot.writeMapBegin(TType.STRING, TType.LIST, len(viter281)) for kiter282,viter283 in viter281.items(): oprot.writeString(kiter282) oprot.writeListBegin(TType.STRUCT, len(viter283)) for iter284 in viter283: iter284.write(oprot) oprot.writeListEnd() oprot.writeMapEnd() oprot.writeMapEnd() oprot.writeFieldEnd() if self.consistency_level is not None: oprot.writeFieldBegin('consistency_level', TType.I32, 2) oprot.writeI32(self.consistency_level) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.mutation_map is None: raise TProtocol.TProtocolException(message='Required field mutation_map is unset!') if self.consistency_level is None: raise TProtocol.TProtocolException(message='Required field consistency_level is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class atomic_batch_mutate_result(object): """ Attributes: - ire - ue - te """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 3 ) def __init__(self, ire=None, ue=None, te=None,): self.ire = ire self.ue = ue self.te = te def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('atomic_batch_mutate_result') if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 2) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 3) self.te.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class truncate_args(object): """ Attributes: - cfname """ thrift_spec = ( None, # 0 (1, TType.STRING, 'cfname', None, None, ), # 1 ) def __init__(self, cfname=None,): self.cfname = cfname def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.cfname = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('truncate_args') if self.cfname is not None: oprot.writeFieldBegin('cfname', TType.STRING, 1) oprot.writeString(self.cfname) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.cfname is None: raise TProtocol.TProtocolException(message='Required field cfname is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class truncate_result(object): """ Attributes: - ire - ue - te """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 3 ) def __init__(self, ire=None, ue=None, te=None,): self.ire = ire self.ue = ue self.te = te def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('truncate_result') if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 2) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 3) self.te.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_schema_versions_args(object): thrift_spec = ( ) def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_schema_versions_args') oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_schema_versions_result(object): """ Attributes: - success - ire """ thrift_spec = ( (0, TType.MAP, 'success', (TType.STRING,None,TType.LIST,(TType.STRING,None)), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 ) def __init__(self, success=None, ire=None,): self.success = success self.ire = ire def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.MAP: self.success = {} (_ktype286, _vtype287, _size285 ) = iprot.readMapBegin() for _i289 in xrange(_size285): _key290 = iprot.readString(); _val291 = [] (_etype295, _size292) = iprot.readListBegin() for _i296 in xrange(_size292): _elem297 = iprot.readString(); _val291.append(_elem297) iprot.readListEnd() self.success[_key290] = _val291 iprot.readMapEnd() else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_schema_versions_result') if self.success is not None: oprot.writeFieldBegin('success', TType.MAP, 0) oprot.writeMapBegin(TType.STRING, TType.LIST, len(self.success)) for kiter298,viter299 in self.success.items(): oprot.writeString(kiter298) oprot.writeListBegin(TType.STRING, len(viter299)) for iter300 in viter299: oprot.writeString(iter300) oprot.writeListEnd() oprot.writeMapEnd() oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_keyspaces_args(object): thrift_spec = ( ) def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_keyspaces_args') oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_keyspaces_result(object): """ Attributes: - success - ire """ thrift_spec = ( (0, TType.LIST, 'success', (TType.STRUCT,(KsDef, KsDef.thrift_spec)), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 ) def __init__(self, success=None, ire=None,): self.success = success self.ire = ire def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.LIST: self.success = [] (_etype304, _size301) = iprot.readListBegin() for _i305 in xrange(_size301): _elem306 = KsDef() _elem306.read(iprot) self.success.append(_elem306) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_keyspaces_result') if self.success is not None: oprot.writeFieldBegin('success', TType.LIST, 0) oprot.writeListBegin(TType.STRUCT, len(self.success)) for iter307 in self.success: iter307.write(oprot) oprot.writeListEnd() oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_cluster_name_args(object): thrift_spec = ( ) def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_cluster_name_args') oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_cluster_name_result(object): """ Attributes: - success """ thrift_spec = ( (0, TType.STRING, 'success', None, None, ), # 0 ) def __init__(self, success=None,): self.success = success def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRING: self.success = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_cluster_name_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRING, 0) oprot.writeString(self.success) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_version_args(object): thrift_spec = ( ) def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_version_args') oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_version_result(object): """ Attributes: - success """ thrift_spec = ( (0, TType.STRING, 'success', None, None, ), # 0 ) def __init__(self, success=None,): self.success = success def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRING: self.success = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_version_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRING, 0) oprot.writeString(self.success) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_ring_args(object): """ Attributes: - keyspace """ thrift_spec = ( None, # 0 (1, TType.STRING, 'keyspace', None, None, ), # 1 ) def __init__(self, keyspace=None,): self.keyspace = keyspace def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.keyspace = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_ring_args') if self.keyspace is not None: oprot.writeFieldBegin('keyspace', TType.STRING, 1) oprot.writeString(self.keyspace) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.keyspace is None: raise TProtocol.TProtocolException(message='Required field keyspace is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_ring_result(object): """ Attributes: - success - ire """ thrift_spec = ( (0, TType.LIST, 'success', (TType.STRUCT,(TokenRange, TokenRange.thrift_spec)), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 ) def __init__(self, success=None, ire=None,): self.success = success self.ire = ire def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.LIST: self.success = [] (_etype311, _size308) = iprot.readListBegin() for _i312 in xrange(_size308): _elem313 = TokenRange() _elem313.read(iprot) self.success.append(_elem313) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_ring_result') if self.success is not None: oprot.writeFieldBegin('success', TType.LIST, 0) oprot.writeListBegin(TType.STRUCT, len(self.success)) for iter314 in self.success: iter314.write(oprot) oprot.writeListEnd() oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_token_map_args(object): thrift_spec = ( ) def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_token_map_args') oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_token_map_result(object): """ Attributes: - success - ire """ thrift_spec = ( (0, TType.MAP, 'success', (TType.STRING,None,TType.STRING,None), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 ) def __init__(self, success=None, ire=None,): self.success = success self.ire = ire def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.MAP: self.success = {} (_ktype316, _vtype317, _size315 ) = iprot.readMapBegin() for _i319 in xrange(_size315): _key320 = iprot.readString(); _val321 = iprot.readString(); self.success[_key320] = _val321 iprot.readMapEnd() else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_token_map_result') if self.success is not None: oprot.writeFieldBegin('success', TType.MAP, 0) oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.success)) for kiter322,viter323 in self.success.items(): oprot.writeString(kiter322) oprot.writeString(viter323) oprot.writeMapEnd() oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_partitioner_args(object): thrift_spec = ( ) def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_partitioner_args') oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_partitioner_result(object): """ Attributes: - success """ thrift_spec = ( (0, TType.STRING, 'success', None, None, ), # 0 ) def __init__(self, success=None,): self.success = success def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRING: self.success = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_partitioner_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRING, 0) oprot.writeString(self.success) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_snitch_args(object): thrift_spec = ( ) def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_snitch_args') oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_snitch_result(object): """ Attributes: - success """ thrift_spec = ( (0, TType.STRING, 'success', None, None, ), # 0 ) def __init__(self, success=None,): self.success = success def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRING: self.success = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_snitch_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRING, 0) oprot.writeString(self.success) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_keyspace_args(object): """ Attributes: - keyspace """ thrift_spec = ( None, # 0 (1, TType.STRING, 'keyspace', None, None, ), # 1 ) def __init__(self, keyspace=None,): self.keyspace = keyspace def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.keyspace = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_keyspace_args') if self.keyspace is not None: oprot.writeFieldBegin('keyspace', TType.STRING, 1) oprot.writeString(self.keyspace) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.keyspace is None: raise TProtocol.TProtocolException(message='Required field keyspace is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_keyspace_result(object): """ Attributes: - success - nfe - ire """ thrift_spec = ( (0, TType.STRUCT, 'success', (KsDef, KsDef.thrift_spec), None, ), # 0 (1, TType.STRUCT, 'nfe', (NotFoundException, NotFoundException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 2 ) def __init__(self, success=None, nfe=None, ire=None,): self.success = success self.nfe = nfe self.ire = ire def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRUCT: self.success = KsDef() self.success.read(iprot) else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.nfe = NotFoundException() self.nfe.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_keyspace_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRUCT, 0) self.success.write(oprot) oprot.writeFieldEnd() if self.nfe is not None: oprot.writeFieldBegin('nfe', TType.STRUCT, 1) self.nfe.write(oprot) oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 2) self.ire.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_splits_args(object): """ Attributes: - cfName - start_token - end_token - keys_per_split """ thrift_spec = ( None, # 0 (1, TType.STRING, 'cfName', None, None, ), # 1 (2, TType.STRING, 'start_token', None, None, ), # 2 (3, TType.STRING, 'end_token', None, None, ), # 3 (4, TType.I32, 'keys_per_split', None, None, ), # 4 ) def __init__(self, cfName=None, start_token=None, end_token=None, keys_per_split=None,): self.cfName = cfName self.start_token = start_token self.end_token = end_token self.keys_per_split = keys_per_split def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.cfName = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRING: self.start_token = iprot.readString(); else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRING: self.end_token = iprot.readString(); else: iprot.skip(ftype) elif fid == 4: if ftype == TType.I32: self.keys_per_split = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_splits_args') if self.cfName is not None: oprot.writeFieldBegin('cfName', TType.STRING, 1) oprot.writeString(self.cfName) oprot.writeFieldEnd() if self.start_token is not None: oprot.writeFieldBegin('start_token', TType.STRING, 2) oprot.writeString(self.start_token) oprot.writeFieldEnd() if self.end_token is not None: oprot.writeFieldBegin('end_token', TType.STRING, 3) oprot.writeString(self.end_token) oprot.writeFieldEnd() if self.keys_per_split is not None: oprot.writeFieldBegin('keys_per_split', TType.I32, 4) oprot.writeI32(self.keys_per_split) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.cfName is None: raise TProtocol.TProtocolException(message='Required field cfName is unset!') if self.start_token is None: raise TProtocol.TProtocolException(message='Required field start_token is unset!') if self.end_token is None: raise TProtocol.TProtocolException(message='Required field end_token is unset!') if self.keys_per_split is None: raise TProtocol.TProtocolException(message='Required field keys_per_split is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_splits_result(object): """ Attributes: - success - ire """ thrift_spec = ( (0, TType.LIST, 'success', (TType.STRING,None), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 ) def __init__(self, success=None, ire=None,): self.success = success self.ire = ire def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.LIST: self.success = [] (_etype327, _size324) = iprot.readListBegin() for _i328 in xrange(_size324): _elem329 = iprot.readString(); self.success.append(_elem329) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_splits_result') if self.success is not None: oprot.writeFieldBegin('success', TType.LIST, 0) oprot.writeListBegin(TType.STRING, len(self.success)) for iter330 in self.success: oprot.writeString(iter330) oprot.writeListEnd() oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class trace_next_query_args(object): thrift_spec = ( ) def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('trace_next_query_args') oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class trace_next_query_result(object): """ Attributes: - success """ thrift_spec = ( (0, TType.STRING, 'success', None, None, ), # 0 ) def __init__(self, success=None,): self.success = success def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRING: self.success = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('trace_next_query_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRING, 0) oprot.writeString(self.success) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_splits_ex_args(object): """ Attributes: - cfName - start_token - end_token - keys_per_split """ thrift_spec = ( None, # 0 (1, TType.STRING, 'cfName', None, None, ), # 1 (2, TType.STRING, 'start_token', None, None, ), # 2 (3, TType.STRING, 'end_token', None, None, ), # 3 (4, TType.I32, 'keys_per_split', None, None, ), # 4 ) def __init__(self, cfName=None, start_token=None, end_token=None, keys_per_split=None,): self.cfName = cfName self.start_token = start_token self.end_token = end_token self.keys_per_split = keys_per_split def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.cfName = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRING: self.start_token = iprot.readString(); else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRING: self.end_token = iprot.readString(); else: iprot.skip(ftype) elif fid == 4: if ftype == TType.I32: self.keys_per_split = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_splits_ex_args') if self.cfName is not None: oprot.writeFieldBegin('cfName', TType.STRING, 1) oprot.writeString(self.cfName) oprot.writeFieldEnd() if self.start_token is not None: oprot.writeFieldBegin('start_token', TType.STRING, 2) oprot.writeString(self.start_token) oprot.writeFieldEnd() if self.end_token is not None: oprot.writeFieldBegin('end_token', TType.STRING, 3) oprot.writeString(self.end_token) oprot.writeFieldEnd() if self.keys_per_split is not None: oprot.writeFieldBegin('keys_per_split', TType.I32, 4) oprot.writeI32(self.keys_per_split) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.cfName is None: raise TProtocol.TProtocolException(message='Required field cfName is unset!') if self.start_token is None: raise TProtocol.TProtocolException(message='Required field start_token is unset!') if self.end_token is None: raise TProtocol.TProtocolException(message='Required field end_token is unset!') if self.keys_per_split is None: raise TProtocol.TProtocolException(message='Required field keys_per_split is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class describe_splits_ex_result(object): """ Attributes: - success - ire """ thrift_spec = ( (0, TType.LIST, 'success', (TType.STRUCT,(CfSplit, CfSplit.thrift_spec)), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 ) def __init__(self, success=None, ire=None,): self.success = success self.ire = ire def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.LIST: self.success = [] (_etype334, _size331) = iprot.readListBegin() for _i335 in xrange(_size331): _elem336 = CfSplit() _elem336.read(iprot) self.success.append(_elem336) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('describe_splits_ex_result') if self.success is not None: oprot.writeFieldBegin('success', TType.LIST, 0) oprot.writeListBegin(TType.STRUCT, len(self.success)) for iter337 in self.success: iter337.write(oprot) oprot.writeListEnd() oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class system_add_column_family_args(object): """ Attributes: - cf_def """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'cf_def', (CfDef, CfDef.thrift_spec), None, ), # 1 ) def __init__(self, cf_def=None,): self.cf_def = cf_def def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.cf_def = CfDef() self.cf_def.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('system_add_column_family_args') if self.cf_def is not None: oprot.writeFieldBegin('cf_def', TType.STRUCT, 1) self.cf_def.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.cf_def is None: raise TProtocol.TProtocolException(message='Required field cf_def is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class system_add_column_family_result(object): """ Attributes: - success - ire - sde """ thrift_spec = ( (0, TType.STRING, 'success', None, None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'sde', (SchemaDisagreementException, SchemaDisagreementException.thrift_spec), None, ), # 2 ) def __init__(self, success=None, ire=None, sde=None,): self.success = success self.ire = ire self.sde = sde def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRING: self.success = iprot.readString(); else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.sde = SchemaDisagreementException() self.sde.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('system_add_column_family_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRING, 0) oprot.writeString(self.success) oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.sde is not None: oprot.writeFieldBegin('sde', TType.STRUCT, 2) self.sde.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class system_drop_column_family_args(object): """ Attributes: - column_family """ thrift_spec = ( None, # 0 (1, TType.STRING, 'column_family', None, None, ), # 1 ) def __init__(self, column_family=None,): self.column_family = column_family def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.column_family = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('system_drop_column_family_args') if self.column_family is not None: oprot.writeFieldBegin('column_family', TType.STRING, 1) oprot.writeString(self.column_family) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.column_family is None: raise TProtocol.TProtocolException(message='Required field column_family is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class system_drop_column_family_result(object): """ Attributes: - success - ire - sde """ thrift_spec = ( (0, TType.STRING, 'success', None, None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'sde', (SchemaDisagreementException, SchemaDisagreementException.thrift_spec), None, ), # 2 ) def __init__(self, success=None, ire=None, sde=None,): self.success = success self.ire = ire self.sde = sde def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRING: self.success = iprot.readString(); else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.sde = SchemaDisagreementException() self.sde.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('system_drop_column_family_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRING, 0) oprot.writeString(self.success) oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.sde is not None: oprot.writeFieldBegin('sde', TType.STRUCT, 2) self.sde.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class system_add_keyspace_args(object): """ Attributes: - ks_def """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'ks_def', (KsDef, KsDef.thrift_spec), None, ), # 1 ) def __init__(self, ks_def=None,): self.ks_def = ks_def def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.ks_def = KsDef() self.ks_def.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('system_add_keyspace_args') if self.ks_def is not None: oprot.writeFieldBegin('ks_def', TType.STRUCT, 1) self.ks_def.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.ks_def is None: raise TProtocol.TProtocolException(message='Required field ks_def is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class system_add_keyspace_result(object): """ Attributes: - success - ire - sde """ thrift_spec = ( (0, TType.STRING, 'success', None, None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'sde', (SchemaDisagreementException, SchemaDisagreementException.thrift_spec), None, ), # 2 ) def __init__(self, success=None, ire=None, sde=None,): self.success = success self.ire = ire self.sde = sde def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRING: self.success = iprot.readString(); else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.sde = SchemaDisagreementException() self.sde.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('system_add_keyspace_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRING, 0) oprot.writeString(self.success) oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.sde is not None: oprot.writeFieldBegin('sde', TType.STRUCT, 2) self.sde.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class system_drop_keyspace_args(object): """ Attributes: - keyspace """ thrift_spec = ( None, # 0 (1, TType.STRING, 'keyspace', None, None, ), # 1 ) def __init__(self, keyspace=None,): self.keyspace = keyspace def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.keyspace = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('system_drop_keyspace_args') if self.keyspace is not None: oprot.writeFieldBegin('keyspace', TType.STRING, 1) oprot.writeString(self.keyspace) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.keyspace is None: raise TProtocol.TProtocolException(message='Required field keyspace is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class system_drop_keyspace_result(object): """ Attributes: - success - ire - sde """ thrift_spec = ( (0, TType.STRING, 'success', None, None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'sde', (SchemaDisagreementException, SchemaDisagreementException.thrift_spec), None, ), # 2 ) def __init__(self, success=None, ire=None, sde=None,): self.success = success self.ire = ire self.sde = sde def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRING: self.success = iprot.readString(); else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.sde = SchemaDisagreementException() self.sde.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('system_drop_keyspace_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRING, 0) oprot.writeString(self.success) oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.sde is not None: oprot.writeFieldBegin('sde', TType.STRUCT, 2) self.sde.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class system_update_keyspace_args(object): """ Attributes: - ks_def """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'ks_def', (KsDef, KsDef.thrift_spec), None, ), # 1 ) def __init__(self, ks_def=None,): self.ks_def = ks_def def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.ks_def = KsDef() self.ks_def.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('system_update_keyspace_args') if self.ks_def is not None: oprot.writeFieldBegin('ks_def', TType.STRUCT, 1) self.ks_def.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.ks_def is None: raise TProtocol.TProtocolException(message='Required field ks_def is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class system_update_keyspace_result(object): """ Attributes: - success - ire - sde """ thrift_spec = ( (0, TType.STRING, 'success', None, None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'sde', (SchemaDisagreementException, SchemaDisagreementException.thrift_spec), None, ), # 2 ) def __init__(self, success=None, ire=None, sde=None,): self.success = success self.ire = ire self.sde = sde def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRING: self.success = iprot.readString(); else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.sde = SchemaDisagreementException() self.sde.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('system_update_keyspace_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRING, 0) oprot.writeString(self.success) oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.sde is not None: oprot.writeFieldBegin('sde', TType.STRUCT, 2) self.sde.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class system_update_column_family_args(object): """ Attributes: - cf_def """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'cf_def', (CfDef, CfDef.thrift_spec), None, ), # 1 ) def __init__(self, cf_def=None,): self.cf_def = cf_def def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.cf_def = CfDef() self.cf_def.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('system_update_column_family_args') if self.cf_def is not None: oprot.writeFieldBegin('cf_def', TType.STRUCT, 1) self.cf_def.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.cf_def is None: raise TProtocol.TProtocolException(message='Required field cf_def is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class system_update_column_family_result(object): """ Attributes: - success - ire - sde """ thrift_spec = ( (0, TType.STRING, 'success', None, None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'sde', (SchemaDisagreementException, SchemaDisagreementException.thrift_spec), None, ), # 2 ) def __init__(self, success=None, ire=None, sde=None,): self.success = success self.ire = ire self.sde = sde def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRING: self.success = iprot.readString(); else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.sde = SchemaDisagreementException() self.sde.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('system_update_column_family_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRING, 0) oprot.writeString(self.success) oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.sde is not None: oprot.writeFieldBegin('sde', TType.STRUCT, 2) self.sde.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class execute_cql_query_args(object): """ Attributes: - query - compression """ thrift_spec = ( None, # 0 (1, TType.STRING, 'query', None, None, ), # 1 (2, TType.I32, 'compression', None, None, ), # 2 ) def __init__(self, query=None, compression=None,): self.query = query self.compression = compression def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.query = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.I32: self.compression = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('execute_cql_query_args') if self.query is not None: oprot.writeFieldBegin('query', TType.STRING, 1) oprot.writeString(self.query) oprot.writeFieldEnd() if self.compression is not None: oprot.writeFieldBegin('compression', TType.I32, 2) oprot.writeI32(self.compression) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.query is None: raise TProtocol.TProtocolException(message='Required field query is unset!') if self.compression is None: raise TProtocol.TProtocolException(message='Required field compression is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class execute_cql_query_result(object): """ Attributes: - success - ire - ue - te - sde """ thrift_spec = ( (0, TType.STRUCT, 'success', (CqlResult, CqlResult.thrift_spec), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 3 (4, TType.STRUCT, 'sde', (SchemaDisagreementException, SchemaDisagreementException.thrift_spec), None, ), # 4 ) def __init__(self, success=None, ire=None, ue=None, te=None, sde=None,): self.success = success self.ire = ire self.ue = ue self.te = te self.sde = sde def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRUCT: self.success = CqlResult() self.success.read(iprot) else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) elif fid == 4: if ftype == TType.STRUCT: self.sde = SchemaDisagreementException() self.sde.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('execute_cql_query_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRUCT, 0) self.success.write(oprot) oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 2) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 3) self.te.write(oprot) oprot.writeFieldEnd() if self.sde is not None: oprot.writeFieldBegin('sde', TType.STRUCT, 4) self.sde.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class execute_cql3_query_args(object): """ Attributes: - query - compression - consistency """ thrift_spec = ( None, # 0 (1, TType.STRING, 'query', None, None, ), # 1 (2, TType.I32, 'compression', None, None, ), # 2 (3, TType.I32, 'consistency', None, None, ), # 3 ) def __init__(self, query=None, compression=None, consistency=None,): self.query = query self.compression = compression self.consistency = consistency def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.query = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.I32: self.compression = iprot.readI32(); else: iprot.skip(ftype) elif fid == 3: if ftype == TType.I32: self.consistency = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('execute_cql3_query_args') if self.query is not None: oprot.writeFieldBegin('query', TType.STRING, 1) oprot.writeString(self.query) oprot.writeFieldEnd() if self.compression is not None: oprot.writeFieldBegin('compression', TType.I32, 2) oprot.writeI32(self.compression) oprot.writeFieldEnd() if self.consistency is not None: oprot.writeFieldBegin('consistency', TType.I32, 3) oprot.writeI32(self.consistency) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.query is None: raise TProtocol.TProtocolException(message='Required field query is unset!') if self.compression is None: raise TProtocol.TProtocolException(message='Required field compression is unset!') if self.consistency is None: raise TProtocol.TProtocolException(message='Required field consistency is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class execute_cql3_query_result(object): """ Attributes: - success - ire - ue - te - sde """ thrift_spec = ( (0, TType.STRUCT, 'success', (CqlResult, CqlResult.thrift_spec), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 3 (4, TType.STRUCT, 'sde', (SchemaDisagreementException, SchemaDisagreementException.thrift_spec), None, ), # 4 ) def __init__(self, success=None, ire=None, ue=None, te=None, sde=None,): self.success = success self.ire = ire self.ue = ue self.te = te self.sde = sde def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRUCT: self.success = CqlResult() self.success.read(iprot) else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) elif fid == 4: if ftype == TType.STRUCT: self.sde = SchemaDisagreementException() self.sde.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('execute_cql3_query_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRUCT, 0) self.success.write(oprot) oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 2) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 3) self.te.write(oprot) oprot.writeFieldEnd() if self.sde is not None: oprot.writeFieldBegin('sde', TType.STRUCT, 4) self.sde.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class prepare_cql_query_args(object): """ Attributes: - query - compression """ thrift_spec = ( None, # 0 (1, TType.STRING, 'query', None, None, ), # 1 (2, TType.I32, 'compression', None, None, ), # 2 ) def __init__(self, query=None, compression=None,): self.query = query self.compression = compression def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.query = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.I32: self.compression = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('prepare_cql_query_args') if self.query is not None: oprot.writeFieldBegin('query', TType.STRING, 1) oprot.writeString(self.query) oprot.writeFieldEnd() if self.compression is not None: oprot.writeFieldBegin('compression', TType.I32, 2) oprot.writeI32(self.compression) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.query is None: raise TProtocol.TProtocolException(message='Required field query is unset!') if self.compression is None: raise TProtocol.TProtocolException(message='Required field compression is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class prepare_cql_query_result(object): """ Attributes: - success - ire """ thrift_spec = ( (0, TType.STRUCT, 'success', (CqlPreparedResult, CqlPreparedResult.thrift_spec), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 ) def __init__(self, success=None, ire=None,): self.success = success self.ire = ire def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRUCT: self.success = CqlPreparedResult() self.success.read(iprot) else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('prepare_cql_query_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRUCT, 0) self.success.write(oprot) oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class prepare_cql3_query_args(object): """ Attributes: - query - compression """ thrift_spec = ( None, # 0 (1, TType.STRING, 'query', None, None, ), # 1 (2, TType.I32, 'compression', None, None, ), # 2 ) def __init__(self, query=None, compression=None,): self.query = query self.compression = compression def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.query = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.I32: self.compression = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('prepare_cql3_query_args') if self.query is not None: oprot.writeFieldBegin('query', TType.STRING, 1) oprot.writeString(self.query) oprot.writeFieldEnd() if self.compression is not None: oprot.writeFieldBegin('compression', TType.I32, 2) oprot.writeI32(self.compression) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.query is None: raise TProtocol.TProtocolException(message='Required field query is unset!') if self.compression is None: raise TProtocol.TProtocolException(message='Required field compression is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class prepare_cql3_query_result(object): """ Attributes: - success - ire """ thrift_spec = ( (0, TType.STRUCT, 'success', (CqlPreparedResult, CqlPreparedResult.thrift_spec), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 ) def __init__(self, success=None, ire=None,): self.success = success self.ire = ire def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRUCT: self.success = CqlPreparedResult() self.success.read(iprot) else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('prepare_cql3_query_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRUCT, 0) self.success.write(oprot) oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class execute_prepared_cql_query_args(object): """ Attributes: - itemId - values """ thrift_spec = ( None, # 0 (1, TType.I32, 'itemId', None, None, ), # 1 (2, TType.LIST, 'values', (TType.STRING,None), None, ), # 2 ) def __init__(self, itemId=None, values=None,): self.itemId = itemId self.values = values def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.I32: self.itemId = iprot.readI32(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.LIST: self.values = [] (_etype341, _size338) = iprot.readListBegin() for _i342 in xrange(_size338): _elem343 = iprot.readString(); self.values.append(_elem343) iprot.readListEnd() else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('execute_prepared_cql_query_args') if self.itemId is not None: oprot.writeFieldBegin('itemId', TType.I32, 1) oprot.writeI32(self.itemId) oprot.writeFieldEnd() if self.values is not None: oprot.writeFieldBegin('values', TType.LIST, 2) oprot.writeListBegin(TType.STRING, len(self.values)) for iter344 in self.values: oprot.writeString(iter344) oprot.writeListEnd() oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.itemId is None: raise TProtocol.TProtocolException(message='Required field itemId is unset!') if self.values is None: raise TProtocol.TProtocolException(message='Required field values is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class execute_prepared_cql_query_result(object): """ Attributes: - success - ire - ue - te - sde """ thrift_spec = ( (0, TType.STRUCT, 'success', (CqlResult, CqlResult.thrift_spec), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 3 (4, TType.STRUCT, 'sde', (SchemaDisagreementException, SchemaDisagreementException.thrift_spec), None, ), # 4 ) def __init__(self, success=None, ire=None, ue=None, te=None, sde=None,): self.success = success self.ire = ire self.ue = ue self.te = te self.sde = sde def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRUCT: self.success = CqlResult() self.success.read(iprot) else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) elif fid == 4: if ftype == TType.STRUCT: self.sde = SchemaDisagreementException() self.sde.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('execute_prepared_cql_query_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRUCT, 0) self.success.write(oprot) oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 2) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 3) self.te.write(oprot) oprot.writeFieldEnd() if self.sde is not None: oprot.writeFieldBegin('sde', TType.STRUCT, 4) self.sde.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class execute_prepared_cql3_query_args(object): """ Attributes: - itemId - values - consistency """ thrift_spec = ( None, # 0 (1, TType.I32, 'itemId', None, None, ), # 1 (2, TType.LIST, 'values', (TType.STRING,None), None, ), # 2 (3, TType.I32, 'consistency', None, None, ), # 3 ) def __init__(self, itemId=None, values=None, consistency=None,): self.itemId = itemId self.values = values self.consistency = consistency def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.I32: self.itemId = iprot.readI32(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.LIST: self.values = [] (_etype348, _size345) = iprot.readListBegin() for _i349 in xrange(_size345): _elem350 = iprot.readString(); self.values.append(_elem350) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 3: if ftype == TType.I32: self.consistency = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('execute_prepared_cql3_query_args') if self.itemId is not None: oprot.writeFieldBegin('itemId', TType.I32, 1) oprot.writeI32(self.itemId) oprot.writeFieldEnd() if self.values is not None: oprot.writeFieldBegin('values', TType.LIST, 2) oprot.writeListBegin(TType.STRING, len(self.values)) for iter351 in self.values: oprot.writeString(iter351) oprot.writeListEnd() oprot.writeFieldEnd() if self.consistency is not None: oprot.writeFieldBegin('consistency', TType.I32, 3) oprot.writeI32(self.consistency) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.itemId is None: raise TProtocol.TProtocolException(message='Required field itemId is unset!') if self.values is None: raise TProtocol.TProtocolException(message='Required field values is unset!') if self.consistency is None: raise TProtocol.TProtocolException(message='Required field consistency is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class execute_prepared_cql3_query_result(object): """ Attributes: - success - ire - ue - te - sde """ thrift_spec = ( (0, TType.STRUCT, 'success', (CqlResult, CqlResult.thrift_spec), None, ), # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'ue', (UnavailableException, UnavailableException.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'te', (TimedOutException, TimedOutException.thrift_spec), None, ), # 3 (4, TType.STRUCT, 'sde', (SchemaDisagreementException, SchemaDisagreementException.thrift_spec), None, ), # 4 ) def __init__(self, success=None, ire=None, ue=None, te=None, sde=None,): self.success = success self.ire = ire self.ue = ue self.te = te self.sde = sde def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 0: if ftype == TType.STRUCT: self.success = CqlResult() self.success.read(iprot) else: iprot.skip(ftype) elif fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.ue = UnavailableException() self.ue.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.te = TimedOutException() self.te.read(iprot) else: iprot.skip(ftype) elif fid == 4: if ftype == TType.STRUCT: self.sde = SchemaDisagreementException() self.sde.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('execute_prepared_cql3_query_result') if self.success is not None: oprot.writeFieldBegin('success', TType.STRUCT, 0) self.success.write(oprot) oprot.writeFieldEnd() if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() if self.ue is not None: oprot.writeFieldBegin('ue', TType.STRUCT, 2) self.ue.write(oprot) oprot.writeFieldEnd() if self.te is not None: oprot.writeFieldBegin('te', TType.STRUCT, 3) self.te.write(oprot) oprot.writeFieldEnd() if self.sde is not None: oprot.writeFieldBegin('sde', TType.STRUCT, 4) self.sde.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class set_cql_version_args(object): """ Attributes: - version """ thrift_spec = ( None, # 0 (1, TType.STRING, 'version', None, None, ), # 1 ) def __init__(self, version=None,): self.version = version def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.version = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('set_cql_version_args') if self.version is not None: oprot.writeFieldBegin('version', TType.STRING, 1) oprot.writeString(self.version) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.version is None: raise TProtocol.TProtocolException(message='Required field version is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class set_cql_version_result(object): """ Attributes: - ire """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'ire', (InvalidRequestException, InvalidRequestException.thrift_spec), None, ), # 1 ) def __init__(self, ire=None,): self.ire = ire def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.ire = InvalidRequestException() self.ire.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('set_cql_version_result') if self.ire is not None: oprot.writeFieldBegin('ire', TType.STRUCT, 1) self.ire.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) pycassa-1.11.2.1/pycassa/cassandra/LICENSE000066400000000000000000000261361303744607500200260ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. pycassa-1.11.2.1/pycassa/cassandra/__init__.py000066400000000000000000000000571303744607500211240ustar00rootroot00000000000000__all__ = ['ttypes', 'constants', 'Cassandra'] pycassa-1.11.2.1/pycassa/cassandra/constants.py000066400000000000000000000004221303744607500213750ustar00rootroot00000000000000# # Autogenerated by Thrift Compiler (0.9.0) # # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING # # options string: py:new_style # from thrift.Thrift import TType, TMessageType, TException, TApplicationException from ttypes import * VERSION = "19.36.1" pycassa-1.11.2.1/pycassa/cassandra/ttypes.py000066400000000000000000003741431303744607500207270ustar00rootroot00000000000000# # Autogenerated by Thrift Compiler (0.9.0) # # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING # # options string: py:new_style # from thrift.Thrift import TType, TMessageType, TException, TApplicationException from thrift.transport import TTransport from thrift.protocol import TBinaryProtocol, TProtocol try: from thrift.protocol import fastbinary except: fastbinary = None class ConsistencyLevel(object): """ The ConsistencyLevel is an enum that controls both read and write behavior based on the ReplicationFactor of the keyspace. The different consistency levels have different meanings, depending on if you're doing a write or read operation. If W + R > ReplicationFactor, where W is the number of nodes to block for on write, and R the number to block for on reads, you will have strongly consistent behavior; that is, readers will always see the most recent write. Of these, the most interesting is to do QUORUM reads and writes, which gives you consistency while still allowing availability in the face of node failures up to half of . Of course if latency is more important than consistency then you can use lower values for either or both. Some ConsistencyLevels (ONE, TWO, THREE) refer to a specific number of replicas rather than a logical concept that adjusts automatically with the replication factor. Of these, only ONE is commonly used; TWO and (even more rarely) THREE are only useful when you care more about guaranteeing a certain level of durability, than consistency. Write consistency levels make the following guarantees before reporting success to the client: ANY Ensure that the write has been written once somewhere, including possibly being hinted in a non-target node. ONE Ensure that the write has been written to at least 1 node's commit log and memory table TWO Ensure that the write has been written to at least 2 node's commit log and memory table THREE Ensure that the write has been written to at least 3 node's commit log and memory table QUORUM Ensure that the write has been written to / 2 + 1 nodes LOCAL_ONE Ensure that the write has been written to 1 node within the local datacenter (requires NetworkTopologyStrategy) LOCAL_QUORUM Ensure that the write has been written to / 2 + 1 nodes, within the local datacenter (requires NetworkTopologyStrategy) EACH_QUORUM Ensure that the write has been written to / 2 + 1 nodes in each datacenter (requires NetworkTopologyStrategy) ALL Ensure that the write is written to <ReplicationFactor> nodes before responding to the client. Read consistency levels make the following guarantees before returning successful results to the client: ANY Not supported. You probably want ONE instead. ONE Returns the record obtained from a single replica. TWO Returns the record with the most recent timestamp once two replicas have replied. THREE Returns the record with the most recent timestamp once three replicas have replied. QUORUM Returns the record with the most recent timestamp once a majority of replicas have replied. LOCAL_ONE Returns the record with the most recent timestamp once a single replica within the local datacenter have replied. LOCAL_QUORUM Returns the record with the most recent timestamp once a majority of replicas within the local datacenter have replied. EACH_QUORUM Returns the record with the most recent timestamp once a majority of replicas within each datacenter have replied. ALL Returns the record with the most recent timestamp once all replicas have replied (implies no replica may be down).. """ ONE = 1 QUORUM = 2 LOCAL_QUORUM = 3 EACH_QUORUM = 4 ALL = 5 ANY = 6 TWO = 7 THREE = 8 LOCAL_ONE = 11 _VALUES_TO_NAMES = { 1: "ONE", 2: "QUORUM", 3: "LOCAL_QUORUM", 4: "EACH_QUORUM", 5: "ALL", 6: "ANY", 7: "TWO", 8: "THREE", 11: "LOCAL_ONE", } _NAMES_TO_VALUES = { "ONE": 1, "QUORUM": 2, "LOCAL_QUORUM": 3, "EACH_QUORUM": 4, "ALL": 5, "ANY": 6, "TWO": 7, "THREE": 8, "LOCAL_ONE": 11, } class IndexOperator(object): EQ = 0 GTE = 1 GT = 2 LTE = 3 LT = 4 _VALUES_TO_NAMES = { 0: "EQ", 1: "GTE", 2: "GT", 3: "LTE", 4: "LT", } _NAMES_TO_VALUES = { "EQ": 0, "GTE": 1, "GT": 2, "LTE": 3, "LT": 4, } class IndexType(object): KEYS = 0 CUSTOM = 1 COMPOSITES = 2 _VALUES_TO_NAMES = { 0: "KEYS", 1: "CUSTOM", 2: "COMPOSITES", } _NAMES_TO_VALUES = { "KEYS": 0, "CUSTOM": 1, "COMPOSITES": 2, } class Compression(object): """ CQL query compression """ GZIP = 1 NONE = 2 _VALUES_TO_NAMES = { 1: "GZIP", 2: "NONE", } _NAMES_TO_VALUES = { "GZIP": 1, "NONE": 2, } class CqlResultType(object): ROWS = 1 VOID = 2 INT = 3 _VALUES_TO_NAMES = { 1: "ROWS", 2: "VOID", 3: "INT", } _NAMES_TO_VALUES = { "ROWS": 1, "VOID": 2, "INT": 3, } class Column(object): """ Basic unit of data within a ColumnFamily. @param name, the name by which this column is set and retrieved. Maximum 64KB long. @param value. The data associated with the name. Maximum 2GB long, but in practice you should limit it to small numbers of MB (since Thrift must read the full value into memory to operate on it). @param timestamp. The timestamp is used for conflict detection/resolution when two columns with same name need to be compared. @param ttl. An optional, positive delay (in seconds) after which the column will be automatically deleted. Attributes: - name - value - timestamp - ttl """ thrift_spec = ( None, # 0 (1, TType.STRING, 'name', None, None, ), # 1 (2, TType.STRING, 'value', None, None, ), # 2 (3, TType.I64, 'timestamp', None, None, ), # 3 (4, TType.I32, 'ttl', None, None, ), # 4 ) def __init__(self, name=None, value=None, timestamp=None, ttl=None,): self.name = name self.value = value self.timestamp = timestamp self.ttl = ttl def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.name = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRING: self.value = iprot.readString(); else: iprot.skip(ftype) elif fid == 3: if ftype == TType.I64: self.timestamp = iprot.readI64(); else: iprot.skip(ftype) elif fid == 4: if ftype == TType.I32: self.ttl = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('Column') if self.name is not None: oprot.writeFieldBegin('name', TType.STRING, 1) oprot.writeString(self.name) oprot.writeFieldEnd() if self.value is not None: oprot.writeFieldBegin('value', TType.STRING, 2) oprot.writeString(self.value) oprot.writeFieldEnd() if self.timestamp is not None: oprot.writeFieldBegin('timestamp', TType.I64, 3) oprot.writeI64(self.timestamp) oprot.writeFieldEnd() if self.ttl is not None: oprot.writeFieldBegin('ttl', TType.I32, 4) oprot.writeI32(self.ttl) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.name is None: raise TProtocol.TProtocolException(message='Required field name is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class SuperColumn(object): """ A named list of columns. @param name. see Column.name. @param columns. A collection of standard Columns. The columns within a super column are defined in an adhoc manner. Columns within a super column do not have to have matching structures (similarly named child columns). Attributes: - name - columns """ thrift_spec = ( None, # 0 (1, TType.STRING, 'name', None, None, ), # 1 (2, TType.LIST, 'columns', (TType.STRUCT,(Column, Column.thrift_spec)), None, ), # 2 ) def __init__(self, name=None, columns=None,): self.name = name self.columns = columns def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.name = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.LIST: self.columns = [] (_etype3, _size0) = iprot.readListBegin() for _i4 in xrange(_size0): _elem5 = Column() _elem5.read(iprot) self.columns.append(_elem5) iprot.readListEnd() else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('SuperColumn') if self.name is not None: oprot.writeFieldBegin('name', TType.STRING, 1) oprot.writeString(self.name) oprot.writeFieldEnd() if self.columns is not None: oprot.writeFieldBegin('columns', TType.LIST, 2) oprot.writeListBegin(TType.STRUCT, len(self.columns)) for iter6 in self.columns: iter6.write(oprot) oprot.writeListEnd() oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.name is None: raise TProtocol.TProtocolException(message='Required field name is unset!') if self.columns is None: raise TProtocol.TProtocolException(message='Required field columns is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class CounterColumn(object): """ Attributes: - name - value """ thrift_spec = ( None, # 0 (1, TType.STRING, 'name', None, None, ), # 1 (2, TType.I64, 'value', None, None, ), # 2 ) def __init__(self, name=None, value=None,): self.name = name self.value = value def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.name = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.I64: self.value = iprot.readI64(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('CounterColumn') if self.name is not None: oprot.writeFieldBegin('name', TType.STRING, 1) oprot.writeString(self.name) oprot.writeFieldEnd() if self.value is not None: oprot.writeFieldBegin('value', TType.I64, 2) oprot.writeI64(self.value) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.name is None: raise TProtocol.TProtocolException(message='Required field name is unset!') if self.value is None: raise TProtocol.TProtocolException(message='Required field value is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class CounterSuperColumn(object): """ Attributes: - name - columns """ thrift_spec = ( None, # 0 (1, TType.STRING, 'name', None, None, ), # 1 (2, TType.LIST, 'columns', (TType.STRUCT,(CounterColumn, CounterColumn.thrift_spec)), None, ), # 2 ) def __init__(self, name=None, columns=None,): self.name = name self.columns = columns def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.name = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.LIST: self.columns = [] (_etype10, _size7) = iprot.readListBegin() for _i11 in xrange(_size7): _elem12 = CounterColumn() _elem12.read(iprot) self.columns.append(_elem12) iprot.readListEnd() else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('CounterSuperColumn') if self.name is not None: oprot.writeFieldBegin('name', TType.STRING, 1) oprot.writeString(self.name) oprot.writeFieldEnd() if self.columns is not None: oprot.writeFieldBegin('columns', TType.LIST, 2) oprot.writeListBegin(TType.STRUCT, len(self.columns)) for iter13 in self.columns: iter13.write(oprot) oprot.writeListEnd() oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.name is None: raise TProtocol.TProtocolException(message='Required field name is unset!') if self.columns is None: raise TProtocol.TProtocolException(message='Required field columns is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class ColumnOrSuperColumn(object): """ Methods for fetching rows/records from Cassandra will return either a single instance of ColumnOrSuperColumn or a list of ColumnOrSuperColumns (get_slice()). If you're looking up a SuperColumn (or list of SuperColumns) then the resulting instances of ColumnOrSuperColumn will have the requested SuperColumn in the attribute super_column. For queries resulting in Columns, those values will be in the attribute column. This change was made between 0.3 and 0.4 to standardize on single query methods that may return either a SuperColumn or Column. If the query was on a counter column family, you will either get a counter_column (instead of a column) or a counter_super_column (instead of a super_column) @param column. The Column returned by get() or get_slice(). @param super_column. The SuperColumn returned by get() or get_slice(). @param counter_column. The Counterolumn returned by get() or get_slice(). @param counter_super_column. The CounterSuperColumn returned by get() or get_slice(). Attributes: - column - super_column - counter_column - counter_super_column """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'column', (Column, Column.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'super_column', (SuperColumn, SuperColumn.thrift_spec), None, ), # 2 (3, TType.STRUCT, 'counter_column', (CounterColumn, CounterColumn.thrift_spec), None, ), # 3 (4, TType.STRUCT, 'counter_super_column', (CounterSuperColumn, CounterSuperColumn.thrift_spec), None, ), # 4 ) def __init__(self, column=None, super_column=None, counter_column=None, counter_super_column=None,): self.column = column self.super_column = super_column self.counter_column = counter_column self.counter_super_column = counter_super_column def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.column = Column() self.column.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.super_column = SuperColumn() self.super_column.read(iprot) else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.counter_column = CounterColumn() self.counter_column.read(iprot) else: iprot.skip(ftype) elif fid == 4: if ftype == TType.STRUCT: self.counter_super_column = CounterSuperColumn() self.counter_super_column.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('ColumnOrSuperColumn') if self.column is not None: oprot.writeFieldBegin('column', TType.STRUCT, 1) self.column.write(oprot) oprot.writeFieldEnd() if self.super_column is not None: oprot.writeFieldBegin('super_column', TType.STRUCT, 2) self.super_column.write(oprot) oprot.writeFieldEnd() if self.counter_column is not None: oprot.writeFieldBegin('counter_column', TType.STRUCT, 3) self.counter_column.write(oprot) oprot.writeFieldEnd() if self.counter_super_column is not None: oprot.writeFieldBegin('counter_super_column', TType.STRUCT, 4) self.counter_super_column.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class NotFoundException(TException): """ A specific column was requested that does not exist. """ thrift_spec = ( ) def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('NotFoundException') oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __str__(self): return repr(self) def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class InvalidRequestException(TException): """ Invalid request could mean keyspace or column family does not exist, required parameters are missing, or a parameter is malformed. why contains an associated error message. Attributes: - why """ thrift_spec = ( None, # 0 (1, TType.STRING, 'why', None, None, ), # 1 ) def __init__(self, why=None,): self.why = why def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.why = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('InvalidRequestException') if self.why is not None: oprot.writeFieldBegin('why', TType.STRING, 1) oprot.writeString(self.why) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.why is None: raise TProtocol.TProtocolException(message='Required field why is unset!') return def __str__(self): return repr(self) def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class UnavailableException(TException): """ Not all the replicas required could be created and/or read. """ thrift_spec = ( ) def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('UnavailableException') oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __str__(self): return repr(self) def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class TimedOutException(TException): """ RPC timeout was exceeded. either a node failed mid-operation, or load was too high, or the requested op was too large. Attributes: - acknowledged_by: if a write operation was acknowledged by some replicas but not by enough to satisfy the required ConsistencyLevel, the number of successful replies will be given here. In case of atomic_batch_mutate method this field will be set to -1 if the batch was written to the batchlog and to 0 if it wasn't. - acknowledged_by_batchlog: in case of atomic_batch_mutate method this field tells if the batch was written to the batchlog. """ thrift_spec = ( None, # 0 (1, TType.I32, 'acknowledged_by', None, None, ), # 1 (2, TType.BOOL, 'acknowledged_by_batchlog', None, None, ), # 2 ) def __init__(self, acknowledged_by=None, acknowledged_by_batchlog=None,): self.acknowledged_by = acknowledged_by self.acknowledged_by_batchlog = acknowledged_by_batchlog def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.I32: self.acknowledged_by = iprot.readI32(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.BOOL: self.acknowledged_by_batchlog = iprot.readBool(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('TimedOutException') if self.acknowledged_by is not None: oprot.writeFieldBegin('acknowledged_by', TType.I32, 1) oprot.writeI32(self.acknowledged_by) oprot.writeFieldEnd() if self.acknowledged_by_batchlog is not None: oprot.writeFieldBegin('acknowledged_by_batchlog', TType.BOOL, 2) oprot.writeBool(self.acknowledged_by_batchlog) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __str__(self): return repr(self) def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class AuthenticationException(TException): """ invalid authentication request (invalid keyspace, user does not exist, or credentials invalid) Attributes: - why """ thrift_spec = ( None, # 0 (1, TType.STRING, 'why', None, None, ), # 1 ) def __init__(self, why=None,): self.why = why def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.why = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('AuthenticationException') if self.why is not None: oprot.writeFieldBegin('why', TType.STRING, 1) oprot.writeString(self.why) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.why is None: raise TProtocol.TProtocolException(message='Required field why is unset!') return def __str__(self): return repr(self) def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class AuthorizationException(TException): """ invalid authorization request (user does not have access to keyspace) Attributes: - why """ thrift_spec = ( None, # 0 (1, TType.STRING, 'why', None, None, ), # 1 ) def __init__(self, why=None,): self.why = why def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.why = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('AuthorizationException') if self.why is not None: oprot.writeFieldBegin('why', TType.STRING, 1) oprot.writeString(self.why) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.why is None: raise TProtocol.TProtocolException(message='Required field why is unset!') return def __str__(self): return repr(self) def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class SchemaDisagreementException(TException): """ NOTE: This up outdated exception left for backward compatibility reasons, no actual schema agreement validation is done starting from Cassandra 1.2 schemas are not in agreement across all nodes """ thrift_spec = ( ) def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('SchemaDisagreementException') oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __str__(self): return repr(self) def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class ColumnParent(object): """ ColumnParent is used when selecting groups of columns from the same ColumnFamily. In directory structure terms, imagine ColumnParent as ColumnPath + '/../'. See also ColumnPath Attributes: - column_family - super_column """ thrift_spec = ( None, # 0 None, # 1 None, # 2 (3, TType.STRING, 'column_family', None, None, ), # 3 (4, TType.STRING, 'super_column', None, None, ), # 4 ) def __init__(self, column_family=None, super_column=None,): self.column_family = column_family self.super_column = super_column def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 3: if ftype == TType.STRING: self.column_family = iprot.readString(); else: iprot.skip(ftype) elif fid == 4: if ftype == TType.STRING: self.super_column = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('ColumnParent') if self.column_family is not None: oprot.writeFieldBegin('column_family', TType.STRING, 3) oprot.writeString(self.column_family) oprot.writeFieldEnd() if self.super_column is not None: oprot.writeFieldBegin('super_column', TType.STRING, 4) oprot.writeString(self.super_column) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.column_family is None: raise TProtocol.TProtocolException(message='Required field column_family is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class ColumnPath(object): """ The ColumnPath is the path to a single column in Cassandra. It might make sense to think of ColumnPath and ColumnParent in terms of a directory structure. ColumnPath is used to looking up a single column. @param column_family. The name of the CF of the column being looked up. @param super_column. The super column name. @param column. The column name. Attributes: - column_family - super_column - column """ thrift_spec = ( None, # 0 None, # 1 None, # 2 (3, TType.STRING, 'column_family', None, None, ), # 3 (4, TType.STRING, 'super_column', None, None, ), # 4 (5, TType.STRING, 'column', None, None, ), # 5 ) def __init__(self, column_family=None, super_column=None, column=None,): self.column_family = column_family self.super_column = super_column self.column = column def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 3: if ftype == TType.STRING: self.column_family = iprot.readString(); else: iprot.skip(ftype) elif fid == 4: if ftype == TType.STRING: self.super_column = iprot.readString(); else: iprot.skip(ftype) elif fid == 5: if ftype == TType.STRING: self.column = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('ColumnPath') if self.column_family is not None: oprot.writeFieldBegin('column_family', TType.STRING, 3) oprot.writeString(self.column_family) oprot.writeFieldEnd() if self.super_column is not None: oprot.writeFieldBegin('super_column', TType.STRING, 4) oprot.writeString(self.super_column) oprot.writeFieldEnd() if self.column is not None: oprot.writeFieldBegin('column', TType.STRING, 5) oprot.writeString(self.column) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.column_family is None: raise TProtocol.TProtocolException(message='Required field column_family is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class SliceRange(object): """ A slice range is a structure that stores basic range, ordering and limit information for a query that will return multiple columns. It could be thought of as Cassandra's version of LIMIT and ORDER BY @param start. The column name to start the slice with. This attribute is not required, though there is no default value, and can be safely set to '', i.e., an empty byte array, to start with the first column name. Otherwise, it must a valid value under the rules of the Comparator defined for the given ColumnFamily. @param finish. The column name to stop the slice at. This attribute is not required, though there is no default value, and can be safely set to an empty byte array to not stop until 'count' results are seen. Otherwise, it must also be a valid value to the ColumnFamily Comparator. @param reversed. Whether the results should be ordered in reversed order. Similar to ORDER BY blah DESC in SQL. @param count. How many columns to return. Similar to LIMIT in SQL. May be arbitrarily large, but Thrift will materialize the whole result into memory before returning it to the client, so be aware that you may be better served by iterating through slices by passing the last value of one call in as the 'start' of the next instead of increasing 'count' arbitrarily large. Attributes: - start - finish - reversed - count """ thrift_spec = ( None, # 0 (1, TType.STRING, 'start', None, None, ), # 1 (2, TType.STRING, 'finish', None, None, ), # 2 (3, TType.BOOL, 'reversed', None, False, ), # 3 (4, TType.I32, 'count', None, 100, ), # 4 ) def __init__(self, start=None, finish=None, reversed=thrift_spec[3][4], count=thrift_spec[4][4],): self.start = start self.finish = finish self.reversed = reversed self.count = count def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.start = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRING: self.finish = iprot.readString(); else: iprot.skip(ftype) elif fid == 3: if ftype == TType.BOOL: self.reversed = iprot.readBool(); else: iprot.skip(ftype) elif fid == 4: if ftype == TType.I32: self.count = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('SliceRange') if self.start is not None: oprot.writeFieldBegin('start', TType.STRING, 1) oprot.writeString(self.start) oprot.writeFieldEnd() if self.finish is not None: oprot.writeFieldBegin('finish', TType.STRING, 2) oprot.writeString(self.finish) oprot.writeFieldEnd() if self.reversed is not None: oprot.writeFieldBegin('reversed', TType.BOOL, 3) oprot.writeBool(self.reversed) oprot.writeFieldEnd() if self.count is not None: oprot.writeFieldBegin('count', TType.I32, 4) oprot.writeI32(self.count) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.start is None: raise TProtocol.TProtocolException(message='Required field start is unset!') if self.finish is None: raise TProtocol.TProtocolException(message='Required field finish is unset!') if self.reversed is None: raise TProtocol.TProtocolException(message='Required field reversed is unset!') if self.count is None: raise TProtocol.TProtocolException(message='Required field count is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class SlicePredicate(object): """ A SlicePredicate is similar to a mathematic predicate (see http://en.wikipedia.org/wiki/Predicate_(mathematical_logic)), which is described as "a property that the elements of a set have in common." SlicePredicate's in Cassandra are described with either a list of column_names or a SliceRange. If column_names is specified, slice_range is ignored. @param column_name. A list of column names to retrieve. This can be used similar to Memcached's "multi-get" feature to fetch N known column names. For instance, if you know you wish to fetch columns 'Joe', 'Jack', and 'Jim' you can pass those column names as a list to fetch all three at once. @param slice_range. A SliceRange describing how to range, order, and/or limit the slice. Attributes: - column_names - slice_range """ thrift_spec = ( None, # 0 (1, TType.LIST, 'column_names', (TType.STRING,None), None, ), # 1 (2, TType.STRUCT, 'slice_range', (SliceRange, SliceRange.thrift_spec), None, ), # 2 ) def __init__(self, column_names=None, slice_range=None,): self.column_names = column_names self.slice_range = slice_range def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.LIST: self.column_names = [] (_etype17, _size14) = iprot.readListBegin() for _i18 in xrange(_size14): _elem19 = iprot.readString(); self.column_names.append(_elem19) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.slice_range = SliceRange() self.slice_range.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('SlicePredicate') if self.column_names is not None: oprot.writeFieldBegin('column_names', TType.LIST, 1) oprot.writeListBegin(TType.STRING, len(self.column_names)) for iter20 in self.column_names: oprot.writeString(iter20) oprot.writeListEnd() oprot.writeFieldEnd() if self.slice_range is not None: oprot.writeFieldBegin('slice_range', TType.STRUCT, 2) self.slice_range.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class IndexExpression(object): """ Attributes: - column_name - op - value """ thrift_spec = ( None, # 0 (1, TType.STRING, 'column_name', None, None, ), # 1 (2, TType.I32, 'op', None, None, ), # 2 (3, TType.STRING, 'value', None, None, ), # 3 ) def __init__(self, column_name=None, op=None, value=None,): self.column_name = column_name self.op = op self.value = value def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.column_name = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.I32: self.op = iprot.readI32(); else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRING: self.value = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('IndexExpression') if self.column_name is not None: oprot.writeFieldBegin('column_name', TType.STRING, 1) oprot.writeString(self.column_name) oprot.writeFieldEnd() if self.op is not None: oprot.writeFieldBegin('op', TType.I32, 2) oprot.writeI32(self.op) oprot.writeFieldEnd() if self.value is not None: oprot.writeFieldBegin('value', TType.STRING, 3) oprot.writeString(self.value) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.column_name is None: raise TProtocol.TProtocolException(message='Required field column_name is unset!') if self.op is None: raise TProtocol.TProtocolException(message='Required field op is unset!') if self.value is None: raise TProtocol.TProtocolException(message='Required field value is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class IndexClause(object): """ @deprecated use a KeyRange with row_filter in get_range_slices instead Attributes: - expressions - start_key - count """ thrift_spec = ( None, # 0 (1, TType.LIST, 'expressions', (TType.STRUCT,(IndexExpression, IndexExpression.thrift_spec)), None, ), # 1 (2, TType.STRING, 'start_key', None, None, ), # 2 (3, TType.I32, 'count', None, 100, ), # 3 ) def __init__(self, expressions=None, start_key=None, count=thrift_spec[3][4],): self.expressions = expressions self.start_key = start_key self.count = count def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.LIST: self.expressions = [] (_etype24, _size21) = iprot.readListBegin() for _i25 in xrange(_size21): _elem26 = IndexExpression() _elem26.read(iprot) self.expressions.append(_elem26) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRING: self.start_key = iprot.readString(); else: iprot.skip(ftype) elif fid == 3: if ftype == TType.I32: self.count = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('IndexClause') if self.expressions is not None: oprot.writeFieldBegin('expressions', TType.LIST, 1) oprot.writeListBegin(TType.STRUCT, len(self.expressions)) for iter27 in self.expressions: iter27.write(oprot) oprot.writeListEnd() oprot.writeFieldEnd() if self.start_key is not None: oprot.writeFieldBegin('start_key', TType.STRING, 2) oprot.writeString(self.start_key) oprot.writeFieldEnd() if self.count is not None: oprot.writeFieldBegin('count', TType.I32, 3) oprot.writeI32(self.count) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.expressions is None: raise TProtocol.TProtocolException(message='Required field expressions is unset!') if self.start_key is None: raise TProtocol.TProtocolException(message='Required field start_key is unset!') if self.count is None: raise TProtocol.TProtocolException(message='Required field count is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class KeyRange(object): """ The semantics of start keys and tokens are slightly different. Keys are start-inclusive; tokens are start-exclusive. Token ranges may also wrap -- that is, the end token may be less than the start one. Thus, a range from keyX to keyX is a one-element range, but a range from tokenY to tokenY is the full ring. Attributes: - start_key - end_key - start_token - end_token - row_filter - count """ thrift_spec = ( None, # 0 (1, TType.STRING, 'start_key', None, None, ), # 1 (2, TType.STRING, 'end_key', None, None, ), # 2 (3, TType.STRING, 'start_token', None, None, ), # 3 (4, TType.STRING, 'end_token', None, None, ), # 4 (5, TType.I32, 'count', None, 100, ), # 5 (6, TType.LIST, 'row_filter', (TType.STRUCT,(IndexExpression, IndexExpression.thrift_spec)), None, ), # 6 ) def __init__(self, start_key=None, end_key=None, start_token=None, end_token=None, row_filter=None, count=thrift_spec[5][4],): self.start_key = start_key self.end_key = end_key self.start_token = start_token self.end_token = end_token self.row_filter = row_filter self.count = count def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.start_key = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRING: self.end_key = iprot.readString(); else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRING: self.start_token = iprot.readString(); else: iprot.skip(ftype) elif fid == 4: if ftype == TType.STRING: self.end_token = iprot.readString(); else: iprot.skip(ftype) elif fid == 6: if ftype == TType.LIST: self.row_filter = [] (_etype31, _size28) = iprot.readListBegin() for _i32 in xrange(_size28): _elem33 = IndexExpression() _elem33.read(iprot) self.row_filter.append(_elem33) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 5: if ftype == TType.I32: self.count = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('KeyRange') if self.start_key is not None: oprot.writeFieldBegin('start_key', TType.STRING, 1) oprot.writeString(self.start_key) oprot.writeFieldEnd() if self.end_key is not None: oprot.writeFieldBegin('end_key', TType.STRING, 2) oprot.writeString(self.end_key) oprot.writeFieldEnd() if self.start_token is not None: oprot.writeFieldBegin('start_token', TType.STRING, 3) oprot.writeString(self.start_token) oprot.writeFieldEnd() if self.end_token is not None: oprot.writeFieldBegin('end_token', TType.STRING, 4) oprot.writeString(self.end_token) oprot.writeFieldEnd() if self.count is not None: oprot.writeFieldBegin('count', TType.I32, 5) oprot.writeI32(self.count) oprot.writeFieldEnd() if self.row_filter is not None: oprot.writeFieldBegin('row_filter', TType.LIST, 6) oprot.writeListBegin(TType.STRUCT, len(self.row_filter)) for iter34 in self.row_filter: iter34.write(oprot) oprot.writeListEnd() oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.count is None: raise TProtocol.TProtocolException(message='Required field count is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class KeySlice(object): """ A KeySlice is key followed by the data it maps to. A collection of KeySlice is returned by the get_range_slice operation. @param key. a row key @param columns. List of data represented by the key. Typically, the list is pared down to only the columns specified by a SlicePredicate. Attributes: - key - columns """ thrift_spec = ( None, # 0 (1, TType.STRING, 'key', None, None, ), # 1 (2, TType.LIST, 'columns', (TType.STRUCT,(ColumnOrSuperColumn, ColumnOrSuperColumn.thrift_spec)), None, ), # 2 ) def __init__(self, key=None, columns=None,): self.key = key self.columns = columns def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.key = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.LIST: self.columns = [] (_etype38, _size35) = iprot.readListBegin() for _i39 in xrange(_size35): _elem40 = ColumnOrSuperColumn() _elem40.read(iprot) self.columns.append(_elem40) iprot.readListEnd() else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('KeySlice') if self.key is not None: oprot.writeFieldBegin('key', TType.STRING, 1) oprot.writeString(self.key) oprot.writeFieldEnd() if self.columns is not None: oprot.writeFieldBegin('columns', TType.LIST, 2) oprot.writeListBegin(TType.STRUCT, len(self.columns)) for iter41 in self.columns: iter41.write(oprot) oprot.writeListEnd() oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.key is None: raise TProtocol.TProtocolException(message='Required field key is unset!') if self.columns is None: raise TProtocol.TProtocolException(message='Required field columns is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class KeyCount(object): """ Attributes: - key - count """ thrift_spec = ( None, # 0 (1, TType.STRING, 'key', None, None, ), # 1 (2, TType.I32, 'count', None, None, ), # 2 ) def __init__(self, key=None, count=None,): self.key = key self.count = count def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.key = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.I32: self.count = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('KeyCount') if self.key is not None: oprot.writeFieldBegin('key', TType.STRING, 1) oprot.writeString(self.key) oprot.writeFieldEnd() if self.count is not None: oprot.writeFieldBegin('count', TType.I32, 2) oprot.writeI32(self.count) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.key is None: raise TProtocol.TProtocolException(message='Required field key is unset!') if self.count is None: raise TProtocol.TProtocolException(message='Required field count is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class Deletion(object): """ Note that the timestamp is only optional in case of counter deletion. Attributes: - timestamp - super_column - predicate """ thrift_spec = ( None, # 0 (1, TType.I64, 'timestamp', None, None, ), # 1 (2, TType.STRING, 'super_column', None, None, ), # 2 (3, TType.STRUCT, 'predicate', (SlicePredicate, SlicePredicate.thrift_spec), None, ), # 3 ) def __init__(self, timestamp=None, super_column=None, predicate=None,): self.timestamp = timestamp self.super_column = super_column self.predicate = predicate def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.I64: self.timestamp = iprot.readI64(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRING: self.super_column = iprot.readString(); else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRUCT: self.predicate = SlicePredicate() self.predicate.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('Deletion') if self.timestamp is not None: oprot.writeFieldBegin('timestamp', TType.I64, 1) oprot.writeI64(self.timestamp) oprot.writeFieldEnd() if self.super_column is not None: oprot.writeFieldBegin('super_column', TType.STRING, 2) oprot.writeString(self.super_column) oprot.writeFieldEnd() if self.predicate is not None: oprot.writeFieldBegin('predicate', TType.STRUCT, 3) self.predicate.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class Mutation(object): """ A Mutation is either an insert (represented by filling column_or_supercolumn) or a deletion (represented by filling the deletion attribute). @param column_or_supercolumn. An insert to a column or supercolumn (possibly counter column or supercolumn) @param deletion. A deletion of a column or supercolumn Attributes: - column_or_supercolumn - deletion """ thrift_spec = ( None, # 0 (1, TType.STRUCT, 'column_or_supercolumn', (ColumnOrSuperColumn, ColumnOrSuperColumn.thrift_spec), None, ), # 1 (2, TType.STRUCT, 'deletion', (Deletion, Deletion.thrift_spec), None, ), # 2 ) def __init__(self, column_or_supercolumn=None, deletion=None,): self.column_or_supercolumn = column_or_supercolumn self.deletion = deletion def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRUCT: self.column_or_supercolumn = ColumnOrSuperColumn() self.column_or_supercolumn.read(iprot) else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRUCT: self.deletion = Deletion() self.deletion.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('Mutation') if self.column_or_supercolumn is not None: oprot.writeFieldBegin('column_or_supercolumn', TType.STRUCT, 1) self.column_or_supercolumn.write(oprot) oprot.writeFieldEnd() if self.deletion is not None: oprot.writeFieldBegin('deletion', TType.STRUCT, 2) self.deletion.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class EndpointDetails(object): """ Attributes: - host - datacenter - rack """ thrift_spec = ( None, # 0 (1, TType.STRING, 'host', None, None, ), # 1 (2, TType.STRING, 'datacenter', None, None, ), # 2 (3, TType.STRING, 'rack', None, None, ), # 3 ) def __init__(self, host=None, datacenter=None, rack=None,): self.host = host self.datacenter = datacenter self.rack = rack def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.host = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRING: self.datacenter = iprot.readString(); else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRING: self.rack = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('EndpointDetails') if self.host is not None: oprot.writeFieldBegin('host', TType.STRING, 1) oprot.writeString(self.host) oprot.writeFieldEnd() if self.datacenter is not None: oprot.writeFieldBegin('datacenter', TType.STRING, 2) oprot.writeString(self.datacenter) oprot.writeFieldEnd() if self.rack is not None: oprot.writeFieldBegin('rack', TType.STRING, 3) oprot.writeString(self.rack) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class TokenRange(object): """ A TokenRange describes part of the Cassandra ring, it is a mapping from a range to endpoints responsible for that range. @param start_token The first token in the range @param end_token The last token in the range @param endpoints The endpoints responsible for the range (listed by their configured listen_address) @param rpc_endpoints The endpoints responsible for the range (listed by their configured rpc_address) Attributes: - start_token - end_token - endpoints - rpc_endpoints - endpoint_details """ thrift_spec = ( None, # 0 (1, TType.STRING, 'start_token', None, None, ), # 1 (2, TType.STRING, 'end_token', None, None, ), # 2 (3, TType.LIST, 'endpoints', (TType.STRING,None), None, ), # 3 (4, TType.LIST, 'rpc_endpoints', (TType.STRING,None), None, ), # 4 (5, TType.LIST, 'endpoint_details', (TType.STRUCT,(EndpointDetails, EndpointDetails.thrift_spec)), None, ), # 5 ) def __init__(self, start_token=None, end_token=None, endpoints=None, rpc_endpoints=None, endpoint_details=None,): self.start_token = start_token self.end_token = end_token self.endpoints = endpoints self.rpc_endpoints = rpc_endpoints self.endpoint_details = endpoint_details def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.start_token = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRING: self.end_token = iprot.readString(); else: iprot.skip(ftype) elif fid == 3: if ftype == TType.LIST: self.endpoints = [] (_etype45, _size42) = iprot.readListBegin() for _i46 in xrange(_size42): _elem47 = iprot.readString(); self.endpoints.append(_elem47) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 4: if ftype == TType.LIST: self.rpc_endpoints = [] (_etype51, _size48) = iprot.readListBegin() for _i52 in xrange(_size48): _elem53 = iprot.readString(); self.rpc_endpoints.append(_elem53) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 5: if ftype == TType.LIST: self.endpoint_details = [] (_etype57, _size54) = iprot.readListBegin() for _i58 in xrange(_size54): _elem59 = EndpointDetails() _elem59.read(iprot) self.endpoint_details.append(_elem59) iprot.readListEnd() else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('TokenRange') if self.start_token is not None: oprot.writeFieldBegin('start_token', TType.STRING, 1) oprot.writeString(self.start_token) oprot.writeFieldEnd() if self.end_token is not None: oprot.writeFieldBegin('end_token', TType.STRING, 2) oprot.writeString(self.end_token) oprot.writeFieldEnd() if self.endpoints is not None: oprot.writeFieldBegin('endpoints', TType.LIST, 3) oprot.writeListBegin(TType.STRING, len(self.endpoints)) for iter60 in self.endpoints: oprot.writeString(iter60) oprot.writeListEnd() oprot.writeFieldEnd() if self.rpc_endpoints is not None: oprot.writeFieldBegin('rpc_endpoints', TType.LIST, 4) oprot.writeListBegin(TType.STRING, len(self.rpc_endpoints)) for iter61 in self.rpc_endpoints: oprot.writeString(iter61) oprot.writeListEnd() oprot.writeFieldEnd() if self.endpoint_details is not None: oprot.writeFieldBegin('endpoint_details', TType.LIST, 5) oprot.writeListBegin(TType.STRUCT, len(self.endpoint_details)) for iter62 in self.endpoint_details: iter62.write(oprot) oprot.writeListEnd() oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.start_token is None: raise TProtocol.TProtocolException(message='Required field start_token is unset!') if self.end_token is None: raise TProtocol.TProtocolException(message='Required field end_token is unset!') if self.endpoints is None: raise TProtocol.TProtocolException(message='Required field endpoints is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class AuthenticationRequest(object): """ Authentication requests can contain any data, dependent on the IAuthenticator used Attributes: - credentials """ thrift_spec = ( None, # 0 (1, TType.MAP, 'credentials', (TType.STRING,None,TType.STRING,None), None, ), # 1 ) def __init__(self, credentials=None,): self.credentials = credentials def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.MAP: self.credentials = {} (_ktype64, _vtype65, _size63 ) = iprot.readMapBegin() for _i67 in xrange(_size63): _key68 = iprot.readString(); _val69 = iprot.readString(); self.credentials[_key68] = _val69 iprot.readMapEnd() else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('AuthenticationRequest') if self.credentials is not None: oprot.writeFieldBegin('credentials', TType.MAP, 1) oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.credentials)) for kiter70,viter71 in self.credentials.items(): oprot.writeString(kiter70) oprot.writeString(viter71) oprot.writeMapEnd() oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.credentials is None: raise TProtocol.TProtocolException(message='Required field credentials is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class ColumnDef(object): """ Attributes: - name - validation_class - index_type - index_name - index_options """ thrift_spec = ( None, # 0 (1, TType.STRING, 'name', None, None, ), # 1 (2, TType.STRING, 'validation_class', None, None, ), # 2 (3, TType.I32, 'index_type', None, None, ), # 3 (4, TType.STRING, 'index_name', None, None, ), # 4 (5, TType.MAP, 'index_options', (TType.STRING,None,TType.STRING,None), None, ), # 5 ) def __init__(self, name=None, validation_class=None, index_type=None, index_name=None, index_options=None,): self.name = name self.validation_class = validation_class self.index_type = index_type self.index_name = index_name self.index_options = index_options def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.name = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRING: self.validation_class = iprot.readString(); else: iprot.skip(ftype) elif fid == 3: if ftype == TType.I32: self.index_type = iprot.readI32(); else: iprot.skip(ftype) elif fid == 4: if ftype == TType.STRING: self.index_name = iprot.readString(); else: iprot.skip(ftype) elif fid == 5: if ftype == TType.MAP: self.index_options = {} (_ktype73, _vtype74, _size72 ) = iprot.readMapBegin() for _i76 in xrange(_size72): _key77 = iprot.readString(); _val78 = iprot.readString(); self.index_options[_key77] = _val78 iprot.readMapEnd() else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('ColumnDef') if self.name is not None: oprot.writeFieldBegin('name', TType.STRING, 1) oprot.writeString(self.name) oprot.writeFieldEnd() if self.validation_class is not None: oprot.writeFieldBegin('validation_class', TType.STRING, 2) oprot.writeString(self.validation_class) oprot.writeFieldEnd() if self.index_type is not None: oprot.writeFieldBegin('index_type', TType.I32, 3) oprot.writeI32(self.index_type) oprot.writeFieldEnd() if self.index_name is not None: oprot.writeFieldBegin('index_name', TType.STRING, 4) oprot.writeString(self.index_name) oprot.writeFieldEnd() if self.index_options is not None: oprot.writeFieldBegin('index_options', TType.MAP, 5) oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.index_options)) for kiter79,viter80 in self.index_options.items(): oprot.writeString(kiter79) oprot.writeString(viter80) oprot.writeMapEnd() oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.name is None: raise TProtocol.TProtocolException(message='Required field name is unset!') if self.validation_class is None: raise TProtocol.TProtocolException(message='Required field validation_class is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class CfDef(object): """ Attributes: - keyspace - name - column_type - comparator_type - subcomparator_type - comment - read_repair_chance - column_metadata - gc_grace_seconds - default_validation_class - id - min_compaction_threshold - max_compaction_threshold - replicate_on_write - key_validation_class - key_alias - compaction_strategy - compaction_strategy_options - compression_options - bloom_filter_fp_chance - caching - dclocal_read_repair_chance - populate_io_cache_on_flush - row_cache_size: @deprecated - key_cache_size: @deprecated - row_cache_save_period_in_seconds: @deprecated - key_cache_save_period_in_seconds: @deprecated - memtable_flush_after_mins: @deprecated - memtable_throughput_in_mb: @deprecated - memtable_operations_in_millions: @deprecated - merge_shards_chance: @deprecated - row_cache_provider: @deprecated - row_cache_keys_to_save: @deprecated """ thrift_spec = ( None, # 0 (1, TType.STRING, 'keyspace', None, None, ), # 1 (2, TType.STRING, 'name', None, None, ), # 2 (3, TType.STRING, 'column_type', None, "Standard", ), # 3 None, # 4 (5, TType.STRING, 'comparator_type', None, "BytesType", ), # 5 (6, TType.STRING, 'subcomparator_type', None, None, ), # 6 None, # 7 (8, TType.STRING, 'comment', None, None, ), # 8 (9, TType.DOUBLE, 'row_cache_size', None, None, ), # 9 None, # 10 (11, TType.DOUBLE, 'key_cache_size', None, None, ), # 11 (12, TType.DOUBLE, 'read_repair_chance', None, None, ), # 12 (13, TType.LIST, 'column_metadata', (TType.STRUCT,(ColumnDef, ColumnDef.thrift_spec)), None, ), # 13 (14, TType.I32, 'gc_grace_seconds', None, None, ), # 14 (15, TType.STRING, 'default_validation_class', None, None, ), # 15 (16, TType.I32, 'id', None, None, ), # 16 (17, TType.I32, 'min_compaction_threshold', None, None, ), # 17 (18, TType.I32, 'max_compaction_threshold', None, None, ), # 18 (19, TType.I32, 'row_cache_save_period_in_seconds', None, None, ), # 19 (20, TType.I32, 'key_cache_save_period_in_seconds', None, None, ), # 20 (21, TType.I32, 'memtable_flush_after_mins', None, None, ), # 21 (22, TType.I32, 'memtable_throughput_in_mb', None, None, ), # 22 (23, TType.DOUBLE, 'memtable_operations_in_millions', None, None, ), # 23 (24, TType.BOOL, 'replicate_on_write', None, None, ), # 24 (25, TType.DOUBLE, 'merge_shards_chance', None, None, ), # 25 (26, TType.STRING, 'key_validation_class', None, None, ), # 26 (27, TType.STRING, 'row_cache_provider', None, None, ), # 27 (28, TType.STRING, 'key_alias', None, None, ), # 28 (29, TType.STRING, 'compaction_strategy', None, None, ), # 29 (30, TType.MAP, 'compaction_strategy_options', (TType.STRING,None,TType.STRING,None), None, ), # 30 (31, TType.I32, 'row_cache_keys_to_save', None, None, ), # 31 (32, TType.MAP, 'compression_options', (TType.STRING,None,TType.STRING,None), None, ), # 32 (33, TType.DOUBLE, 'bloom_filter_fp_chance', None, None, ), # 33 (34, TType.STRING, 'caching', None, "keys_only", ), # 34 None, # 35 None, # 36 (37, TType.DOUBLE, 'dclocal_read_repair_chance', None, 0, ), # 37 (38, TType.BOOL, 'populate_io_cache_on_flush', None, None, ), # 38 ) def __init__(self, keyspace=None, name=None, column_type=thrift_spec[3][4], comparator_type=thrift_spec[5][4], subcomparator_type=None, comment=None, read_repair_chance=None, column_metadata=None, gc_grace_seconds=None, default_validation_class=None, id=None, min_compaction_threshold=None, max_compaction_threshold=None, replicate_on_write=None, key_validation_class=None, key_alias=None, compaction_strategy=None, compaction_strategy_options=None, compression_options=None, bloom_filter_fp_chance=None, caching=thrift_spec[34][4], dclocal_read_repair_chance=thrift_spec[37][4], populate_io_cache_on_flush=None, row_cache_size=None, key_cache_size=None, row_cache_save_period_in_seconds=None, key_cache_save_period_in_seconds=None, memtable_flush_after_mins=None, memtable_throughput_in_mb=None, memtable_operations_in_millions=None, merge_shards_chance=None, row_cache_provider=None, row_cache_keys_to_save=None,): self.keyspace = keyspace self.name = name self.column_type = column_type self.comparator_type = comparator_type self.subcomparator_type = subcomparator_type self.comment = comment self.read_repair_chance = read_repair_chance self.column_metadata = column_metadata self.gc_grace_seconds = gc_grace_seconds self.default_validation_class = default_validation_class self.id = id self.min_compaction_threshold = min_compaction_threshold self.max_compaction_threshold = max_compaction_threshold self.replicate_on_write = replicate_on_write self.key_validation_class = key_validation_class self.key_alias = key_alias self.compaction_strategy = compaction_strategy self.compaction_strategy_options = compaction_strategy_options self.compression_options = compression_options self.bloom_filter_fp_chance = bloom_filter_fp_chance self.caching = caching self.dclocal_read_repair_chance = dclocal_read_repair_chance self.populate_io_cache_on_flush = populate_io_cache_on_flush self.row_cache_size = row_cache_size self.key_cache_size = key_cache_size self.row_cache_save_period_in_seconds = row_cache_save_period_in_seconds self.key_cache_save_period_in_seconds = key_cache_save_period_in_seconds self.memtable_flush_after_mins = memtable_flush_after_mins self.memtable_throughput_in_mb = memtable_throughput_in_mb self.memtable_operations_in_millions = memtable_operations_in_millions self.merge_shards_chance = merge_shards_chance self.row_cache_provider = row_cache_provider self.row_cache_keys_to_save = row_cache_keys_to_save def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.keyspace = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRING: self.name = iprot.readString(); else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRING: self.column_type = iprot.readString(); else: iprot.skip(ftype) elif fid == 5: if ftype == TType.STRING: self.comparator_type = iprot.readString(); else: iprot.skip(ftype) elif fid == 6: if ftype == TType.STRING: self.subcomparator_type = iprot.readString(); else: iprot.skip(ftype) elif fid == 8: if ftype == TType.STRING: self.comment = iprot.readString(); else: iprot.skip(ftype) elif fid == 12: if ftype == TType.DOUBLE: self.read_repair_chance = iprot.readDouble(); else: iprot.skip(ftype) elif fid == 13: if ftype == TType.LIST: self.column_metadata = [] (_etype84, _size81) = iprot.readListBegin() for _i85 in xrange(_size81): _elem86 = ColumnDef() _elem86.read(iprot) self.column_metadata.append(_elem86) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 14: if ftype == TType.I32: self.gc_grace_seconds = iprot.readI32(); else: iprot.skip(ftype) elif fid == 15: if ftype == TType.STRING: self.default_validation_class = iprot.readString(); else: iprot.skip(ftype) elif fid == 16: if ftype == TType.I32: self.id = iprot.readI32(); else: iprot.skip(ftype) elif fid == 17: if ftype == TType.I32: self.min_compaction_threshold = iprot.readI32(); else: iprot.skip(ftype) elif fid == 18: if ftype == TType.I32: self.max_compaction_threshold = iprot.readI32(); else: iprot.skip(ftype) elif fid == 24: if ftype == TType.BOOL: self.replicate_on_write = iprot.readBool(); else: iprot.skip(ftype) elif fid == 26: if ftype == TType.STRING: self.key_validation_class = iprot.readString(); else: iprot.skip(ftype) elif fid == 28: if ftype == TType.STRING: self.key_alias = iprot.readString(); else: iprot.skip(ftype) elif fid == 29: if ftype == TType.STRING: self.compaction_strategy = iprot.readString(); else: iprot.skip(ftype) elif fid == 30: if ftype == TType.MAP: self.compaction_strategy_options = {} (_ktype88, _vtype89, _size87 ) = iprot.readMapBegin() for _i91 in xrange(_size87): _key92 = iprot.readString(); _val93 = iprot.readString(); self.compaction_strategy_options[_key92] = _val93 iprot.readMapEnd() else: iprot.skip(ftype) elif fid == 32: if ftype == TType.MAP: self.compression_options = {} (_ktype95, _vtype96, _size94 ) = iprot.readMapBegin() for _i98 in xrange(_size94): _key99 = iprot.readString(); _val100 = iprot.readString(); self.compression_options[_key99] = _val100 iprot.readMapEnd() else: iprot.skip(ftype) elif fid == 33: if ftype == TType.DOUBLE: self.bloom_filter_fp_chance = iprot.readDouble(); else: iprot.skip(ftype) elif fid == 34: if ftype == TType.STRING: self.caching = iprot.readString(); else: iprot.skip(ftype) elif fid == 37: if ftype == TType.DOUBLE: self.dclocal_read_repair_chance = iprot.readDouble(); else: iprot.skip(ftype) elif fid == 38: if ftype == TType.BOOL: self.populate_io_cache_on_flush = iprot.readBool(); else: iprot.skip(ftype) elif fid == 9: if ftype == TType.DOUBLE: self.row_cache_size = iprot.readDouble(); else: iprot.skip(ftype) elif fid == 11: if ftype == TType.DOUBLE: self.key_cache_size = iprot.readDouble(); else: iprot.skip(ftype) elif fid == 19: if ftype == TType.I32: self.row_cache_save_period_in_seconds = iprot.readI32(); else: iprot.skip(ftype) elif fid == 20: if ftype == TType.I32: self.key_cache_save_period_in_seconds = iprot.readI32(); else: iprot.skip(ftype) elif fid == 21: if ftype == TType.I32: self.memtable_flush_after_mins = iprot.readI32(); else: iprot.skip(ftype) elif fid == 22: if ftype == TType.I32: self.memtable_throughput_in_mb = iprot.readI32(); else: iprot.skip(ftype) elif fid == 23: if ftype == TType.DOUBLE: self.memtable_operations_in_millions = iprot.readDouble(); else: iprot.skip(ftype) elif fid == 25: if ftype == TType.DOUBLE: self.merge_shards_chance = iprot.readDouble(); else: iprot.skip(ftype) elif fid == 27: if ftype == TType.STRING: self.row_cache_provider = iprot.readString(); else: iprot.skip(ftype) elif fid == 31: if ftype == TType.I32: self.row_cache_keys_to_save = iprot.readI32(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('CfDef') if self.keyspace is not None: oprot.writeFieldBegin('keyspace', TType.STRING, 1) oprot.writeString(self.keyspace) oprot.writeFieldEnd() if self.name is not None: oprot.writeFieldBegin('name', TType.STRING, 2) oprot.writeString(self.name) oprot.writeFieldEnd() if self.column_type is not None: oprot.writeFieldBegin('column_type', TType.STRING, 3) oprot.writeString(self.column_type) oprot.writeFieldEnd() if self.comparator_type is not None: oprot.writeFieldBegin('comparator_type', TType.STRING, 5) oprot.writeString(self.comparator_type) oprot.writeFieldEnd() if self.subcomparator_type is not None: oprot.writeFieldBegin('subcomparator_type', TType.STRING, 6) oprot.writeString(self.subcomparator_type) oprot.writeFieldEnd() if self.comment is not None: oprot.writeFieldBegin('comment', TType.STRING, 8) oprot.writeString(self.comment) oprot.writeFieldEnd() if self.row_cache_size is not None: oprot.writeFieldBegin('row_cache_size', TType.DOUBLE, 9) oprot.writeDouble(self.row_cache_size) oprot.writeFieldEnd() if self.key_cache_size is not None: oprot.writeFieldBegin('key_cache_size', TType.DOUBLE, 11) oprot.writeDouble(self.key_cache_size) oprot.writeFieldEnd() if self.read_repair_chance is not None: oprot.writeFieldBegin('read_repair_chance', TType.DOUBLE, 12) oprot.writeDouble(self.read_repair_chance) oprot.writeFieldEnd() if self.column_metadata is not None: oprot.writeFieldBegin('column_metadata', TType.LIST, 13) oprot.writeListBegin(TType.STRUCT, len(self.column_metadata)) for iter101 in self.column_metadata: iter101.write(oprot) oprot.writeListEnd() oprot.writeFieldEnd() if self.gc_grace_seconds is not None: oprot.writeFieldBegin('gc_grace_seconds', TType.I32, 14) oprot.writeI32(self.gc_grace_seconds) oprot.writeFieldEnd() if self.default_validation_class is not None: oprot.writeFieldBegin('default_validation_class', TType.STRING, 15) oprot.writeString(self.default_validation_class) oprot.writeFieldEnd() if self.id is not None: oprot.writeFieldBegin('id', TType.I32, 16) oprot.writeI32(self.id) oprot.writeFieldEnd() if self.min_compaction_threshold is not None: oprot.writeFieldBegin('min_compaction_threshold', TType.I32, 17) oprot.writeI32(self.min_compaction_threshold) oprot.writeFieldEnd() if self.max_compaction_threshold is not None: oprot.writeFieldBegin('max_compaction_threshold', TType.I32, 18) oprot.writeI32(self.max_compaction_threshold) oprot.writeFieldEnd() if self.row_cache_save_period_in_seconds is not None: oprot.writeFieldBegin('row_cache_save_period_in_seconds', TType.I32, 19) oprot.writeI32(self.row_cache_save_period_in_seconds) oprot.writeFieldEnd() if self.key_cache_save_period_in_seconds is not None: oprot.writeFieldBegin('key_cache_save_period_in_seconds', TType.I32, 20) oprot.writeI32(self.key_cache_save_period_in_seconds) oprot.writeFieldEnd() if self.memtable_flush_after_mins is not None: oprot.writeFieldBegin('memtable_flush_after_mins', TType.I32, 21) oprot.writeI32(self.memtable_flush_after_mins) oprot.writeFieldEnd() if self.memtable_throughput_in_mb is not None: oprot.writeFieldBegin('memtable_throughput_in_mb', TType.I32, 22) oprot.writeI32(self.memtable_throughput_in_mb) oprot.writeFieldEnd() if self.memtable_operations_in_millions is not None: oprot.writeFieldBegin('memtable_operations_in_millions', TType.DOUBLE, 23) oprot.writeDouble(self.memtable_operations_in_millions) oprot.writeFieldEnd() if self.replicate_on_write is not None: oprot.writeFieldBegin('replicate_on_write', TType.BOOL, 24) oprot.writeBool(self.replicate_on_write) oprot.writeFieldEnd() if self.merge_shards_chance is not None: oprot.writeFieldBegin('merge_shards_chance', TType.DOUBLE, 25) oprot.writeDouble(self.merge_shards_chance) oprot.writeFieldEnd() if self.key_validation_class is not None: oprot.writeFieldBegin('key_validation_class', TType.STRING, 26) oprot.writeString(self.key_validation_class) oprot.writeFieldEnd() if self.row_cache_provider is not None: oprot.writeFieldBegin('row_cache_provider', TType.STRING, 27) oprot.writeString(self.row_cache_provider) oprot.writeFieldEnd() if self.key_alias is not None: oprot.writeFieldBegin('key_alias', TType.STRING, 28) oprot.writeString(self.key_alias) oprot.writeFieldEnd() if self.compaction_strategy is not None: oprot.writeFieldBegin('compaction_strategy', TType.STRING, 29) oprot.writeString(self.compaction_strategy) oprot.writeFieldEnd() if self.compaction_strategy_options is not None: oprot.writeFieldBegin('compaction_strategy_options', TType.MAP, 30) oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.compaction_strategy_options)) for kiter102,viter103 in self.compaction_strategy_options.items(): oprot.writeString(kiter102) oprot.writeString(viter103) oprot.writeMapEnd() oprot.writeFieldEnd() if self.row_cache_keys_to_save is not None: oprot.writeFieldBegin('row_cache_keys_to_save', TType.I32, 31) oprot.writeI32(self.row_cache_keys_to_save) oprot.writeFieldEnd() if self.compression_options is not None: oprot.writeFieldBegin('compression_options', TType.MAP, 32) oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.compression_options)) for kiter104,viter105 in self.compression_options.items(): oprot.writeString(kiter104) oprot.writeString(viter105) oprot.writeMapEnd() oprot.writeFieldEnd() if self.bloom_filter_fp_chance is not None: oprot.writeFieldBegin('bloom_filter_fp_chance', TType.DOUBLE, 33) oprot.writeDouble(self.bloom_filter_fp_chance) oprot.writeFieldEnd() if self.caching is not None: oprot.writeFieldBegin('caching', TType.STRING, 34) oprot.writeString(self.caching) oprot.writeFieldEnd() if self.dclocal_read_repair_chance is not None: oprot.writeFieldBegin('dclocal_read_repair_chance', TType.DOUBLE, 37) oprot.writeDouble(self.dclocal_read_repair_chance) oprot.writeFieldEnd() if self.populate_io_cache_on_flush is not None: oprot.writeFieldBegin('populate_io_cache_on_flush', TType.BOOL, 38) oprot.writeBool(self.populate_io_cache_on_flush) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.keyspace is None: raise TProtocol.TProtocolException(message='Required field keyspace is unset!') if self.name is None: raise TProtocol.TProtocolException(message='Required field name is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class KsDef(object): """ Attributes: - name - strategy_class - strategy_options - replication_factor: @deprecated ignored - cf_defs - durable_writes """ thrift_spec = ( None, # 0 (1, TType.STRING, 'name', None, None, ), # 1 (2, TType.STRING, 'strategy_class', None, None, ), # 2 (3, TType.MAP, 'strategy_options', (TType.STRING,None,TType.STRING,None), None, ), # 3 (4, TType.I32, 'replication_factor', None, None, ), # 4 (5, TType.LIST, 'cf_defs', (TType.STRUCT,(CfDef, CfDef.thrift_spec)), None, ), # 5 (6, TType.BOOL, 'durable_writes', None, True, ), # 6 ) def __init__(self, name=None, strategy_class=None, strategy_options=None, replication_factor=None, cf_defs=None, durable_writes=thrift_spec[6][4],): self.name = name self.strategy_class = strategy_class self.strategy_options = strategy_options self.replication_factor = replication_factor self.cf_defs = cf_defs self.durable_writes = durable_writes def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.name = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRING: self.strategy_class = iprot.readString(); else: iprot.skip(ftype) elif fid == 3: if ftype == TType.MAP: self.strategy_options = {} (_ktype107, _vtype108, _size106 ) = iprot.readMapBegin() for _i110 in xrange(_size106): _key111 = iprot.readString(); _val112 = iprot.readString(); self.strategy_options[_key111] = _val112 iprot.readMapEnd() else: iprot.skip(ftype) elif fid == 4: if ftype == TType.I32: self.replication_factor = iprot.readI32(); else: iprot.skip(ftype) elif fid == 5: if ftype == TType.LIST: self.cf_defs = [] (_etype116, _size113) = iprot.readListBegin() for _i117 in xrange(_size113): _elem118 = CfDef() _elem118.read(iprot) self.cf_defs.append(_elem118) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 6: if ftype == TType.BOOL: self.durable_writes = iprot.readBool(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('KsDef') if self.name is not None: oprot.writeFieldBegin('name', TType.STRING, 1) oprot.writeString(self.name) oprot.writeFieldEnd() if self.strategy_class is not None: oprot.writeFieldBegin('strategy_class', TType.STRING, 2) oprot.writeString(self.strategy_class) oprot.writeFieldEnd() if self.strategy_options is not None: oprot.writeFieldBegin('strategy_options', TType.MAP, 3) oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.strategy_options)) for kiter119,viter120 in self.strategy_options.items(): oprot.writeString(kiter119) oprot.writeString(viter120) oprot.writeMapEnd() oprot.writeFieldEnd() if self.replication_factor is not None: oprot.writeFieldBegin('replication_factor', TType.I32, 4) oprot.writeI32(self.replication_factor) oprot.writeFieldEnd() if self.cf_defs is not None: oprot.writeFieldBegin('cf_defs', TType.LIST, 5) oprot.writeListBegin(TType.STRUCT, len(self.cf_defs)) for iter121 in self.cf_defs: iter121.write(oprot) oprot.writeListEnd() oprot.writeFieldEnd() if self.durable_writes is not None: oprot.writeFieldBegin('durable_writes', TType.BOOL, 6) oprot.writeBool(self.durable_writes) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.name is None: raise TProtocol.TProtocolException(message='Required field name is unset!') if self.strategy_class is None: raise TProtocol.TProtocolException(message='Required field strategy_class is unset!') if self.cf_defs is None: raise TProtocol.TProtocolException(message='Required field cf_defs is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class CqlRow(object): """ Row returned from a CQL query Attributes: - key - columns """ thrift_spec = ( None, # 0 (1, TType.STRING, 'key', None, None, ), # 1 (2, TType.LIST, 'columns', (TType.STRUCT,(Column, Column.thrift_spec)), None, ), # 2 ) def __init__(self, key=None, columns=None,): self.key = key self.columns = columns def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.key = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.LIST: self.columns = [] (_etype125, _size122) = iprot.readListBegin() for _i126 in xrange(_size122): _elem127 = Column() _elem127.read(iprot) self.columns.append(_elem127) iprot.readListEnd() else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('CqlRow') if self.key is not None: oprot.writeFieldBegin('key', TType.STRING, 1) oprot.writeString(self.key) oprot.writeFieldEnd() if self.columns is not None: oprot.writeFieldBegin('columns', TType.LIST, 2) oprot.writeListBegin(TType.STRUCT, len(self.columns)) for iter128 in self.columns: iter128.write(oprot) oprot.writeListEnd() oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.key is None: raise TProtocol.TProtocolException(message='Required field key is unset!') if self.columns is None: raise TProtocol.TProtocolException(message='Required field columns is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class CqlMetadata(object): """ Attributes: - name_types - value_types - default_name_type - default_value_type """ thrift_spec = ( None, # 0 (1, TType.MAP, 'name_types', (TType.STRING,None,TType.STRING,None), None, ), # 1 (2, TType.MAP, 'value_types', (TType.STRING,None,TType.STRING,None), None, ), # 2 (3, TType.STRING, 'default_name_type', None, None, ), # 3 (4, TType.STRING, 'default_value_type', None, None, ), # 4 ) def __init__(self, name_types=None, value_types=None, default_name_type=None, default_value_type=None,): self.name_types = name_types self.value_types = value_types self.default_name_type = default_name_type self.default_value_type = default_value_type def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.MAP: self.name_types = {} (_ktype130, _vtype131, _size129 ) = iprot.readMapBegin() for _i133 in xrange(_size129): _key134 = iprot.readString(); _val135 = iprot.readString(); self.name_types[_key134] = _val135 iprot.readMapEnd() else: iprot.skip(ftype) elif fid == 2: if ftype == TType.MAP: self.value_types = {} (_ktype137, _vtype138, _size136 ) = iprot.readMapBegin() for _i140 in xrange(_size136): _key141 = iprot.readString(); _val142 = iprot.readString(); self.value_types[_key141] = _val142 iprot.readMapEnd() else: iprot.skip(ftype) elif fid == 3: if ftype == TType.STRING: self.default_name_type = iprot.readString(); else: iprot.skip(ftype) elif fid == 4: if ftype == TType.STRING: self.default_value_type = iprot.readString(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('CqlMetadata') if self.name_types is not None: oprot.writeFieldBegin('name_types', TType.MAP, 1) oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.name_types)) for kiter143,viter144 in self.name_types.items(): oprot.writeString(kiter143) oprot.writeString(viter144) oprot.writeMapEnd() oprot.writeFieldEnd() if self.value_types is not None: oprot.writeFieldBegin('value_types', TType.MAP, 2) oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.value_types)) for kiter145,viter146 in self.value_types.items(): oprot.writeString(kiter145) oprot.writeString(viter146) oprot.writeMapEnd() oprot.writeFieldEnd() if self.default_name_type is not None: oprot.writeFieldBegin('default_name_type', TType.STRING, 3) oprot.writeString(self.default_name_type) oprot.writeFieldEnd() if self.default_value_type is not None: oprot.writeFieldBegin('default_value_type', TType.STRING, 4) oprot.writeString(self.default_value_type) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.name_types is None: raise TProtocol.TProtocolException(message='Required field name_types is unset!') if self.value_types is None: raise TProtocol.TProtocolException(message='Required field value_types is unset!') if self.default_name_type is None: raise TProtocol.TProtocolException(message='Required field default_name_type is unset!') if self.default_value_type is None: raise TProtocol.TProtocolException(message='Required field default_value_type is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class CqlResult(object): """ Attributes: - type - rows - num - schema """ thrift_spec = ( None, # 0 (1, TType.I32, 'type', None, None, ), # 1 (2, TType.LIST, 'rows', (TType.STRUCT,(CqlRow, CqlRow.thrift_spec)), None, ), # 2 (3, TType.I32, 'num', None, None, ), # 3 (4, TType.STRUCT, 'schema', (CqlMetadata, CqlMetadata.thrift_spec), None, ), # 4 ) def __init__(self, type=None, rows=None, num=None, schema=None,): self.type = type self.rows = rows self.num = num self.schema = schema def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.I32: self.type = iprot.readI32(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.LIST: self.rows = [] (_etype150, _size147) = iprot.readListBegin() for _i151 in xrange(_size147): _elem152 = CqlRow() _elem152.read(iprot) self.rows.append(_elem152) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 3: if ftype == TType.I32: self.num = iprot.readI32(); else: iprot.skip(ftype) elif fid == 4: if ftype == TType.STRUCT: self.schema = CqlMetadata() self.schema.read(iprot) else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('CqlResult') if self.type is not None: oprot.writeFieldBegin('type', TType.I32, 1) oprot.writeI32(self.type) oprot.writeFieldEnd() if self.rows is not None: oprot.writeFieldBegin('rows', TType.LIST, 2) oprot.writeListBegin(TType.STRUCT, len(self.rows)) for iter153 in self.rows: iter153.write(oprot) oprot.writeListEnd() oprot.writeFieldEnd() if self.num is not None: oprot.writeFieldBegin('num', TType.I32, 3) oprot.writeI32(self.num) oprot.writeFieldEnd() if self.schema is not None: oprot.writeFieldBegin('schema', TType.STRUCT, 4) self.schema.write(oprot) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.type is None: raise TProtocol.TProtocolException(message='Required field type is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class CqlPreparedResult(object): """ Attributes: - itemId - count - variable_types - variable_names """ thrift_spec = ( None, # 0 (1, TType.I32, 'itemId', None, None, ), # 1 (2, TType.I32, 'count', None, None, ), # 2 (3, TType.LIST, 'variable_types', (TType.STRING,None), None, ), # 3 (4, TType.LIST, 'variable_names', (TType.STRING,None), None, ), # 4 ) def __init__(self, itemId=None, count=None, variable_types=None, variable_names=None,): self.itemId = itemId self.count = count self.variable_types = variable_types self.variable_names = variable_names def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.I32: self.itemId = iprot.readI32(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.I32: self.count = iprot.readI32(); else: iprot.skip(ftype) elif fid == 3: if ftype == TType.LIST: self.variable_types = [] (_etype157, _size154) = iprot.readListBegin() for _i158 in xrange(_size154): _elem159 = iprot.readString(); self.variable_types.append(_elem159) iprot.readListEnd() else: iprot.skip(ftype) elif fid == 4: if ftype == TType.LIST: self.variable_names = [] (_etype163, _size160) = iprot.readListBegin() for _i164 in xrange(_size160): _elem165 = iprot.readString(); self.variable_names.append(_elem165) iprot.readListEnd() else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('CqlPreparedResult') if self.itemId is not None: oprot.writeFieldBegin('itemId', TType.I32, 1) oprot.writeI32(self.itemId) oprot.writeFieldEnd() if self.count is not None: oprot.writeFieldBegin('count', TType.I32, 2) oprot.writeI32(self.count) oprot.writeFieldEnd() if self.variable_types is not None: oprot.writeFieldBegin('variable_types', TType.LIST, 3) oprot.writeListBegin(TType.STRING, len(self.variable_types)) for iter166 in self.variable_types: oprot.writeString(iter166) oprot.writeListEnd() oprot.writeFieldEnd() if self.variable_names is not None: oprot.writeFieldBegin('variable_names', TType.LIST, 4) oprot.writeListBegin(TType.STRING, len(self.variable_names)) for iter167 in self.variable_names: oprot.writeString(iter167) oprot.writeListEnd() oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.itemId is None: raise TProtocol.TProtocolException(message='Required field itemId is unset!') if self.count is None: raise TProtocol.TProtocolException(message='Required field count is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) class CfSplit(object): """ Represents input splits used by hadoop ColumnFamilyRecordReaders Attributes: - start_token - end_token - row_count """ thrift_spec = ( None, # 0 (1, TType.STRING, 'start_token', None, None, ), # 1 (2, TType.STRING, 'end_token', None, None, ), # 2 (3, TType.I64, 'row_count', None, None, ), # 3 ) def __init__(self, start_token=None, end_token=None, row_count=None,): self.start_token = start_token self.end_token = end_token self.row_count = row_count def read(self, iprot): if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None: fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec)) return iprot.readStructBegin() while True: (fname, ftype, fid) = iprot.readFieldBegin() if ftype == TType.STOP: break if fid == 1: if ftype == TType.STRING: self.start_token = iprot.readString(); else: iprot.skip(ftype) elif fid == 2: if ftype == TType.STRING: self.end_token = iprot.readString(); else: iprot.skip(ftype) elif fid == 3: if ftype == TType.I64: self.row_count = iprot.readI64(); else: iprot.skip(ftype) else: iprot.skip(ftype) iprot.readFieldEnd() iprot.readStructEnd() def write(self, oprot): if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None: oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec))) return oprot.writeStructBegin('CfSplit') if self.start_token is not None: oprot.writeFieldBegin('start_token', TType.STRING, 1) oprot.writeString(self.start_token) oprot.writeFieldEnd() if self.end_token is not None: oprot.writeFieldBegin('end_token', TType.STRING, 2) oprot.writeString(self.end_token) oprot.writeFieldEnd() if self.row_count is not None: oprot.writeFieldBegin('row_count', TType.I64, 3) oprot.writeI64(self.row_count) oprot.writeFieldEnd() oprot.writeFieldStop() oprot.writeStructEnd() def validate(self): if self.start_token is None: raise TProtocol.TProtocolException(message='Required field start_token is unset!') if self.end_token is None: raise TProtocol.TProtocolException(message='Required field end_token is unset!') if self.row_count is None: raise TProtocol.TProtocolException(message='Required field row_count is unset!') return def __repr__(self): L = ['%s=%r' % (key, value) for key, value in self.__dict__.iteritems()] return '%s(%s)' % (self.__class__.__name__, ', '.join(L)) def __eq__(self, other): return isinstance(other, self.__class__) and self.__dict__ == other.__dict__ def __ne__(self, other): return not (self == other) pycassa-1.11.2.1/pycassa/columnfamily.py000066400000000000000000001366541303744607500201420ustar00rootroot00000000000000""" Provides an abstraction of Cassandra's data model to allow for easy manipulation of data inside Cassandra. .. seealso:: :mod:`pycassa.columnfamilymap` """ import time import struct from UserDict import DictMixin from pycassa.cassandra.ttypes import Column, ColumnOrSuperColumn,\ ColumnParent, ColumnPath, ConsistencyLevel, NotFoundException,\ SlicePredicate, SliceRange, SuperColumn, KeyRange,\ IndexExpression, IndexClause, CounterColumn, Mutation import pycassa.marshal as marshal import pycassa.types as types from pycassa.batch import CfMutator try: from collections import OrderedDict except ImportError: from pycassa.util import OrderedDict # NOQA __all__ = ['gm_timestamp', 'ColumnFamily', 'PooledColumnFamily'] class ColumnValidatorDict(DictMixin): def __init__(self, other_dict={}, name_packer=None, name_unpacker=None): self.name_packer = name_packer or (lambda x: x) self.name_unpacker = name_unpacker or (lambda x: x) self.type_map = {} self.packers = {} self.unpackers = {} for item, value in other_dict.items(): packed_item = self.name_packer(item) self[packed_item] = value def __getitem__(self, item): packed_item = self.name_packer(item) return self.type_map[packed_item] def __setitem__(self, item, value): packed_item = self.name_packer(item) if isinstance(value, types.CassandraType): self.type_map[packed_item] = value self.packers[packed_item] = value.pack self.unpackers[packed_item] = value.unpack else: self.type_map[packed_item] = marshal.extract_type_name(value) self.packers[packed_item] = marshal.packer_for(value) self.unpackers[packed_item] = marshal.unpacker_for(value) def __delitem__(self, item): packed_item = self.name_packer(item) del self.type_map[packed_item] del self.packers[packed_item] del self.unpackers[packed_item] def keys(self): return map(self.name_unpacker, self.type_map.keys()) def gm_timestamp(): """ Returns the number of microseconds since the Unix Epoch. """ return int(time.time() * 1e6) class ColumnFamily(object): """ An abstraction of a Cassandra column family or super column family. Operations on this, such as :meth:`get` or :meth:`insert` will get data from or insert data into the corresponding Cassandra column family. """ buffer_size = 1024 """ When calling :meth:`get_range()` or :meth:`get_indexed_slices()`, the intermediate results need to be buffered if we are fetching many rows, otherwise performance may suffer and the Cassandra server may overallocate memory and fail. This is the size of that buffer in number of rows. The default is 1024. """ column_buffer_size = 1024 """ The number of columns fetched at once for :meth:`xget()` """ read_consistency_level = ConsistencyLevel.ONE """ The default consistency level for every read operation, such as :meth:`get` or :meth:`get_range`. This may be overridden per-operation. This should be an instance of :class:`~pycassa.cassandra.ttypes.ConsistencyLevel`. The default level is ``ONE``. """ write_consistency_level = ConsistencyLevel.ONE """ The default consistency level for every write operation, such as :meth:`insert` or :meth:`remove`. This may be overridden per-operation. This should be an instance of :class:`.~pycassa.cassandra.ttypes.ConsistencyLevel`. The default level is ``ONE``. """ timestamp = gm_timestamp """ Each :meth:`insert()` or :meth:`remove` sends a timestamp with every column. This attribute is a function that is used to get this timestamp when needed. The default function is :meth:`gm_timestamp()`.""" dict_class = OrderedDict """ Results are returned as dictionaries. By default, python 2.7's :class:`collections.OrderedDict` is used if available, otherwise :class:`~pycassa.util.OrderedDict` is used so that order is maintained. A different class, such as :class:`dict`, may be instead by used setting this. """ autopack_names = True """ Controls whether column names are automatically converted to or from their natural type to the binary string format that Cassandra uses. The data type used is controlled by :attr:`column_name_class` for column names and :attr:`super_column_name_class` for super column names. By default, this is :const:`True`. """ autopack_values = True """ Whether column values are automatically converted to or from their natural type to the binary string format that Cassandra uses. The data type used is controlled by :attr:`default_validation_class` and :attr:`column_validators`. By default, this is :const:`True`. """ autopack_keys = True """ Whether row keys are automatically converted to or from their natural type to the binary string format that Cassandra uses. The data type used is controlled by :attr:`key_validation_class`. By default, this is :const:`True`. """ retry_counter_mutations = False """ Whether to retry failed counter mutations. Counter mutations are not idempotent so retrying could result in double counting. By default, this is :const:`False`. .. versionadded:: 1.5.0 """ def _set_column_name_class(self, t): if isinstance(t, types.CassandraType): self._column_name_class = t self._name_packer = t.pack self._name_unpacker = t.unpack else: self._column_name_class = marshal.extract_type_name(t) self._name_packer = marshal.packer_for(t) self._name_unpacker = marshal.unpacker_for(t) def _get_column_name_class(self): return self._column_name_class column_name_class = property(_get_column_name_class, _set_column_name_class) """ The data type of column names, which pycassa will use to determine how to pack and unpack them. This is set automatically by inspecting the column family's ``comparator_type``, but it may also be set manually if you want autopacking behavior without setting a ``comparator_type``. Options include an instance of any class in :mod:`pycassa.types`, such as ``LongType()``. """ def _set_super_column_name_class(self, t): if isinstance(t, types.CassandraType): self._super_column_name_class = t self._super_name_packer = t.pack self._super_name_unpacker = t.unpack else: self._super_column_name_class = marshal.extract_type_name(t) self._super_name_packer = marshal.packer_for(t) self._super_name_unpacker = marshal.unpacker_for(t) def _get_super_column_name_class(self): return self._super_column_name_class super_column_name_class = property(_get_super_column_name_class, _set_super_column_name_class) """ Like :attr:`column_name_class`, but for super column names. """ def _set_default_validation_class(self, t): if isinstance(t, types.CassandraType): self._default_validation_class = t self._default_value_packer = t.pack self._default_value_unpacker = t.unpack self._have_counters = isinstance(t, types.CounterColumnType) else: self._default_validation_class = marshal.extract_type_name(t) self._default_value_packer = marshal.packer_for(t) self._default_value_unpacker = marshal.unpacker_for(t) self._have_counters = self._default_validation_class == "CounterColumnType" if not self.super: if self._have_counters: def _make_counter_cosc(name, value, timestamp, ttl): return ColumnOrSuperColumn(counter_column=CounterColumn(name, value)) self._make_cosc = _make_counter_cosc else: def _make_normal_cosc(name, value, timestamp, ttl): return ColumnOrSuperColumn(Column(name, value, timestamp, ttl)) self._make_cosc = _make_normal_cosc else: if self._have_counters: def _make_column(name, value, timestamp, ttl): return CounterColumn(name, value) self._make_column = _make_column def _make_counter_super_cosc(scol_name, subcols): return ColumnOrSuperColumn(counter_super_column=(SuperColumn(scol_name, subcols))) self._make_cosc = _make_counter_super_cosc else: self._make_column = Column def _make_super_cosc(scol_name, subcols): return ColumnOrSuperColumn(super_column=(SuperColumn(scol_name, subcols))) self._make_cosc = _make_super_cosc def _get_default_validation_class(self): return self._default_validation_class default_validation_class = property(_get_default_validation_class, _set_default_validation_class) """ The default data type of column values, which pycassa will use to determine how to pack and unpack them. This is set automatically by inspecting the column family's ``default_validation_class``, but it may also be set manually if you want autopacking behavior without setting a ``default_validation_class``. Options include an instance of any class in :mod:`pycassa.types`, such as ``LongType()``. """ @property def _allow_retries(self): return not self._have_counters or self.retry_counter_mutations def _set_column_validators(self, other_dict): self._column_validators = ColumnValidatorDict(other_dict, self._pack_name, self._unpack_name) def _get_column_validators(self): return self._column_validators column_validators = property(_get_column_validators, _set_column_validators) """ Like :attr:`default_validation_class`, but is a :class:`dict` mapping individual columns to types. """ def _set_key_validation_class(self, t): if isinstance(t, types.CassandraType): self._key_validation_class = t self._key_packer = t.pack self._key_unpacker = t.unpack else: self._key_validation_class = marshal.extract_type_name(t) self._key_packer = marshal.packer_for(t) self._key_unpacker = marshal.unpacker_for(t) def _get_key_validation_class(self): return self._key_validation_class key_validation_class = property(_get_key_validation_class, _set_key_validation_class) """ The data type of row keys, which pycassa will use to determine how to pack and unpack them. This is set automatically by inspecting the column family's ``key_validation_class`` (which only exists in Cassandra 0.8 or greater), but may be set manually if you want the autopacking behavior without setting a ``key_validation_class`` or if you are using Cassandra 0.7. Options include an instance of any class in :mod:`pycassa.types`, such as ``LongType()``. """ def __init__(self, pool, column_family, **kwargs): """ `pool` is a :class:`~pycassa.pool.ConnectionPool` that the column family will use for all operations. A connection is drawn from the pool before each operations and is returned afterwards. `column_family` should be the name of the column family that you want to use in Cassandra. Note that the keyspace to be used is determined by the pool. """ self.pool = pool self.column_family = column_family self.timestamp = gm_timestamp self.load_schema() recognized_kwargs = ("buffer_size", "read_consistency_level", "write_consistency_level", "timestamp", "dict_class", "buffer_size", "autopack_names", "autopack_values", "autopack_keys", "retry_counter_mutations") for k, v in kwargs.iteritems(): if k in recognized_kwargs: setattr(self, k, v) else: raise TypeError( "ColumnFamily.__init__() got an unexpected keyword " "argument '%s'" % (k,)) def load_schema(self): """ Loads the schema definition for this column family from Cassandra and updates comparator and validation classes if neccessary. """ ksdef = self.pool.execute('get_keyspace_description', use_dict_for_col_metadata=True) try: self._cfdef = ksdef[self.column_family] except KeyError: nfe = NotFoundException() nfe.why = 'Column family %s not found.' % self.column_family raise nfe self.super = self._cfdef.column_type == 'Super' self._load_comparator_classes() self._load_validation_classes() self._load_key_class() def _load_comparator_classes(self): if not self.super: self.column_name_class = self._cfdef.comparator_type self.super_column_name_class = None else: self.column_name_class = self._cfdef.subcomparator_type self.super_column_name_class = self._cfdef.comparator_type def _load_validation_classes(self): self.default_validation_class = self._cfdef.default_validation_class self.column_validators = {} for name, coldef in self._cfdef.column_metadata.items(): unpacked_name = self._unpack_name(name) self.column_validators[unpacked_name] = coldef.validation_class def _load_key_class(self): if hasattr(self._cfdef, "key_validation_class"): self.key_validation_class = self._cfdef.key_validation_class else: self.key_validation_class = 'BytesType' def _col_to_dict(self, column, include_timestamp, include_ttl): value = self._unpack_value(column.value, column.name) if include_timestamp and include_ttl: return (value, column.timestamp, column.ttl) elif include_timestamp: return (value, column.timestamp) elif include_ttl: return (value, column.ttl) else: return value def _scol_to_dict(self, super_column, include_timestamp, include_ttl): ret = self.dict_class() for column in super_column.columns: ret[self._unpack_name(column.name)] = self._col_to_dict(column, include_timestamp, include_ttl) return ret def _scounter_to_dict(self, counter_super_column): ret = self.dict_class() for counter in counter_super_column.columns: ret[self._unpack_name(counter.name)] = counter.value return ret def _cosc_to_dict(self, list_col_or_super, include_timestamp, include_ttl): ret = self.dict_class() for cosc in list_col_or_super: if cosc.column: col = cosc.column ret[self._unpack_name(col.name)] = self._col_to_dict(col, include_timestamp, include_ttl) elif cosc.counter_column: counter = cosc.counter_column ret[self._unpack_name(counter.name)] = counter.value elif cosc.super_column: scol = cosc.super_column ret[self._unpack_name(scol.name, True)] = self._scol_to_dict(scol, include_timestamp, include_ttl) else: scounter = cosc.counter_super_column ret[self._unpack_name(scounter.name, True)] = self._scounter_to_dict(scounter) return ret def _column_path(self, super_column=None, column=None): return ColumnPath(self.column_family, self._pack_name(super_column, is_supercol_name=True), self._pack_name(column, False)) def _column_parent(self, super_column=None): return ColumnParent(column_family=self.column_family, super_column=self._pack_name(super_column, is_supercol_name=True)) def _slice_predicate(self, columns, column_start, column_finish, column_reversed, column_count, super_column=None, pack=True): is_supercol_name = self.super and super_column is None if columns is not None: packed_cols = [] for col in columns: packed_cols.append(self._pack_name(col, is_supercol_name=is_supercol_name)) return SlicePredicate(column_names=packed_cols) else: if column_start != '' and pack: column_start = self._pack_name(column_start, is_supercol_name=is_supercol_name, slice_start=(not column_reversed)) if column_finish != '' and pack: column_finish = self._pack_name(column_finish, is_supercol_name=is_supercol_name, slice_start=column_reversed) sr = SliceRange(start=column_start, finish=column_finish, reversed=column_reversed, count=column_count) return SlicePredicate(slice_range=sr) def _pack_name(self, value, is_supercol_name=False, slice_start=None): if value is None: return if not self.autopack_names: if not isinstance(value, basestring): raise TypeError("A str or unicode column name was expected, " + "but %s was received instead (%s)" % (value.__class__.__name__, str(value))) return value try: if is_supercol_name: return self._super_name_packer(value, slice_start) else: return self._name_packer(value, slice_start) except struct.error: if is_supercol_name: d_type = self.super_column_name_class else: d_type = self.column_name_class raise TypeError("%s is not a compatible type for %s" % (value.__class__.__name__, d_type)) def _unpack_name(self, b, is_supercol_name=False): if not self.autopack_names: return b try: if is_supercol_name: return self._super_name_unpacker(b) else: return self._name_unpacker(b) except struct.error: if is_supercol_name: d_type = self.super_column_name_class else: d_type = self.column_name_class raise TypeError("%s cannot be converted to a type matching %s" % (b, d_type)) def _pack_value(self, value, col_name): if value is None: return if not self.autopack_values: if not isinstance(value, basestring): raise TypeError("A str or unicode column value was expected for " + "column '%s', but %s was received instead (%s)" % (str(col_name), value.__class__.__name__, str(value))) return value packed_col_name = self._pack_name(col_name, False) packer = self._column_validators.packers.get(packed_col_name, self._default_value_packer) try: return packer(value) except struct.error: d_type = self.column_validators.get(col_name, self._default_validation_class) raise TypeError("%s is not a compatible type for %s" % (value.__class__.__name__, d_type)) def _unpack_value(self, value, col_name): if not self.autopack_values: return value unpacker = self._column_validators.unpackers.get(col_name, self._default_value_unpacker) try: return unpacker(value) except struct.error: d_type = self.column_validators.get(col_name, self.default_validation_class) raise TypeError("%s cannot be converted to a type matching %s" % (value, d_type)) def _pack_key(self, key): if not self.autopack_keys or key == '': return key try: return self._key_packer(key) except struct.error: d_type = self.key_validation_class raise TypeError("%s is not a compatible type for %s" % (key.__class__.__name__, d_type)) def _unpack_key(self, b): if not self.autopack_keys: return b try: return self._key_unpacker(b) except struct.error: d_type = self.key_validation_class raise TypeError("%s cannot be converted to a type matching %s" % (b, d_type)) def _make_mutation_list(self, columns, timestamp, ttl): _pack_name = self._pack_name _pack_value = self._pack_value if not self.super: return map(lambda (c, v): Mutation(self._make_cosc(_pack_name(c), _pack_value(v, c), timestamp, ttl)), columns.iteritems()) else: mut_list = [] for super_col, subcs in columns.items(): subcols = map(lambda (c, v): self._make_column(_pack_name(c), _pack_value(v, c), timestamp, ttl), subcs.iteritems()) mut_list.append(Mutation(self._make_cosc(_pack_name(super_col, True), subcols))) return mut_list def xget(self, key, column_start="", column_finish="", column_reversed=False, column_count=None, include_timestamp=False, read_consistency_level=None, buffer_size=None, include_ttl=False): """ Like :meth:`get()`, but creates a generator that pages over the columns automatically. The number of columns fetched at once can be controlled with the `buffer_size` parameter. The default is :attr:`column_buffer_size`. The generator returns `(name, value)` tuples. """ packed_key = self._pack_key(key) cp = self._column_parent(None) rcl = read_consistency_level or self.read_consistency_level if buffer_size is None: buffer_size = self.column_buffer_size count = i = 0 last_name = finish = "" if column_start != "": last_name = self._pack_name(column_start, is_supercol_name=self.super, slice_start=(not column_reversed)) if column_finish != "": finish = self._pack_name(column_finish, is_supercol_name=self.super, slice_start=column_reversed) while True: if column_count is not None: if i == 0 and column_count <= buffer_size: buffer_size = column_count else: buffer_size = min(column_count - count + 1, buffer_size) sp = self._slice_predicate(None, last_name, finish, column_reversed, buffer_size, None, pack=False) list_cosc = self.pool.execute('get_slice', packed_key, cp, sp, rcl) if not list_cosc: return for j, cosc in enumerate(list_cosc): if j == 0 and i != 0: continue if self.super: if self._have_counters: scol = cosc.counter_super_column else: scol = cosc.super_column yield (self._unpack_name(scol.name, True), self._scol_to_dict(scol, include_timestamp, include_ttl)) else: if self._have_counters: col = cosc.counter_column else: col = cosc.column yield (self._unpack_name(col.name, False), self._col_to_dict(col, include_timestamp, include_ttl)) count += 1 if column_count is not None and count >= column_count: return if len(list_cosc) != buffer_size: return if self.super: if self._have_counters: last_name = list_cosc[-1].counter_super_column.name else: last_name = list_cosc[-1].super_column.name else: if self._have_counters: last_name = list_cosc[-1].counter_column.name else: last_name = list_cosc[-1].column.name i += 1 def get(self, key, columns=None, column_start="", column_finish="", column_reversed=False, column_count=100, include_timestamp=False, super_column=None, read_consistency_level=None, include_ttl=False): """ Fetches all or part of the row with key `key`. The columns fetched may be limited to a specified list of column names using `columns`. Alternatively, you may fetch a slice of columns or super columns from a row using `column_start`, `column_finish`, and `column_count`. Setting these will cause columns or super columns to be fetched starting with `column_start`, continuing until `column_count` columns or super columns have been fetched or `column_finish` is reached. If `column_start` is left as the empty string, the slice will begin with the start of the row; leaving `column_finish` blank will cause the slice to extend to the end of the row. Note that `column_count` defaults to 100, so rows over this size will not be completely fetched by default. If `column_reversed` is ``True``, columns are fetched in reverse sorted order, beginning with `column_start`. In this case, if `column_start` is the empty string, the slice will begin with the end of the row. You may fetch all or part of only a single super column by setting `super_column`. If this is set, `column_start`, `column_finish`, `column_count`, and `column_reversed` will apply to the subcolumns of `super_column`. To include every column's timestamp in the result set, set `include_timestamp` to ``True``. Results will include a ``(value, timestamp)`` tuple for each column. To include every column's ttl in the result set, set `include_ttl` to ``True``. Results will include a ``(value, ttl)`` tuple for each column. If this is a standard column family, the return type is of the form ``{column_name: column_value}``. If this is a super column family and `super_column` is not specified, the results are of the form ``{super_column_name: {column_name, column_value}}``. If `super_column` is set, the super column name will be excluded and the results are of the form ``{column_name: column_value}``. """ packed_key = self._pack_key(key) single_column = columns is not None and len(columns) == 1 if (not self.super and single_column) or \ (self.super and super_column is not None and single_column): column = None if self.super and super_column is None: super_column = columns[0] else: column = columns[0] cp = self._column_path(super_column, column) col_or_super = self.pool.execute('get', packed_key, cp, read_consistency_level or self.read_consistency_level) return self._cosc_to_dict([col_or_super], include_timestamp, include_ttl) else: cp = self._column_parent(super_column) sp = self._slice_predicate(columns, column_start, column_finish, column_reversed, column_count, super_column) list_col_or_super = self.pool.execute('get_slice', packed_key, cp, sp, read_consistency_level or self.read_consistency_level) if len(list_col_or_super) == 0: raise NotFoundException() return self._cosc_to_dict(list_col_or_super, include_timestamp, include_ttl) def get_indexed_slices(self, index_clause, columns=None, column_start="", column_finish="", column_reversed=False, column_count=100, include_timestamp=False, read_consistency_level=None, buffer_size=None, include_ttl=False): """ Similar to :meth:`get_range()`, but an :class:`~pycassa.cassandra.ttypes.IndexClause` is used instead of a key range. `index_clause` limits the keys that are returned based on expressions that compare the value of a column to a given value. At least one of the expressions in the :class:`.IndexClause` must be on an indexed column. Note that Cassandra does not support secondary indexes or get_indexed_slices() for super column families. .. seealso:: :meth:`~pycassa.index.create_index_clause()` and :meth:`~pycassa.index.create_index_expression()` """ assert not self.super, "get_indexed_slices() is not " \ "supported by super column families" cl = read_consistency_level or self.read_consistency_level cp = self._column_parent() sp = self._slice_predicate(columns, column_start, column_finish, column_reversed, column_count) new_exprs = [] # Pack the values in the index clause expressions for expr in index_clause.expressions: value = self._pack_value(expr.value, expr.column_name) name = self._pack_name(expr.column_name) new_exprs.append(IndexExpression(name, expr.op, value)) packed_start_key = self._pack_key(index_clause.start_key) clause = IndexClause(new_exprs, packed_start_key, index_clause.count) # Figure out how we will chunk the request if buffer_size is None: buffer_size = self.buffer_size row_count = clause.count count = 0 i = 0 last_key = clause.start_key while True: if row_count is not None: if i == 0 and row_count <= buffer_size: # We don't need to chunk, grab exactly the number of rows buffer_size = row_count else: buffer_size = min(row_count - count + 1, buffer_size) clause.count = buffer_size clause.start_key = last_key key_slices = self.pool.execute('get_indexed_slices', cp, clause, sp, cl) if key_slices is None: return for j, key_slice in enumerate(key_slices): # Ignore the first element after the first iteration # because it will be a duplicate. if j == 0 and i != 0: continue unpacked_key = self._unpack_key(key_slice.key) yield (unpacked_key, self._cosc_to_dict(key_slice.columns, include_timestamp, include_ttl)) count += 1 if row_count is not None and count >= row_count: return if len(key_slices) != buffer_size: return last_key = key_slices[-1].key i += 1 def multiget(self, keys, columns=None, column_start="", column_finish="", column_reversed=False, column_count=100, include_timestamp=False, super_column=None, read_consistency_level=None, buffer_size=None, include_ttl=False): """ Fetch multiple rows from a Cassandra server. `keys` should be a list of keys to fetch. `buffer_size` is the number of rows from the total list to fetch at a time. If left as ``None``, the ColumnFamily's :attr:`buffer_size` will be used. All other parameters are the same as :meth:`get()`, except that a list of keys may be passed in. Results will be returned in the form: ``{key: {column_name: column_value}}``. If an OrderedDict is used, the rows will have the same order as `keys`. """ packed_keys = map(self._pack_key, keys) cp = self._column_parent(super_column) sp = self._slice_predicate(columns, column_start, column_finish, column_reversed, column_count, super_column) consistency = read_consistency_level or self.read_consistency_level buffer_size = buffer_size or self.buffer_size offset = 0 keymap = {} while offset < len(packed_keys): new_keymap = self.pool.execute('multiget_slice', packed_keys[offset:offset + buffer_size], cp, sp, consistency) keymap.update(new_keymap) offset += buffer_size ret = self.dict_class() # Keep the order of keys for key in keys: ret[key] = None empty_keys = [] for packed_key, columns in keymap.iteritems(): unpacked_key = self._unpack_key(packed_key) if len(columns) > 0: ret[unpacked_key] = self._cosc_to_dict(columns, include_timestamp, include_ttl) else: empty_keys.append(unpacked_key) for key in empty_keys: try: del ret[key] except KeyError: pass return ret MAX_COUNT = 2 ** 31 - 1 def get_count(self, key, super_column=None, read_consistency_level=None, columns=None, column_start="", column_finish="", column_reversed=False, max_count=None): """ Count the number of columns in the row with key `key`. You may limit the columns or super columns counted to those in `columns`. Additionally, you may limit the columns or super columns counted to only those between `column_start` and `column_finish`. You may also count only the number of subcolumns in a single super column using `super_column`. If this is set, `columns`, `column_start`, and `column_finish` only apply to the subcolumns of `super_column`. To put an upper bound on the number of columns that are counted, set `max_count`. """ if max_count is None: max_count = self.MAX_COUNT packed_key = self._pack_key(key) cp = self._column_parent(super_column) sp = self._slice_predicate(columns, column_start, column_finish, column_reversed, max_count, super_column) return self.pool.execute('get_count', packed_key, cp, sp, read_consistency_level or self.read_consistency_level) def multiget_count(self, keys, super_column=None, read_consistency_level=None, columns=None, column_start="", column_finish="", buffer_size=None, column_reversed=False, max_count=None): """ Perform a column count in parallel on a set of rows. The parameters are the same as for :meth:`multiget()`, except that a list of keys may be used. A dictionary of the form ``{key: int}`` is returned. `buffer_size` is the number of rows from the total list to count at a time. If left as ``None``, the ColumnFamily's :attr:`buffer_size` will be used. To put an upper bound on the number of columns that are counted, set `max_count`. """ if max_count is None: max_count = self.MAX_COUNT packed_keys = map(self._pack_key, keys) cp = self._column_parent(super_column) sp = self._slice_predicate(columns, column_start, column_finish, column_reversed, max_count, super_column) consistency = read_consistency_level or self.read_consistency_level buffer_size = buffer_size or self.buffer_size offset = 0 keymap = {} while offset < len(packed_keys): new_keymap = self.pool.execute('multiget_count', packed_keys[offset:offset + buffer_size], cp, sp, consistency) keymap.update(new_keymap) offset += buffer_size ret = self.dict_class() # Keep the order of keys for key in keys: ret[key] = None for packed_key, count in keymap.iteritems(): ret[self._unpack_key(packed_key)] = count return ret def get_range(self, start="", finish="", columns=None, column_start="", column_finish="", column_reversed=False, column_count=100, row_count=None, include_timestamp=False, super_column=None, read_consistency_level=None, buffer_size=None, filter_empty=True, include_ttl=False, start_token=None, finish_token=None): """ Get an iterator over rows in a specified key range. The key range begins with `start` and ends with `finish`. If left as empty strings, these extend to the beginning and end, respectively. Note that if RandomPartitioner is used, rows are stored in the order of the MD5 hash of their keys, so getting a lexicographical range of keys is not feasible. In place of `start` and `finish`, you may use `start_token` and `finish_token` or a combination of `start` and `finish_token`. In this case, you are specifying a token range to fetch instead of a key range. This can be useful for fetching all data owned by a node or for parallelizing a full data set scan. Otherwise, you should typically just use `start` and `finish`. When using RandomPartitioner or Murmur3Partitioner, `start_token` and `finish_token` should be string versions of the numeric tokens; for ByteOrderedPartitioner, they should be hex-encoded string versions of the token. The `row_count` parameter limits the total number of rows that may be returned. If left as ``None``, the number of rows that may be returned is unlimited (this is the default). When calling `get_range()`, the intermediate results need to be buffered if we are fetching many rows, otherwise the Cassandra server will overallocate memory and fail. `buffer_size` is the size of that buffer in number of rows. If left as ``None``, the ColumnFamily's :attr:`buffer_size` attribute will be used. When `filter_empty` is left as ``True``, empty rows (including `range ghosts `_) will be skipped and will not count towards `row_count`. All other parameters are the same as those of :meth:`get()`. A generator over ``(key, {column_name: column_value})`` is returned. To convert this to a list, use ``list()`` on the result. """ cl = read_consistency_level or self.read_consistency_level cp = self._column_parent(super_column) sp = self._slice_predicate(columns, column_start, column_finish, column_reversed, column_count, super_column) kr_args = {} count = 0 i = 0 if start_token is not None and (start not in ("", None) or finish not in ("", None)): raise ValueError( "ColumnFamily.get_range() received incompatible arguments: " "'start_token' may not be used with 'start' or 'finish'") if finish_token is not None and finish not in ("", None): raise ValueError( "ColumnFamily.get_range() received incompatible arguments: " "'finish_token' may not be used with 'finish'") if start_token is not None: kr_args['start_token'] = start_token kr_args['end_token'] = "" if finish_token is None else finish_token elif finish_token is not None: kr_args['start_key'] = self._pack_key(start) kr_args['end_token'] = finish_token else: kr_args['start_key'] = self._pack_key(start) kr_args['end_key'] = self._pack_key(finish) if buffer_size is None: buffer_size = self.buffer_size while True: if row_count is not None: if i == 0 and row_count <= buffer_size: # We don't need to chunk, grab exactly the number of rows buffer_size = row_count else: buffer_size = min(row_count - count + 1, buffer_size) kr_args['count'] = buffer_size key_range = KeyRange(**kr_args) key_slices = self.pool.execute('get_range_slices', cp, sp, key_range, cl) # This may happen if nothing was ever inserted if key_slices is None: return for j, key_slice in enumerate(key_slices): # Ignore the first element after the first iteration # because it will be a duplicate. if j == 0 and i != 0: continue if filter_empty and not key_slice.columns: continue yield (self._unpack_key(key_slice.key), self._cosc_to_dict(key_slice.columns, include_timestamp, include_ttl)) count += 1 if row_count is not None and count >= row_count: return if len(key_slices) != buffer_size: return if 'start_token' in kr_args: del kr_args['start_token'] kr_args['start_key'] = key_slices[-1].key i += 1 def insert(self, key, columns, timestamp=None, ttl=None, write_consistency_level=None): """ Insert or update columns in the row with key `key`. `columns` should be a dictionary of columns or super columns to insert or update. If this is a standard column family, `columns` should look like ``{column_name: column_value}``. If this is a super column family, `columns` should look like ``{super_column_name: {sub_column_name: value}}``. If this is a counter column family, you may use integers as values and those will be used as counter adjustments. A timestamp may be supplied for all inserted columns with `timestamp`. `ttl` sets the "time to live" in number of seconds for the inserted columns. After this many seconds, Cassandra will mark the columns as deleted. The timestamp Cassandra reports as being used for insert is returned. """ if timestamp is None: timestamp = self.timestamp() packed_key = self._pack_key(key) mut_list = self._make_mutation_list(columns, timestamp, ttl) mutations = {packed_key: {self.column_family: mut_list}} self.pool.execute('batch_mutate', mutations, write_consistency_level or self.write_consistency_level, allow_retries=self._allow_retries) return timestamp def batch_insert(self, rows, timestamp=None, ttl=None, write_consistency_level=None): """ Like :meth:`insert()`, but multiple rows may be inserted at once. The `rows` parameter should be of the form ``{key: {column_name: column_value}}`` if this is a standard column family or ``{key: {super_column_name: {column_name: column_value}}}`` if this is a super column family. """ if timestamp == None: timestamp = self.timestamp() cf = self.column_family mutations = {} for key, columns in rows.iteritems(): packed_key = self._pack_key(key) mut_list = self._make_mutation_list(columns, timestamp, ttl) mutations[packed_key] = {cf: mut_list} if mutations: self.pool.execute('batch_mutate', mutations, write_consistency_level or self.write_consistency_level, allow_retries=self._allow_retries) return timestamp def add(self, key, column, value=1, super_column=None, write_consistency_level=None): """ Increment or decrement a counter. `value` should be an integer, either positive or negative, to be added to a counter column. By default, `value` is 1. .. versionadded:: 1.1.0 Available in Cassandra 0.8.0 and later. """ packed_key = self._pack_key(key) cp = self._column_parent(super_column) column = self._pack_name(column) self.pool.execute('add', packed_key, cp, CounterColumn(column, value), write_consistency_level or self.write_consistency_level, allow_retries=self._allow_retries) def remove(self, key, columns=None, super_column=None, write_consistency_level=None, timestamp=None, counter=None): """ Remove a specified row or a set of columns within the row with key `key`. A set of columns or super columns to delete may be specified using `columns`. A single super column may be deleted by setting `super_column`. If `super_column` is specified, `columns` will apply to the subcolumns of `super_column`. If `columns` and `super_column` are both ``None``, the entire row is removed. The timestamp used for the mutation is returned. """ if timestamp is None: timestamp = self.timestamp() batch = self.batch(write_consistency_level=write_consistency_level) batch.remove(key, columns, super_column, timestamp) batch.send() return timestamp def remove_counter(self, key, column, super_column=None, write_consistency_level=None): """ Remove a counter at the specified location. Note that counters have limited support for deletes: if you remove a counter, you must wait to issue any following update until the delete has reached all the nodes and all of them have been fully compacted. .. versionadded:: 1.1.0 Available in Cassandra 0.8.0 and later. """ packed_key = self._pack_key(key) cp = self._column_path(super_column, column) self.pool.execute('remove_counter', packed_key, cp, write_consistency_level or self.write_consistency_level) def batch(self, queue_size=100, write_consistency_level=None, atomic=None): """ Create batch mutator for doing multiple insert, update, and remove operations using as few roundtrips as possible. The `queue_size` parameter sets the max number of mutations per request. A :class:`~pycassa.batch.CfMutator` is returned. """ return CfMutator(self, queue_size, write_consistency_level or self.write_consistency_level, allow_retries=self._allow_retries, atomic=atomic) def truncate(self): """ Marks the entire ColumnFamily as deleted. From the user's perspective, a successful call to ``truncate`` will result complete data deletion from this column family. Internally, however, disk space will not be immediately released, as with all deletes in Cassandra, this one only marks the data as deleted. The operation succeeds only if all hosts in the cluster at available and will throw an :exc:`.UnavailableException` if some hosts are down. """ self.pool.execute('truncate', self.column_family) PooledColumnFamily = ColumnFamily pycassa-1.11.2.1/pycassa/columnfamilymap.py000066400000000000000000000255211303744607500206260ustar00rootroot00000000000000""" Provides a way to map an existing class of objects to a column family. This can help to cut down boilerplate code related to converting objects to a row format and back again. ColumnFamilyMap is primarily useful when you have one "object" per row. .. seealso:: :mod:`pycassa.types` for selecting data types for object attributes and infomation about creating custom data types. """ from pycassa.types import CassandraType from pycassa.columnfamily import ColumnFamily import pycassa.util as util import inspect __all__ = ['ColumnFamilyMap'] def create_instance(cls, **kwargs): instance = cls() map(lambda (k,v): setattr(instance, k, v), kwargs.iteritems()) return instance class ColumnFamilyMap(ColumnFamily): """ Maps an existing class to a column family. Class fields become columns, and instances of that class can be represented as rows in standard column families or super columns in super column families. """ def __init__(self, cls, pool, column_family, raw_columns=False, **kwargs): """ Instances of `cls` are returned from :meth:`get()`, :meth:`multiget()`, :meth:`get_range()` and :meth:`get_indexed_slices()`. `pool` is a :class:`~pycassa.pool.ConnectionPool` that will be used in the same way a :class:`~.ColumnFamily` uses one. `column_family` is the name of a column family to tie to `cls`. If `raw_columns` is ``True``, all columns will be fetched into the `raw_columns` field in requests. """ ColumnFamily.__init__(self, pool, column_family, **kwargs) self.cls = cls self.autopack_names = False self.raw_columns = raw_columns self.dict_class = util.OrderedDict self.defaults = {} self.fields = [] for name, val_type in inspect.getmembers(self.cls): if name != 'key' and isinstance(val_type, CassandraType): self.fields.append(name) self.column_validators[name] = val_type self.defaults[name] = val_type.default if hasattr(self.cls, 'key') and isinstance(self.cls.key, CassandraType): self.key_validation_class = self.cls.key def combine_columns(self, columns): combined_columns = columns if self.raw_columns: combined_columns['raw_columns'] = columns for column, default in self.defaults.items(): combined_columns.setdefault(column, default) return combined_columns def get(self, key, *args, **kwargs): """ Creates one or more instances of `cls` from the row with key `key`. The fields that are retreived may be specified using `columns`, which should be a list of column names. If the column family is a super column family, a list of `cls` instances will be returned, one for each super column. If the `super_column` parameter is not supplied, then `columns` specifies which super columns will be used to create instances of `cls`. If the `super_column` parameter *is* supplied, only one instance of `cls` will be returned; if `columns` is specified in this case, only those attributes listed in `columns` will be fetched. All other parameters behave the same as in :meth:`.ColumnFamily.get()`. """ if 'columns' not in kwargs and not self.super and not self.raw_columns: kwargs['columns'] = self.fields columns = ColumnFamily.get(self, key, *args, **kwargs) if self.super: if 'super_column' not in kwargs: vals = self.dict_class() for super_column, subcols in columns.iteritems(): combined = self.combine_columns(subcols) vals[super_column] = create_instance(self.cls, key=key, super_column=super_column, **combined) return vals combined = self.combine_columns(columns) return create_instance(self.cls, key=key, super_column=kwargs['super_column'], **combined) combined = self.combine_columns(columns) return create_instance(self.cls, key=key, **combined) def multiget(self, *args, **kwargs): """ Like :meth:`get()`, but a list of keys may be specified. The result of multiget will be a dictionary where the keys are the keys from the `keys` argument, minus any missing rows. The value for each key in the dictionary will be the same as if :meth:`get()` were called on that individual key. """ if 'columns' not in kwargs and not self.super and not self.raw_columns: kwargs['columns'] = self.fields kcmap = ColumnFamily.multiget(self, *args, **kwargs) ret = self.dict_class() for key, columns in kcmap.iteritems(): if self.super: if 'super_column' not in kwargs: vals = self.dict_class() for super_column, subcols in columns.iteritems(): combined = self.combine_columns(subcols) vals[super_column] = create_instance(self.cls, key=key, super_column=super_column, **combined) ret[key] = vals else: combined = self.combine_columns(columns) ret[key] = create_instance(self.cls, key=key, super_column=kwargs['super_column'], **combined) else: combined = self.combine_columns(columns) ret[key] = create_instance(self.cls, key=key, **combined) return ret def get_range(self, *args, **kwargs): """ Get an iterator over instances in a specified key range. Like :meth:`multiget()`, whether a single instance or multiple instances are returned per-row when the column family is a super column family depends on what parameters are passed. For an explanation of how :meth:`get_range` works and a description of the parameters, see :meth:`.ColumnFamily.get_range()`. Example usage with a standard column family: .. code-block:: python >>> pool = pycassa.ConnectionPool('Keyspace1') >>> usercf = pycassa.ColumnFamily(pool, 'Users') >>> cfmap = pycassa.ColumnFamilyMap(MyClass, usercf) >>> users = cfmap.get_range(row_count=2, columns=['name', 'age']) >>> for key, user in users: ... print user.name, user.age Miles Davis 84 Winston Smith 42 """ if 'columns' not in kwargs and not self.super and not self.raw_columns: kwargs['columns'] = self.fields for key, columns in ColumnFamily.get_range(self, *args, **kwargs): if self.super: if 'super_column' not in kwargs: vals = self.dict_class() for super_column, subcols in columns.iteritems(): combined = self.combine_columns(subcols) vals[super_column] = create_instance(self.cls, key=key, super_column=super_column, **combined) yield vals else: combined = self.combine_columns(columns) yield create_instance(self.cls, key=key, super_column=kwargs['super_column'], **combined) else: combined = self.combine_columns(columns) yield create_instance(self.cls, key=key, **combined) def get_indexed_slices(self, *args, **kwargs): """ Fetches a list of instances that satisfy an index clause. Similar to :meth:`get_range()`, but uses an index clause instead of a key range. See :meth:`.ColumnFamily.get_indexed_slices()` for an explanation of the parameters. """ assert not self.super, "get_indexed_slices() is not " \ "supported by super column families" if 'columns' not in kwargs and not self.raw_columns: kwargs['columns'] = self.fields for key, columns in ColumnFamily.get_indexed_slices(self, *args, **kwargs): combined = self.combine_columns(columns) yield create_instance(self.cls, key=key, **combined) def _get_instance_as_dict(self, instance, columns=None): fields = columns or self.fields instance_dict = {} for field in fields: val = getattr(instance, field, None) if val is not None and not isinstance(val, CassandraType): instance_dict[field] = val if self.super: instance_dict = {instance.super_column: instance_dict} return instance_dict def insert(self, instance, columns=None, timestamp=None, ttl=None, write_consistency_level=None): """ Insert or update stored instances. `instance` should be an instance of `cls` to store. The `columns` parameter allows to you specify which attributes of `instance` should be inserted or updated. If left as ``None``, all attributes will be inserted. """ if columns is None: fields = self.fields else: fields = columns insert_dict = self._get_instance_as_dict(instance, columns=fields) return ColumnFamily.insert(self, instance.key, insert_dict, timestamp=timestamp, ttl=ttl, write_consistency_level=write_consistency_level) def batch_insert(self, instances, timestamp=None, ttl=None, write_consistency_level=None): """ Insert or update stored instances. `instances` should be a list containing instances of `cls` to store. """ insert_dict = dict( [(instance.key, self._get_instance_as_dict(instance)) for instance in instances] ) return ColumnFamily.batch_insert(self, insert_dict, timestamp=timestamp, ttl=ttl, write_consistency_level=write_consistency_level) def remove(self, instance, columns=None, write_consistency_level=None): """ Removes a stored instance. The `columns` parameter is a list of columns that should be removed. If this is left as the default value of ``None``, the entire stored instance will be removed. """ if self.super: return ColumnFamily.remove(self, instance.key, super_column=instance.super_column, columns=columns, write_consistency_level=write_consistency_level) else: return ColumnFamily.remove(self, instance.key, columns, write_consistency_level=write_consistency_level) pycassa-1.11.2.1/pycassa/connection.py000066400000000000000000000155721303744607500175750ustar00rootroot00000000000000import struct from cStringIO import StringIO from thrift.transport import TTransport, TSocket, TSSLSocket from thrift.transport.TTransport import (TTransportBase, CReadableTransport, TTransportException) from thrift.protocol import TBinaryProtocol from pycassa.cassandra import Cassandra from pycassa.cassandra.ttypes import AuthenticationRequest DEFAULT_SERVER = 'localhost:9160' DEFAULT_PORT = 9160 def default_socket_factory(host, port): """ Returns a normal :class:`TSocket` instance. """ return TSocket.TSocket(host, port) def default_transport_factory(tsocket, host, port): """ Returns a normal :class:`TFramedTransport` instance wrapping `tsocket`. """ return TTransport.TFramedTransport(tsocket) class Connection(Cassandra.Client): """Encapsulation of a client session.""" def __init__(self, keyspace, server, framed_transport=True, timeout=None, credentials=None, socket_factory=default_socket_factory, transport_factory=default_transport_factory): self.keyspace = None self.server = server server = server.split(':') if len(server) <= 1: port = 9160 else: port = server[1] host = server[0] socket = socket_factory(host, int(port)) if timeout is not None: socket.setTimeout(timeout * 1000.0) self.transport = transport_factory(socket, host, port) protocol = TBinaryProtocol.TBinaryProtocolAccelerated(self.transport) Cassandra.Client.__init__(self, protocol) self.transport.open() if credentials is not None: request = AuthenticationRequest(credentials=credentials) self.login(request) self.set_keyspace(keyspace) def set_keyspace(self, keyspace): if keyspace != self.keyspace: Cassandra.Client.set_keyspace(self, keyspace) self.keyspace = keyspace def close(self): self.transport.close() def make_ssl_socket_factory(ca_certs, validate=True): """ A convenience function for creating an SSL socket factory. `ca_certs` should contain the path to the certificate file, `validate` determines whether or not SSL certificate validation will be performed. """ def ssl_socket_factory(host, port): """ Returns a :class:`TSSLSocket` instance. """ return TSSLSocket.TSSLSocket(host, port, ca_certs=ca_certs, validate=validate) return ssl_socket_factory class TSaslClientTransport(TTransportBase, CReadableTransport): START = 1 OK = 2 BAD = 3 ERROR = 4 COMPLETE = 5 def __init__(self, transport, host, service, mechanism='GSSAPI', **sasl_kwargs): from puresasl.client import SASLClient self.transport = transport self.sasl = SASLClient(host, service, mechanism, **sasl_kwargs) self.__wbuf = StringIO() self.__rbuf = StringIO() def open(self): if not self.transport.isOpen(): self.transport.open() self.send_sasl_msg(self.START, self.sasl.mechanism) self.send_sasl_msg(self.OK, self.sasl.process()) while True: status, challenge = self.recv_sasl_msg() if status == self.OK: self.send_sasl_msg(self.OK, self.sasl.process(challenge)) elif status == self.COMPLETE: if not self.sasl.complete: raise TTransportException("The server erroneously indicated " "that SASL negotiation was complete") else: break else: raise TTransportException("Bad SASL negotiation status: %d (%s)" % (status, challenge)) def send_sasl_msg(self, status, body): header = struct.pack(">BI", status, len(body)) self.transport.write(header + body) self.transport.flush() def recv_sasl_msg(self): header = self.transport.readAll(5) status, length = struct.unpack(">BI", header) if length > 0: payload = self.transport.readAll(length) else: payload = "" return status, payload def write(self, data): self.__wbuf.write(data) def flush(self): data = self.__wbuf.getvalue() encoded = self.sasl.wrap(data) # Note stolen from TFramedTransport: # N.B.: Doing this string concatenation is WAY cheaper than making # two separate calls to the underlying socket object. Socket writes in # Python turn out to be REALLY expensive, but it seems to do a pretty # good job of managing string buffer operations without excessive copies self.transport.write(''.join((struct.pack("!i", len(encoded)), encoded))) self.transport.flush() self.__wbuf = StringIO() def read(self, sz): ret = self.__rbuf.read(sz) if len(ret) != 0: return ret self._read_frame() return self.__rbuf.read(sz) def _read_frame(self): header = self.transport.readAll(4) length, = struct.unpack('!i', header) encoded = self.transport.readAll(length) self.__rbuf = StringIO(self.sasl.unwrap(encoded)) def close(self): self.sasl.dispose() self.transport.close() # Implement the CReadableTransport interface. # Stolen shamelessly from TFramedTransport @property def cstringio_buf(self): return self.__rbuf def cstringio_refill(self, prefix, reqlen): # self.__rbuf will already be empty here because fastbinary doesn't # ask for a refill until the previous buffer is empty. Therefore, # we can start reading new frames immediately. while len(prefix) < reqlen: self._read_frame() prefix += self.__rbuf.getvalue() self.__rbuf = StringIO(prefix) return self.__rbuf def make_sasl_transport_factory(credential_factory): """ A convenience function for creating a SASL transport factory. `credential_factory` should be a function taking two args: `host` and `port`. It should return a ``dict`` of kwargs that will be passed to :func:`puresasl.client.SASLClient.__init__()`. Example usage:: >>> def make_credentials(host, port): ... return {'host': host, ... 'service': 'cassandra', ... 'principal': 'user/role@FOO.EXAMPLE.COM', ... 'mechanism': 'GSSAPI'} >>> >>> factory = make_sasl_transport_factory(make_credentials) >>> pool = ConnectionPool(..., transport_factory=factory) """ def sasl_transport_factory(tsocket, host, port): sasl_kwargs = credential_factory(host, port) sasl_transport = TSaslClientTransport(tsocket, **sasl_kwargs) return TTransport.TFramedTransport(sasl_transport) return sasl_transport_factory pycassa-1.11.2.1/pycassa/contrib/000077500000000000000000000000001303744607500165125ustar00rootroot00000000000000pycassa-1.11.2.1/pycassa/contrib/__init__.py000066400000000000000000000000001303744607500206110ustar00rootroot00000000000000pycassa-1.11.2.1/pycassa/contrib/stubs.py000066400000000000000000000177721303744607500202420ustar00rootroot00000000000000"""A functional set of stubs to be used for unit testing. Projects that use pycassa and need to run an automated unit test suite on a system like Jenkins can use these stubs to emulate interactions with Cassandra without spinning up a cluster locally. """ import operator from uuid import UUID from collections import MutableMapping from pycassa import NotFoundException from pycassa.util import OrderedDict from pycassa.columnfamily import gm_timestamp from pycassa.index import EQ, GT, GTE, LT, LTE __all__ = ['ConnectionPoolStub', 'ColumnFamilyStub', 'SystemManagerStub'] class DictWithTime(MutableMapping): def __init__(self, *args, **kwargs): self.__timestamp = kwargs.pop('timestamp', None) self.store = dict() self.update(dict(*args, **kwargs)) def __getitem__(self, key): return self.store[key] def __setitem__(self, key, value, timestamp=None): if timestamp is None: timestamp = self.__timestamp or gm_timestamp() self.store[key] = (value, timestamp) def __delitem__(self, key): del self.store[key] def __iter__(self): return iter(self.store) def __len__(self): return len(self.store) operator_dict = { EQ: operator.eq, GT: operator.gt, GTE: operator.ge, LT: operator.lt, LTE: operator.le, } class ConnectionPoolStub(object): """Connection pool stub. Notes created column families in :attr:`self.column_families`. """ def __init__(self, *args, **kwargs): self.column_families = {} def _register_mock_cf(self, name, cf): if name: self.column_families[name] = cf def dispose(self, *args, **kwargs): pass class SystemManagerStub(object): """Functional System Manager stub object. Records when column families, columns, and indexes have been created. To see what has been recorded, look at :attr:`self.column_families`. """ def __init__(self, *args, **kwargs): self.column_families = {} def create_column_family(self, keyspace, table_name, *args, **kwargs): """Create a column family and record its existence.""" self.column_families[table_name] = { 'keyspace': keyspace, 'columns': {}, 'indexes': {}, } def alter_column(self, keyspace, table_name, column_name, column_type): """Alter a column, recording its name and type.""" self.column_families[table_name]['columns'][column_name] = column_type def create_index(self, keyspace, table_name, column_name, column_type): """Create an index, recording its name and type.""" self.column_families[table_name]['indexes'][column_name] = column_type def _schema(self): ret = ','.join(self.column_families.keys()) for k in self.column_families: for v in ('columns', 'indexes'): ret += ','.join(self.column_families[k][v]) return hash(ret) def describe_schema_versions(self): """Describes the schema based on a hash of the stub system state.""" return {self._schema(): ['1.1.1.1']} class ColumnFamilyStub(object): """Functional ColumnFamily stub object. Acts very similar to a remote column family, supporting a basic version of the API. When instantiated, it registers itself with the supplied (stub) connection pool. """ def __init__(self, pool=None, column_family=None, rows=None, **kwargs): rows = rows or OrderedDict() for r in rows.itervalues(): if not isinstance(r, DictWithTime): r = DictWithTime(r) self.rows = rows if pool is not None: pool._register_mock_cf(column_family, self) def __len__(self): return len(self.rows) def __contains__(self, obj): return self.rows.__contains__(obj) def get(self, key, columns=None, column_start=None, column_finish=None, column_reversed=False, column_count=100, include_timestamp=False, **kwargs): """Get a value from the column family stub.""" my_columns = self.rows.get(key) if include_timestamp: get_value = lambda x: x else: get_value = lambda x: x[0] if not my_columns: raise NotFoundException() items = my_columns.items() if isinstance(items[0], UUID) and items[0].version == 1: items.sort(key=lambda uuid: uuid.time) elif isinstance(items[0], tuple) and any(isinstance(x, UUID) for x in items[0]): are_components_uuids = [isinstance(x, UUID) and x.version == 1 for x in items[0]] def sortuuid(tup): return [x.time if is_uuid else x for x, is_uuid in zip(tup, are_components_uuids)] items.sort(key=sortuuid) else: items.sort() if column_reversed: items.reverse() sliced_items = [(k, get_value(v)) for (k, v) in items if self._is_column_in_range(k, columns, column_start, column_finish, column_reversed)][:column_count] return OrderedDict(sliced_items) def _is_column_in_range(self, k, columns, column_start, column_finish, column_reversed): lower_bound = column_start if not column_reversed else column_finish upper_bound = column_finish if not column_reversed else column_start if columns: return k in columns return (not lower_bound or k >= lower_bound) and (not upper_bound or k <= upper_bound) def multiget(self, keys, columns=None, column_start=None, column_finish=None, column_reversed=False, column_count=100, include_timestamp=False, **kwargs): """Get multiple key values from the column family stub.""" return OrderedDict( (key, self.get( key, columns=columns, column_start=column_start, column_finish=column_finish, column_reversed=column_reversed, column_count=column_count, include_timestamp=include_timestamp, )) for key in keys if key in self.rows) def batch(self, **kwargs): """Returns itself.""" return self def send(self): pass def insert(self, key, columns, timestamp=None, **kwargs): """Insert data to the column family stub.""" if key not in self.rows: self.rows[key] = DictWithTime([], timestamp=timestamp) for column in columns: self.rows[key].__setitem__(column, columns[column], timestamp) return self.rows[key][columns.keys()[0]][1] def get_indexed_slices(self, index_clause, **kwargs): """Grabs rows that match a pycassa index clause. See :meth:`pycassa.index.create_index_clause()` for creating such an index clause.""" keys = [] for key, row in self.rows.iteritems(): for expr in index_clause.expressions: if ( expr.column_name in row and operator_dict[expr.op](row[expr.column_name][0], expr.value) ): keys.append(key) data = self.multiget(keys, **kwargs).items() return data def remove(self, key, columns=None): """Remove a key from the column family stub.""" if key not in self.rows: raise NotFoundException() if columns is None: del self.rows[key] else: for c in columns: if c in self.rows[key]: del self.rows[key][c] if not self.rows[key]: del self.rows[key] return gm_timestamp() def get_range(self, include_timestamp=False, columns=None, **kwargs): """Currently just gets all values from the column family.""" return [(key, self.get(key, columns, include_timestamp)) for key in self.rows] def truncate(self): """Clears all data from the column family stub.""" self.rows.clear() pycassa-1.11.2.1/pycassa/index.py000066400000000000000000000060671303744607500165440ustar00rootroot00000000000000""" Tools for using Cassandra's secondary indexes. Example Usage: .. code-block:: python >>> from pycassa.columnfamily import ColumnFamily >>> from pycassa.pool import ConnectionPool >>> from pycassa.index import * >>> >>> pool = ConnectionPool('Keyspace1') >>> users = ColumnFamily(pool, 'Users') >>> state_expr = create_index_expression('state', 'Utah') >>> bday_expr = create_index_expression('birthdate', 1970, GT) >>> clause = create_index_clause([state_expr, bday_expr], count=20) >>> for key, user in users.get_indexed_slices(clause): ... print user['name'] + ",", user['state'], user['birthdate'] John Smith, Utah 1971 Mike Scott, Utah 1980 Jeff Bird, Utah 1973 This gives you all of the rows (up to 20) which have a 'birthdate' value above 1970 and a state value of 'Utah'. .. seealso:: :class:`~pycassa.system_manager.SystemManager` methods :meth:`~pycassa.system_manager.SystemManager.create_index()` and :meth:`~pycassa.system_manager.SystemManager.drop_index()` """ from pycassa.cassandra.ttypes import IndexClause, IndexExpression,\ IndexOperator __all__ = ['create_index_clause', 'create_index_expression', 'EQ', 'GT', 'GTE', 'LT', 'LTE'] EQ = IndexOperator.EQ """ Equality (==) operator for index expressions """ GT = IndexOperator.GT """ Greater-than (>) operator for index expressions """ GTE = IndexOperator.GTE """ Greater-than-or-equal (>=) operator for index expressions """ LT = IndexOperator.LT """ Less-than (<) operator for index expressions """ LTE = IndexOperator.LTE """ Less-than-or-equal (<=) operator for index expressions """ def create_index_clause(expr_list, start_key='', count=100): """ Constructs an :class:`~pycassa.cassandra.ttypes.IndexClause` for use with :meth:`~pycassa.columnfamily.get_indexed_slices()` `expr_list` should be a list of :class:`~pycassa.cassandra.ttypes.IndexExpression` objects that must be matched for a row to be returned. At least one of these expressions must be on an indexed column. Cassandra will only return matching rows with keys after `start_key`. If this is the empty string, all rows will be considered. Keep in mind that this is not as meaningful unless an OrderPreservingPartitioner is used. The number of rows to return is limited by `count`, which defaults to 100. """ return IndexClause(expressions=expr_list, start_key=start_key, count=count) def create_index_expression(column_name, value, op=EQ): """ Constructs an :class:`~pycassa.cassandra.ttypes.IndexExpression` to use in an :class:`~pycassa.cassandra.ttypes.IndexClause` The expression will be applied to the column with name `column_name`. A match will only occur if the operator specified with `op` returns ``True`` when used on the actual column value and the `value` parameter. The default operator is :const:`~EQ`, which tests for equality. """ return IndexExpression(column_name=column_name, op=op, value=value) pycassa-1.11.2.1/pycassa/logging/000077500000000000000000000000001303744607500165005ustar00rootroot00000000000000pycassa-1.11.2.1/pycassa/logging/__init__.py000066400000000000000000000000001303744607500205770ustar00rootroot00000000000000pycassa-1.11.2.1/pycassa/logging/pool_logger.py000066400000000000000000000072151303744607500213670ustar00rootroot00000000000000import pycassa_logger import logging class PoolLogger(object): def __init__(self): self.root_logger = pycassa_logger.PycassaLogger() self.logger = self.root_logger.add_child_logger('pool', self.name_changed) def name_changed(self, new_logger): self.logger = new_logger def connection_created(self, dic): level = pycassa_logger.levels[dic.get('level', 'info')] conn = dic.get('connection') if level <= logging.INFO: self.logger.log(level, "Connection %s (%s) opened for pool %s", id(conn), conn.server, dic.get('pool_id')) else: self.logger.log(level, "Error opening connection (%s) for pool %s: %s", conn.server, dic.get('pool_id'), dic.get('error')) def connection_checked_out(self, dic): level = pycassa_logger.levels[dic.get('level', 'info')] conn = dic.get('connection') self.logger.log(level, "Connection %s (%s) was checked out from pool %s", id(conn), conn.server, dic.get('pool_id')) def connection_checked_in(self, dic): level = pycassa_logger.levels[dic.get('level', 'info')] conn = dic.get('connection') self.logger.log(level, "Connection %s (%s) was checked in to pool %s", id(conn), conn.server, dic.get('pool_id')) def connection_disposed(self, dic): level = pycassa_logger.levels[dic.get('level', 'info')] conn = dic.get('connection') if level <= logging.INFO: self.logger.log(level, "Connection %s (%s) was closed; pool %s, reason: %s", id(conn), conn.server, dic.get('pool_id'), dic.get('message')) else: error = dic.get('error') self.logger.log(level, "Error closing connection %s (%s) in pool %s, " "reason: %s, error: %s %s", id(conn), conn.server, dic.get('pool_id'), dic.get('message'), error.__class__, error) def connection_recycled(self, dic): level = pycassa_logger.levels[dic.get('level', 'info')] old_conn = dic.get('old_conn') new_conn = dic.get('new_conn') self.logger.log(level, "Connection %s (%s) is being recycled in pool %s " "after %d operations; it is replaced by connection %s (%s)", id(old_conn), old_conn.server, dic.get('pool_id'), old_conn.operation_count, id(new_conn), new_conn.server) def connection_failed(self, dic): level = pycassa_logger.levels[dic.get('level', 'info')] conn = dic.get('connection') self.logger.log(level, "Connection %s (%s) in pool %s failed: %s", id(conn), dic.get('server'), dic.get('pool_id'), str(dic.get('error'))) def obtained_server_list(self, dic): level = pycassa_logger.levels[dic.get('level', 'info')] self.logger.log(level, "Server list obtained for pool %s: [%s]", dic.get('pool_id'), ", ".join(dic.get('server_list'))) def pool_disposed(self, dic): level = pycassa_logger.levels[dic.get('level', 'info')] self.logger.log(level, "Pool %s was disposed", dic.get('pool_id')) def pool_at_max(self, dic): level = pycassa_logger.levels[dic.get('level', 'info')] self.logger.log(level, "Pool %s had a checkout request but was already " "at its max size (%s)", dic.get('pool_id'), dic.get('pool_max')) pycassa-1.11.2.1/pycassa/logging/pool_stats_logger.py000066400000000000000000000071741303744607500226110ustar00rootroot00000000000000import pycassa_logger import logging import threading import functools def sync(lock_name): def wrapper(f): @functools.wraps(f) def wrapped(self, *args, **kwargs): lock = getattr(self, lock_name) try: lock.acquire() return f(self, *args, **kwargs) finally: lock.release() return wrapped return wrapper class StatsLogger(object): """ Basic stats logger that increment counts. You can plot these as `COUNTER` or `DERIVED` (RRD) or apply derivative (graphite) except for ``opened``, which tracks the currently opened connections. Usage:: >>> pool = ConnectionPool(...) >>> stats_logger = StatsLogger() >>> pool.add_listener(stats_logger) >>> >>> # use the pool for a while... >>> import pprint >>> pprint.pprint(stats_logger.stats) {'at_max': 0, 'checked_in': 401, 'checked_out': 403, 'created': {'failure': 0, 'success': 0}, 'disposed': {'failure': 0, 'success': 0}, 'failed': 1, 'list': 0, 'opened': {'current': 2, 'max': 2}, 'recycled': 0} Get your stats as ``stats_logger.stats`` and push them to your metrics system. """ def __init__(self): #some callbacks are already locked by pool_lock, it's just simpler to have a global here for all operations self.lock = threading.Lock() self.reset() @sync('lock') def reset(self): """ Reset all counters to 0 """ self._stats = { 'created': { 'success': 0, 'failure': 0, }, 'checked_out': 0, 'checked_in': 0, 'opened': { 'current': 0, 'max': 0 }, 'disposed': { 'success': 0, 'failure': 0 }, 'recycled': 0, 'failed': 0, 'list': 0, 'at_max': 0 } def name_changed(self, new_logger): self.logger = new_logger @sync('lock') def connection_created(self, dic): level = pycassa_logger.levels[dic.get('level', 'info')] if level <= logging.INFO: self._stats['created']['success'] += 1 else: self._stats['created']['failure'] += 1 @sync('lock') def connection_checked_out(self, dic): self._stats['checked_out'] += 1 self._update_opened(1) @sync('lock') def connection_checked_in(self, dic): self._stats['checked_in'] += 1 self._update_opened(-1) def _update_opened(self, value): self._stats['opened']['current'] += value if self._stats['opened']['current'] > self._stats['opened']['max']: self._stats['opened']['max'] = self._stats['opened']['current'] @sync('lock') def connection_disposed(self, dic): level = pycassa_logger.levels[dic.get('level', 'info')] if level <= logging.INFO: self._stats['disposed']['success'] += 1 else: self._stats['disposed']['failure'] += 1 @sync('lock') def connection_recycled(self, dic): self._stats['recycled'] += 1 @sync('lock') def connection_failed(self, dic): self._stats['failed'] += 1 @sync('lock') def obtained_server_list(self, dic): self._stats['list'] += 1 @sync('lock') def pool_disposed(self, dic): pass @sync('lock') def pool_at_max(self, dic): self._stats['at_max'] += 1 @property def stats(self): return self._stats pycassa-1.11.2.1/pycassa/logging/pycassa_logger.py000066400000000000000000000065111303744607500220570ustar00rootroot00000000000000""" Logging facilities for pycassa. """ import logging __all__ = ['PycassaLogger'] levels = {'debug': logging.DEBUG, 'info': logging.INFO, 'warn': logging.WARN, 'error': logging.ERROR, 'critical': logging.CRITICAL} _DEFAULT_LOGGER_NAME = 'pycassa' _DEFAULT_LEVEL = 'info' class PycassaLogger: """ The root logger for pycassa. This uses a singleton-like pattern, so creating a new instance will always give you the same result. This means that you can adjust all of pycassa's logging by calling methods on any instance. pycassa does *not* automatically add a handler to the logger, so logs will not be captured by default. You *must* add a :class:`logging.Handler()` object to the root handler for logs to be captured. See the example usage below. By default, the root logger name is 'pycassa' and the logging level is 'info'. The available levels are: * debug * info * warn * error * critical Example Usage:: >>> import logging >>> log = pycassa.PycassaLogger() >>> log.set_logger_name('pycassa_library') >>> log.set_logger_level('debug') >>> log.get_logger().addHandler(logging.StreamHandler()) """ __shared_state = {} def __init__(self): self.__dict__ = self.__shared_state if not hasattr(self, '_has_been_initialized'): self._has_been_initialized = True self._root_logger = None self._logger_name = None self._level = None self._child_loggers = [] self.set_logger_name(_DEFAULT_LOGGER_NAME) self.set_logger_level(_DEFAULT_LEVEL) def get_logger(self): """ Returns the underlying :class:`logging.Logger` instance. """ return self._root_logger def set_logger_level(self, level): """ Sets the logging level for all pycassa logging. """ self._level = level self._root_logger.setLevel(levels[level]) def get_logger_level(self): """ Gets the logging level for all pycassa logging. """ return self._level def set_logger_name(self, logger_name): """ Sets the root logger name for pycassa and all of its children loggers. """ self._logger_name = logger_name self._root_logger = logging.getLogger(logger_name) h = NullHandler() self._root_logger.addHandler(h) for child_logger in self._child_loggers: # make the callback child_logger[2](logging.getLogger('%s.%s' % (logger_name, child_logger[1]))) if self._level is not None: self.set_logger_level(self._level) def get_logger_name(self): """ Gets the root logger name for pycassa. """ return self._logger_name def add_child_logger(self, child_logger_name, name_change_callback): """ Adds a child logger to pycassa that will be updated when the logger name changes. """ new_logger = logging.getLogger('%s.%s' % (self._logger_name, child_logger_name)) self._child_loggers.append((new_logger, child_logger_name, name_change_callback)) return new_logger class NullHandler(logging.Handler): """ For python pre 2.7 compatibility. """ def emit(self, record): pass # Initialize our "singleton" PycassaLogger() pycassa-1.11.2.1/pycassa/marshal.py000066400000000000000000000311411303744607500170530ustar00rootroot00000000000000""" Tools for marshalling and unmarshalling data stored in Cassandra. """ import uuid import struct import calendar from datetime import datetime from decimal import Decimal import pycassa.util as util _number_types = frozenset((int, long, float)) def make_packer(fmt_string): return struct.Struct(fmt_string) _bool_packer = make_packer('>B') _float_packer = make_packer('>f') _double_packer = make_packer('>d') _long_packer = make_packer('>q') _int_packer = make_packer('>i') _short_packer = make_packer('>H') _BASIC_TYPES = ('BytesType', 'LongType', 'IntegerType', 'UTF8Type', 'AsciiType', 'LexicalUUIDType', 'TimeUUIDType', 'CounterColumnType', 'FloatType', 'DoubleType', 'DateType', 'BooleanType', 'UUIDType', 'Int32Type', 'DecimalType', 'TimestampType') def extract_type_name(typestr): if typestr is None: return 'BytesType' if "DynamicCompositeType" in typestr: return _get_composite_name(typestr) if "CompositeType" in typestr: return _get_composite_name(typestr) if "ReversedType" in typestr: return _get_inner_type(typestr) index = typestr.rfind('.') if index != -1: typestr = typestr[index + 1:] if typestr not in _BASIC_TYPES: typestr = 'BytesType' return typestr def _get_inner_type(typestr): """ Given a str like 'org.apache...ReversedType(LongType)', return just 'LongType' """ first_paren = typestr.find('(') return typestr[first_paren + 1:-1] def _get_inner_types(typestr): """ Given a str like 'org.apache...CompositeType(LongType, DoubleType)', return a tuple of the inner types, like ('LongType', 'DoubleType') """ internal_str = _get_inner_type(typestr) return map(str.strip, internal_str.split(',')) def _get_composite_name(typestr): types = map(extract_type_name, _get_inner_types(typestr)) return "CompositeType(" + ", ".join(types) + ")" def _to_timestamp(v): # Expects Value to be either date or datetime try: converted = calendar.timegm(v.utctimetuple()) converted = converted * 1e3 + getattr(v, 'microsecond', 0) / 1e3 except AttributeError: # Ints and floats are valid timestamps too if type(v) not in _number_types: raise TypeError('DateType arguments must be a datetime or timestamp') converted = v * 1e3 return long(converted) def get_composite_packer(typestr=None, composite_type=None): assert (typestr or composite_type), "Must provide typestr or " + \ "CompositeType instance" if typestr: packers = map(packer_for, _get_inner_types(typestr)) elif composite_type: packers = [c.pack for c in composite_type.components] len_packer = _short_packer.pack def pack_composite(items, slice_start=None): last_index = len(items) - 1 s = '' for i, (item, packer) in enumerate(zip(items, packers)): eoc = '\x00' if isinstance(item, tuple): item, inclusive = item if inclusive: if slice_start: eoc = '\xff' elif slice_start is False: eoc = '\x01' else: if slice_start: eoc = '\x01' elif slice_start is False: eoc = '\xff' elif i == last_index: if slice_start: eoc = '\xff' elif slice_start is False: eoc = '\x01' packed = packer(item) s += ''.join((len_packer(len(packed)), packed, eoc)) return s return pack_composite def get_composite_unpacker(typestr=None, composite_type=None): assert (typestr or composite_type), "Must provide typestr or " + \ "CompositeType instance" if typestr: unpackers = map(unpacker_for, _get_inner_types(typestr)) elif composite_type: unpackers = [c.unpack for c in composite_type.components] len_unpacker = lambda v: _short_packer.unpack(v)[0] def unpack_composite(bytestr): # The composite format for each component is: # # 2 bytes | ? bytes | 1 byte components = [] i = iter(unpackers) while bytestr: unpacker = i.next() length = len_unpacker(bytestr[:2]) components.append(unpacker(bytestr[2:2 + length])) bytestr = bytestr[3 + length:] return tuple(components) return unpack_composite def get_dynamic_composite_packer(typestr): cassandra_types = {} for inner_type in _get_inner_types(typestr): alias, cassandra_type = inner_type.split('=>') cassandra_types[alias] = cassandra_type len_packer = _short_packer.pack def pack_dynamic_composite(items, slice_start=None): last_index = len(items) - 1 s = '' i = 0 for (alias, item) in items: eoc = '\x00' if isinstance(alias, tuple): inclusive = item alias, item = alias if inclusive: if slice_start: eoc = '\xff' elif slice_start is False: eoc = '\x01' else: if slice_start: eoc = '\x01' elif slice_start is False: eoc = '\xff' elif i == last_index: if slice_start: eoc = '\xff' elif slice_start is False: eoc = '\x01' if isinstance(alias, str) and len(alias) == 1: header = '\x80' + alias packer = packer_for(cassandra_types[alias]) else: cassandra_type = str(alias).split('(')[0] header = len_packer(len(cassandra_type)) + cassandra_type packer = packer_for(cassandra_type) i += 1 packed = packer(item) s += ''.join((header, len_packer(len(packed)), packed, eoc)) return s return pack_dynamic_composite def get_dynamic_composite_unpacker(typestr): cassandra_types = {} for inner_type in _get_inner_types(typestr): alias, cassandra_type = inner_type.split('=>') cassandra_types[alias] = cassandra_type len_unpacker = lambda v: _short_packer.unpack(v)[0] def unpack_dynamic_composite(bytestr): # The composite format for each component is: #
# ? bytes | 2 bytes | ? bytes | 1 byte types = [] components = [] while bytestr: header = len_unpacker(bytestr[:2]) if header & 0x8000: alias = bytestr[1] types.append(alias) unpacker = unpacker_for(cassandra_types[alias]) bytestr = bytestr[2:] else: cassandra_type = bytestr[2:2 + header] types.append(cassandra_type) unpacker = unpacker_for(cassandra_type) bytestr = bytestr[2 + header:] length = len_unpacker(bytestr[:2]) components.append(unpacker(bytestr[2:2 + length])) bytestr = bytestr[3 + length:] return tuple(zip(types, components)) return unpack_dynamic_composite def packer_for(typestr): if typestr is None: return lambda v: v if "DynamicCompositeType" in typestr: return get_dynamic_composite_packer(typestr) if "CompositeType" in typestr: return get_composite_packer(typestr) if "ReversedType" in typestr: return packer_for(_get_inner_type(typestr)) data_type = extract_type_name(typestr) if data_type in ('DateType', 'TimestampType'): def pack_date(v, _=None): return _long_packer.pack(_to_timestamp(v)) return pack_date elif data_type == 'BooleanType': def pack_bool(v, _=None): return _bool_packer.pack(bool(v)) return pack_bool elif data_type == 'DoubleType': def pack_double(v, _=None): return _double_packer.pack(v) return pack_double elif data_type == 'FloatType': def pack_float(v, _=None): return _float_packer.pack(v) return pack_float elif data_type == 'DecimalType': def pack_decimal(dec, _=None): sign, digits, exponent = dec.as_tuple() unscaled = int(''.join(map(str, digits))) if sign: unscaled *= -1 scale = _int_packer.pack(-exponent) unscaled = encode_int(unscaled) return scale + unscaled return pack_decimal elif data_type == 'LongType': def pack_long(v, _=None): return _long_packer.pack(v) return pack_long elif data_type == 'Int32Type': def pack_int32(v, _=None): return _int_packer.pack(v) return pack_int32 elif data_type == 'IntegerType': return encode_int elif data_type == 'UTF8Type': def pack_utf8(v, _=None): try: return v.encode('utf-8') except UnicodeDecodeError: # v is already utf-8 encoded return v return pack_utf8 elif 'UUIDType' in data_type: def pack_uuid(value, slice_start=None): if slice_start is None: value = util.convert_time_to_uuid(value, randomize=True) else: value = util.convert_time_to_uuid(value, lowest_val=slice_start, randomize=False) if not hasattr(value, 'bytes'): raise TypeError("%s is not valid for UUIDType" % value) return value.bytes return pack_uuid elif data_type == "CounterColumnType": def noop(value, slice_start=None): return value return noop else: # data_type == 'BytesType' or something unknown def pack_bytes(v, _=None): if not isinstance(v, basestring): raise TypeError("A str or unicode value was expected, " + "but %s was received instead (%s)" % (v.__class__.__name__, str(v))) return v return pack_bytes def unpacker_for(typestr): if typestr is None: return lambda v: v if "DynamicCompositeType" in typestr: return get_dynamic_composite_unpacker(typestr) if "CompositeType" in typestr: return get_composite_unpacker(typestr) if "ReversedType" in typestr: return unpacker_for(_get_inner_type(typestr)) data_type = extract_type_name(typestr) if data_type == 'BytesType': return lambda v: v elif data_type in ('DateType', 'TimestampType'): return lambda v: datetime.utcfromtimestamp( _long_packer.unpack(v)[0] / 1e3) elif data_type == 'BooleanType': return lambda v: bool(_bool_packer.unpack(v)[0]) elif data_type == 'DoubleType': return lambda v: _double_packer.unpack(v)[0] elif data_type == 'FloatType': return lambda v: _float_packer.unpack(v)[0] elif data_type == 'DecimalType': def unpack_decimal(v): scale = _int_packer.unpack(v[:4])[0] unscaled = decode_int(v[4:]) return Decimal('%de%d' % (unscaled, -scale)) return unpack_decimal elif data_type == 'LongType': return lambda v: _long_packer.unpack(v)[0] elif data_type == 'Int32Type': return lambda v: _int_packer.unpack(v)[0] elif data_type == 'IntegerType': return decode_int elif data_type == 'UTF8Type': return lambda v: v.decode('utf-8') elif 'UUIDType' in data_type: return lambda v: uuid.UUID(bytes=v) else: return lambda v: v def encode_int(x, *args): if x >= 0: out = [] while x >= 256: out.append(struct.pack('B', 0xff & x)) x >>= 8 out.append(struct.pack('B', 0xff & x)) if x > 127: out.append('\x00') else: x = -1 - x out = [] while x >= 256: out.append(struct.pack('B', 0xff & ~x)) x >>= 8 if x <= 127: out.append(struct.pack('B', 0xff & ~x)) else: out.append(struct.pack('>H', 0xffff & ~x)) return ''.join(reversed(out)) def decode_int(term, *args): if term != "": val = int(term.encode('hex'), 16) if (ord(term[0]) & 128) != 0: val = val - (1 << (len(term) * 8)) return val pycassa-1.11.2.1/pycassa/pool.py000066400000000000000000001001751303744607500164010ustar00rootroot00000000000000""" Connection pooling for Cassandra connections. """ from __future__ import with_statement import time import threading import random import socket import sys if 'gevent.monkey' in sys.modules: from gevent import queue as Queue else: import Queue # noqa from thrift import Thrift from thrift.transport.TTransport import TTransportException from connection import (Connection, default_socket_factory, default_transport_factory) from logging.pool_logger import PoolLogger from util import as_interface from cassandra.ttypes import TimedOutException, UnavailableException _BASE_BACKOFF = 0.01 __all__ = ['QueuePool', 'ConnectionPool', 'PoolListener', 'ConnectionWrapper', 'AllServersUnavailable', 'MaximumRetryException', 'NoConnectionAvailable', 'InvalidRequestError'] class ConnectionWrapper(Connection): """ Creates a wrapper for a :class:`~.pycassa.connection.Connection` object, adding pooling related functionality while still allowing access to the thrift API calls. These should not be created directly, only obtained through Pool's :meth:`~.ConnectionPool.get()` method. """ # These mark the state of the connection so that we can # check to see that they are not returned, checked out, # or disposed twice (or from the wrong state). _IN_QUEUE = 0 _CHECKED_OUT = 1 _DISPOSED = 2 def __init__(self, pool, max_retries, *args, **kwargs): self._pool = pool self._retry_count = 0 self.max_retries = max_retries self.info = {} self.starttime = time.time() self.operation_count = 0 self._state = ConnectionWrapper._CHECKED_OUT Connection.__init__(self, *args, **kwargs) self._pool._notify_on_connect(self) # For testing purposes only self._should_fail = False self._original_meth = self.send_batch_mutate def return_to_pool(self): """ Returns this to the pool. This has the same effect as calling :meth:`ConnectionPool.put()` on the wrapper. """ self._pool.put(self) def _checkin(self): if self._state == ConnectionWrapper._IN_QUEUE: raise InvalidRequestError("A connection has been returned to " "the connection pool twice.") elif self._state == ConnectionWrapper._DISPOSED: raise InvalidRequestError("A disposed connection has been returned " "to the connection pool.") self._state = ConnectionWrapper._IN_QUEUE def _checkout(self): if self._state != ConnectionWrapper._IN_QUEUE: raise InvalidRequestError("A connection has been checked " "out twice.") self._state = ConnectionWrapper._CHECKED_OUT def _is_in_queue_or_disposed(self): ret = self._state == ConnectionWrapper._IN_QUEUE or \ self._state == ConnectionWrapper._DISPOSED return ret def _dispose_wrapper(self, reason=None): if self._state == ConnectionWrapper._DISPOSED: raise InvalidRequestError("A connection has been disposed twice.") self._state = ConnectionWrapper._DISPOSED self.close() self._pool._notify_on_dispose(self, msg=reason) def _replace(self, new_conn_wrapper): """ Get another wrapper from the pool and replace our own contents with its contents. """ self.server = new_conn_wrapper.server self.transport = new_conn_wrapper.transport self._iprot = new_conn_wrapper._iprot self._oprot = new_conn_wrapper._oprot self.info = new_conn_wrapper.info self.starttime = new_conn_wrapper.starttime self.operation_count = new_conn_wrapper.operation_count self._state = ConnectionWrapper._CHECKED_OUT self._should_fail = new_conn_wrapper._should_fail @classmethod def _retry(cls, f): def new_f(self, *args, **kwargs): self.operation_count += 1 self.info['request'] = {'method': f.__name__, 'args': args, 'kwargs': kwargs} try: allow_retries = kwargs.pop('allow_retries', True) if kwargs.pop('reset', False): self._pool._replace_wrapper() # puts a new wrapper in the queue self._replace(self._pool.get()) # swaps out transport result = f(self, *args, **kwargs) self._retry_count = 0 # reset the count after a success return result except Thrift.TApplicationException: self.close() self._pool._decrement_overflow() self._pool._clear_current() raise except (TimedOutException, UnavailableException, TTransportException, socket.error, IOError, EOFError), exc: self._pool._notify_on_failure(exc, server=self.server, connection=self) self.close() self._pool._decrement_overflow() self._pool._clear_current() self._retry_count += 1 if (not allow_retries or (self.max_retries != -1 and self._retry_count > self.max_retries)): raise MaximumRetryException('Retried %d times. Last failure was %s: %s' % (self._retry_count, exc.__class__.__name__, exc)) # Exponential backoff time.sleep(_BASE_BACKOFF * (2 ** self._retry_count)) kwargs['reset'] = True return new_f(self, *args, **kwargs) new_f.__name__ = f.__name__ return new_f def _fail_once(self, *args, **kwargs): if self._should_fail: self._should_fail = False raise TimedOutException else: return self._original_meth(*args, **kwargs) def get_keyspace_description(self, keyspace=None, use_dict_for_col_metadata=False): """ Describes the given keyspace. If `use_dict_for_col_metadata` is ``True``, the column metadata will be stored as a dictionary instead of a list A dictionary of the form ``{column_family_name: CfDef}`` is returned. """ if keyspace is None: keyspace = self.keyspace ks_def = self.describe_keyspace(keyspace) cf_defs = dict() for cf_def in ks_def.cf_defs: cf_defs[cf_def.name] = cf_def if use_dict_for_col_metadata: old_metadata = cf_def.column_metadata new_metadata = dict() for datum in old_metadata: new_metadata[datum.name] = datum cf_def.column_metadata = new_metadata return cf_defs def __str__(self): return "" % (self.keyspace, self.server) retryable = ('get', 'get_slice', 'multiget_slice', 'get_count', 'multiget_count', 'get_range_slices', 'get_indexed_slices', 'batch_mutate', 'add', 'insert', 'remove', 'remove_counter', 'truncate', 'describe_keyspace', 'atomic_batch_mutate') for fname in retryable: new_f = ConnectionWrapper._retry(getattr(Connection, fname)) setattr(ConnectionWrapper, fname, new_f) class ConnectionPool(object): """A pool that maintains a queue of open connections.""" _max_overflow = 0 def _get_max_overflow(self): return self._max_overflow def _set_max_overflow(self, max_overflow): with self._pool_lock: self._max_overflow = max_overflow self._overflow_enabled = max_overflow > 0 or max_overflow == -1 if max_overflow == -1: self._max_conns = (2 ** 31) - 1 else: self._max_conns = self._pool_size + max_overflow max_overflow = property(_get_max_overflow, _set_max_overflow) """ Whether or not a new connection may be opened when the pool is empty is controlled by `max_overflow`. This specifies how many additional connections may be opened after the pool has reached `pool_size`; keep in mind that these extra connections will be discarded upon checkin until the pool is below `pool_size`. This may be set to -1 to indicate no overflow limit. The default value is 0, which does not allow for overflow. """ pool_timeout = 30 """ If ``pool_size + max_overflow`` connections have already been checked out, an attempt to retrieve a new connection from the pool will wait up to `pool_timeout` seconds for a connection to be returned to the pool before giving up. Note that this setting is only meaningful when you are accessing the pool concurrently, such as with multiple threads. This may be set to 0 to fail immediately or -1 to wait forever. The default value is 30. """ recycle = 10000 """ After performing `recycle` number of operations, connections will be replaced when checked back in to the pool. This may be set to -1 to disable connection recycling. The default value is 10,000. """ max_retries = 5 """ When an operation on a connection fails due to an :exc:`~.TimedOutException` or :exc:`~.UnavailableException`, which tend to indicate single or multiple node failure, the operation will be retried on different nodes up to `max_retries` times before an :exc:`~.MaximumRetryException` is raised. Setting this to 0 disables retries and setting to -1 allows unlimited retries. The default value is 5. """ logging_name = None """ By default, each pool identifies itself in the logs using ``id(self)``. If multiple pools are in use for different purposes, setting `logging_name` will help individual pools to be identified in the logs. """ socket_factory = default_socket_factory """ A function that creates the socket for each connection in the pool. This function should take two arguments: `host`, the host the connection is being made to, and `port`, the destination port. By default, this is function is :func:`~connection.default_socket_factory`. """ transport_factory = default_transport_factory """ A function that creates the transport for each connection in the pool. This function should take three arguments: `tsocket`, a TSocket object for the transport, `host`, the host the connection is being made to, and `port`, the destination port. By default, this is function is :func:`~connection.default_transport_factory`. """ def __init__(self, keyspace, server_list=['localhost:9160'], credentials=None, timeout=0.5, use_threadlocal=True, pool_size=5, prefill=True, socket_factory=default_socket_factory, transport_factory=default_transport_factory, **kwargs): """ All connections in the pool will be opened to `keyspace`. `server_list` is a sequence of servers in the form ``"host:port"`` that the pool will connect to. The port defaults to 9160 if excluded. The list will be randomly shuffled before being drawn from sequentially. `server_list` may also be a function that returns the sequence of servers. If authentication or authorization is required, `credentials` must be supplied. This should be a dictionary containing 'username' and 'password' keys with appropriate string values. `timeout` specifies in seconds how long individual connections will block before timing out. If set to ``None``, connections will never timeout. If `use_threadlocal` is set to ``True``, repeated calls to :meth:`get()` within the same application thread will return the same :class:`ConnectionWrapper` object if one is already checked out from the pool. Be careful when setting `use_threadlocal` to ``False`` in a multithreaded application, especially with retries enabled. Synchronization may be required to prevent the connection from changing while another thread is using it. The pool will keep up to `pool_size` open connections in the pool at any time. When a connection is returned to the pool, the connection will be discarded if the pool already contains `pool_size` connections. The total number of simultaneous connections the pool will allow is ``pool_size + max_overflow``, and the number of "sleeping" connections the pool will allow is ``pool_size``. A good choice for `pool_size` is a multiple of the number of servers passed to the Pool constructor. If a size less than this is chosen, the last ``(len(server_list) - pool_size)`` servers may not be used until either overflow occurs, a connection is recycled, or a connection fails. Similarly, if a multiple of ``len(server_list)`` is not chosen, those same servers would have a decreased load. By default, overflow is disabled. If `prefill` is set to ``True``, `pool_size` connections will be opened when the pool is created. Example Usage: .. code-block:: python >>> pool = pycassa.ConnectionPool(keyspace='Keyspace1', server_list=['10.0.0.4:9160', '10.0.0.5:9160'], prefill=False) >>> cf = pycassa.ColumnFamily(pool, 'Standard1') >>> cf.insert('key', {'col': 'val'}) 1287785685530679 """ self._pool_threadlocal = use_threadlocal self.keyspace = keyspace self.credentials = credentials self.timeout = timeout self.socket_factory = socket_factory self.transport_factory = transport_factory if use_threadlocal: self._tlocal = threading.local() self._pool_size = pool_size self._q = Queue.Queue(pool_size) self._pool_lock = threading.Lock() self._current_conns = 0 # Listener groups self.listeners = [] self._on_connect = [] self._on_checkout = [] self._on_checkin = [] self._on_dispose = [] self._on_recycle = [] self._on_failure = [] self._on_server_list = [] self._on_pool_dispose = [] self._on_pool_max = [] self.add_listener(PoolLogger()) if "listeners" in kwargs: listeners = kwargs["listeners"] for l in listeners: self.add_listener(l) self.logging_name = kwargs.get("logging_name", None) if not self.logging_name: self.logging_name = id(self) if "max_overflow" not in kwargs: self._set_max_overflow(0) recognized_kwargs = ["pool_timeout", "recycle", "max_retries", "max_overflow"] for kw in recognized_kwargs: if kw in kwargs: setattr(self, kw, kwargs[kw]) self.set_server_list(server_list) self._prefill = prefill if self._prefill: self.fill() def set_server_list(self, server_list): """ Sets the server list that the pool will make connections to. `server_list` should be sequence of servers in the form ``"host:port"`` that the pool will connect to. The list will be randomly permuted before being used. `server_list` may also be a function that returns the sequence of servers. """ if callable(server_list): self.server_list = list(server_list()) else: self.server_list = list(server_list) random.shuffle(self.server_list) self._list_position = 0 self._notify_on_server_list(self.server_list) def _get_next_server(self): """ Gets the next 'localhost:port' combination from the list of servers and increments the position. This is not thread-safe, but client-side load-balancing isn't so important that this is a problem. """ if self._list_position >= len(self.server_list): self._list_position = 0 server = self.server_list[self._list_position] self._list_position += 1 return server def _create_connection(self): """Creates a ConnectionWrapper, which opens a pycassa.connection.Connection.""" if not self.server_list: raise AllServersUnavailable('Cannot connect to any servers as server list is empty!') failure_count = 0 while failure_count < 2 * len(self.server_list): try: server = self._get_next_server() wrapper = self._get_new_wrapper(server) return wrapper except (TTransportException, socket.error, IOError, EOFError), exc: self._notify_on_failure(exc, server) failure_count += 1 raise AllServersUnavailable('An attempt was made to connect to each of the servers ' + 'twice, but none of the attempts succeeded. The last failure was %s: %s' % (exc.__class__.__name__, exc)) def fill(self): """ Adds connections to the pool until at least ``pool_size`` connections exist, whether they are currently checked out from the pool or not. .. versionadded:: 1.2.0 """ with self._pool_lock: while self._current_conns < self._pool_size: conn = self._create_connection() conn._checkin() self._q.put(conn, False) self._current_conns += 1 def _get_new_wrapper(self, server): return ConnectionWrapper(self, self.max_retries, self.keyspace, server, timeout=self.timeout, credentials=self.credentials, socket_factory=self.socket_factory, transport_factory=self.transport_factory) def _replace_wrapper(self): """Try to replace the connection.""" if not self._q.full(): conn = self._create_connection() conn._checkin() try: self._q.put(conn, False) except Queue.Full: conn._dispose_wrapper(reason="pool is already full") else: with self._pool_lock: self._current_conns += 1 def _clear_current(self): """ If using threadlocal, clear our threadlocal current conn. """ if self._pool_threadlocal: self._tlocal.current = None def put(self, conn): """ Returns a connection to the pool. """ if not conn.transport.isOpen(): return if self._pool_threadlocal: if hasattr(self._tlocal, 'current') and self._tlocal.current: conn = self._tlocal.current self._tlocal.current = None else: conn = None if conn: conn._retry_count = 0 if conn._is_in_queue_or_disposed(): raise InvalidRequestError("Connection was already checked in or disposed") if self.recycle > -1 and conn.operation_count > self.recycle: new_conn = self._create_connection() self._notify_on_recycle(conn, new_conn) conn._dispose_wrapper(reason="recyling connection") conn = new_conn conn._checkin() self._notify_on_checkin(conn) try: self._q.put_nowait(conn) except Queue.Full: conn._dispose_wrapper(reason="pool is already full") self._decrement_overflow() return_conn = put def _decrement_overflow(self): with self._pool_lock: self._current_conns -= 1 def _new_if_required(self, max_conns, check_empty_queue=False): """ Creates new connection if there is room """ with self._pool_lock: if (not check_empty_queue or self._q.empty()) and self._current_conns < max_conns: new_conn = True self._current_conns += 1 else: new_conn = False if new_conn: try: return self._create_connection() except: with self._pool_lock: self._current_conns -= 1 raise return None def get(self): """ Gets a connection from the pool. """ conn = None if self._pool_threadlocal: try: if self._tlocal.current: conn = self._tlocal.current if conn: return conn except AttributeError: pass conn = self._new_if_required(self._pool_size) if not conn: # if queue is empty and max_overflow is not reached, create new conn conn = self._new_if_required(self._max_conns, check_empty_queue=True) if not conn: # We will have to fetch from the queue, and maybe block timeout = self.pool_timeout if timeout == -1: timeout = None try: conn = self._q.get(timeout=timeout) except Queue.Empty: self._notify_on_pool_max(pool_max=self._max_conns) size_msg = "size %d" % (self._pool_size, ) if self._overflow_enabled: size_msg += "overflow %d" % (self._max_overflow) message = "ConnectionPool limit of %s reached, unable to obtain connection after %d seconds" \ % (size_msg, self.pool_timeout) raise NoConnectionAvailable(message) else: conn._checkout() if self._pool_threadlocal: self._tlocal.current = conn self._notify_on_checkout(conn) return conn def execute(self, f, *args, **kwargs): """ Get a connection from the pool, execute `f` on it with `*args` and `**kwargs`, return the connection to the pool, and return the result of `f`. """ conn = None try: conn = self.get() return getattr(conn, f)(*args, **kwargs) finally: if conn: conn.return_to_pool() def dispose(self): """ Closes all checked in connections in the pool. """ while True: try: conn = self._q.get(False) conn._dispose_wrapper( reason="Pool %s is being disposed" % id(self)) self._decrement_overflow() except Queue.Empty: break self._notify_on_pool_dispose() def size(self): """ Returns the capacity of the pool. """ return self._pool_size def checkedin(self): """ Returns the number of connections currently in the pool. """ return self._q.qsize() def overflow(self): """ Returns the number of overflow connections that are currently open. """ return max(self._current_conns - self._pool_size, 0) def checkedout(self): """ Returns the number of connections currently checked out from the pool. """ return self._current_conns - self.checkedin() def add_listener(self, listener): """ Add a :class:`PoolListener`-like object to this pool. `listener` may be an object that implements some or all of :class:`PoolListener`, or a dictionary of callables containing implementations of some or all of the named methods in :class:`PoolListener`. """ listener = as_interface(listener, methods=('connection_created', 'connection_checked_out', 'connection_checked_in', 'connection_disposed', 'connection_recycled', 'connection_failed', 'obtained_server_list', 'pool_disposed', 'pool_at_max')) self.listeners.append(listener) if hasattr(listener, 'connection_created'): self._on_connect.append(listener) if hasattr(listener, 'connection_checked_out'): self._on_checkout.append(listener) if hasattr(listener, 'connection_checked_in'): self._on_checkin.append(listener) if hasattr(listener, 'connection_disposed'): self._on_dispose.append(listener) if hasattr(listener, 'connection_recycled'): self._on_recycle.append(listener) if hasattr(listener, 'connection_failed'): self._on_failure.append(listener) if hasattr(listener, 'obtained_server_list'): self._on_server_list.append(listener) if hasattr(listener, 'pool_disposed'): self._on_pool_dispose.append(listener) if hasattr(listener, 'pool_at_max'): self._on_pool_max.append(listener) def _notify_on_pool_dispose(self): if self._on_pool_dispose: dic = {'pool_id': self.logging_name, 'level': 'info'} for l in self._on_pool_dispose: l.pool_disposed(dic) def _notify_on_pool_max(self, pool_max): if self._on_pool_max: dic = {'pool_id': self.logging_name, 'level': 'info', 'pool_max': pool_max} for l in self._on_pool_max: l.pool_at_max(dic) def _notify_on_dispose(self, conn_record, msg=""): if self._on_dispose: dic = {'pool_id': self.logging_name, 'level': 'debug', 'connection': conn_record} if msg: dic['message'] = msg for l in self._on_dispose: l.connection_disposed(dic) def _notify_on_server_list(self, server_list): dic = {'pool_id': self.logging_name, 'level': 'debug', 'server_list': server_list} if self._on_server_list: for l in self._on_server_list: l.obtained_server_list(dic) def _notify_on_recycle(self, old_conn, new_conn): if self._on_recycle: dic = {'pool_id': self.logging_name, 'level': 'debug', 'old_conn': old_conn, 'new_conn': new_conn} for l in self._on_recycle: l.connection_recycled(dic) def _notify_on_connect(self, conn_record, msg="", error=None): if self._on_connect: dic = {'pool_id': self.logging_name, 'level': 'debug', 'connection': conn_record} if msg: dic['message'] = msg if error: dic['error'] = error dic['level'] = 'warn' for l in self._on_connect: l.connection_created(dic) def _notify_on_checkin(self, conn_record): if self._on_checkin: dic = {'pool_id': self.logging_name, 'level': 'debug', 'connection': conn_record} for l in self._on_checkin: l.connection_checked_in(dic) def _notify_on_checkout(self, conn_record): if self._on_checkout: dic = {'pool_id': self.logging_name, 'level': 'debug', 'connection': conn_record} for l in self._on_checkout: l.connection_checked_out(dic) def _notify_on_failure(self, error, server, connection=None): if self._on_failure: dic = {'pool_id': self.logging_name, 'level': 'info', 'error': error, 'server': server, 'connection': connection} for l in self._on_failure: l.connection_failed(dic) QueuePool = ConnectionPool class PoolListener(object): """Hooks into the lifecycle of connections in a :class:`ConnectionPool`. Usage:: class MyListener(PoolListener): def connection_created(self, dic): '''perform connect operations''' # etc. # create a new pool with a listener p = ConnectionPool(..., listeners=[MyListener()]) # or add a listener after the fact p.add_listener(MyListener()) Listeners receive a dictionary that contains event information and is indexed by a string describing that piece of info. For example, all event dictionaries include 'level', so dic['level'] will return the prescribed logging level. There is no need to subclass :class:`PoolListener` to handle events. Any class that implements one or more of these methods can be used as a pool listener. The :class:`ConnectionPool` will inspect the methods provided by a listener object and add the listener to one or more internal event queues based on its capabilities. In terms of efficiency and function call overhead, you're much better off only providing implementations for the hooks you'll be using. Each of the :class:`PoolListener` methods wil be called with a :class:`dict` as the single parameter. This :class:`dict` may contain the following fields: * `connection`: The :class:`ConnectionWrapper` object that persistently manages the connection * `message`: The reason this event happened * `error`: The :class:`Exception` that caused this event * `pool_id`: The id of the :class:`ConnectionPool` that this event came from * `level`: The prescribed logging level for this event. Can be 'debug', 'info', 'warn', 'error', or 'critical' Entries in the :class:`dict` that are specific to only one event type are detailed with each method. """ def connection_created(self, dic): """Called once for each new Cassandra connection. Fields: `pool_id`, `level`, and `connection`. """ def connection_checked_out(self, dic): """Called when a connection is retrieved from the Pool. Fields: `pool_id`, `level`, and `connection`. """ def connection_checked_in(self, dic): """Called when a connection returns to the pool. Fields: `pool_id`, `level`, and `connection`. """ def connection_disposed(self, dic): """Called when a connection is closed. ``dic['message']``: A reason for closing the connection, if any. Fields: `pool_id`, `level`, `connection`, and `message`. """ def connection_recycled(self, dic): """Called when a connection is recycled. ``dic['old_conn']``: The :class:`ConnectionWrapper` that is being recycled ``dic['new_conn']``: The :class:`ConnectionWrapper` that is replacing it Fields: `pool_id`, `level`, `old_conn`, and `new_conn`. """ def connection_failed(self, dic): """Called when a connection to a single server fails. ``dic['server']``: The server the connection was made to. Fields: `pool_id`, `level`, `error`, `server`, and `connection`. """ def server_list_obtained(self, dic): """Called when the pool finalizes its server list. ``dic['server_list']``: The randomly permuted list of servers that the pool will choose from. Fields: `pool_id`, `level`, and `server_list`. """ def pool_disposed(self, dic): """Called when a pool is disposed. Fields: `pool_id`, and `level`. """ def pool_at_max(self, dic): """ Called when an attempt is made to get a new connection from the pool, but the pool is already at its max size. ``dic['pool_max']``: The max number of connections the pool will keep open at one time. Fields: `pool_id`, `pool_max`, and `level`. """ class AllServersUnavailable(Exception): """Raised when none of the servers given to a pool can be connected to.""" class NoConnectionAvailable(Exception): """Raised when there are no connections left in a pool.""" class MaximumRetryException(Exception): """ Raised when a :class:`ConnectionWrapper` has retried the maximum allowed times before being returned to the pool; note that all of the retries do not have to be on the same operation. """ class InvalidRequestError(Exception): """ Pycassa was asked to do something it can't do. This error generally corresponds to runtime state errors. """ pycassa-1.11.2.1/pycassa/system_manager.py000066400000000000000000000417161303744607500204530ustar00rootroot00000000000000import time from pycassa.connection import (Connection, default_socket_factory, default_transport_factory) from pycassa.cassandra.ttypes import IndexType, KsDef, CfDef, ColumnDef,\ SchemaDisagreementException import pycassa.marshal as marshal import pycassa.types as types _DEFAULT_TIMEOUT = 30 _SAMPLE_PERIOD = 0.25 SIMPLE_STRATEGY = 'SimpleStrategy' """ Replication strategy that simply chooses consecutive nodes in the ring for replicas """ NETWORK_TOPOLOGY_STRATEGY = 'NetworkTopologyStrategy' """ Replication strategy that puts a number of replicas in each datacenter """ OLD_NETWORK_TOPOLOGY_STRATEGY = 'OldNetworkTopologyStrategy' """ Original replication strategy for putting a number of replicas in each datacenter. This was originally called 'RackAwareStrategy'. """ KEYS_INDEX = IndexType.KEYS """ A secondary index type where each indexed value receives its own row """ BYTES_TYPE = types.BytesType() LONG_TYPE = types.LongType() INT_TYPE = types.IntegerType() ASCII_TYPE = types.AsciiType() UTF8_TYPE = types.UTF8Type() TIME_UUID_TYPE = types.TimeUUIDType() LEXICAL_UUID_TYPE = types.LexicalUUIDType() COUNTER_COLUMN_TYPE = types.CounterColumnType() DOUBLE_TYPE = types.DoubleType() FLOAT_TYPE = types.FloatType() DECIMAL_TYPE = types.DecimalType() BOOLEAN_TYPE = types.BooleanType() DATE_TYPE = types.DateType() class SystemManager(object): """ Lets you examine and modify schema definitions as well as get basic information about the cluster. This class is mainly designed to be used manually in a python shell, not as part of a program, although it can be used that way. All operations which modify a keyspace or column family definition will block until the cluster reports that all nodes have accepted the modification. Example Usage: .. code-block:: python >>> from pycassa.system_manager import * >>> sys = SystemManager('192.168.10.2:9160') >>> sys.create_keyspace('TestKeyspace', SIMPLE_STRATEGY, {'replication_factor': '1'}) >>> sys.create_column_family('TestKeyspace', 'TestCF', super=False, ... comparator_type=LONG_TYPE) >>> sys.alter_column_family('TestKeyspace', 'TestCF', key_cache_size=42, gc_grace_seconds=1000) >>> sys.drop_keyspace('TestKeyspace') >>> sys.close() """ def __init__(self, server='localhost:9160', credentials=None, framed_transport=True, timeout=_DEFAULT_TIMEOUT, socket_factory=default_socket_factory, transport_factory=default_transport_factory): self._conn = Connection(None, server, framed_transport, timeout, credentials, socket_factory, transport_factory) def close(self): """ Closes the underlying connection """ self._conn.close() def get_keyspace_column_families(self, keyspace, use_dict_for_col_metadata=False): """ Returns a raw description of the keyspace, which is more useful for use in programs than :meth:`describe_keyspace()`. If `use_dict_for_col_metadata` is ``True``, the CfDef's column_metadata will be stored as a dictionary where the keys are column names instead of a list. Returns a dictionary of the form ``{column_family_name: CfDef}`` """ if keyspace is None: keyspace = self._keyspace ks_def = self._conn.describe_keyspace(keyspace) cf_defs = dict() for cf_def in ks_def.cf_defs: cf_defs[cf_def.name] = cf_def if use_dict_for_col_metadata: old_metadata = cf_def.column_metadata new_metadata = dict() for datum in old_metadata: new_metadata[datum.name] = datum cf_def.column_metadata = new_metadata return cf_defs def get_keyspace_properties(self, keyspace): """ Gets a keyspace's properties. Returns a :class:`dict` with 'strategy_class' and 'strategy_options' as keys. """ if keyspace is None: keyspace = self._keyspace ks_def = self._conn.describe_keyspace(keyspace) return {'replication_strategy': ks_def.strategy_class, 'strategy_options': ks_def.strategy_options} def list_keyspaces(self): """ Returns a list of all keyspace names. """ return [ks.name for ks in self._conn.describe_keyspaces()] def describe_ring(self, keyspace): """ Describes the Cassandra cluster """ return self._conn.describe_ring(keyspace) def describe_token_map(self): """ List tokens and their node assignments. """ return self._conn.describe_token_map() def describe_cluster_name(self): """ Gives the cluster name """ return self._conn.describe_cluster_name() def describe_version(self): """ Gives the server's API version """ return self._conn.describe_version() def describe_schema_versions(self): """ Lists what schema version each node has """ return self._conn.describe_schema_versions() def describe_partitioner(self): """ Gives the partitioner that the cluster is using """ part = self._conn.describe_partitioner() return part[part.rfind('.') + 1:] def describe_snitch(self): """ Gives the snitch that the cluster is using """ snitch = self._conn.describe_snitch() return snitch[snitch.rfind('.') + 1:] def _system_add_keyspace(self, ksdef): return self._schema_update(self._conn.system_add_keyspace, ksdef) def _system_update_keyspace(self, ksdef): return self._schema_update(self._conn.system_update_keyspace, ksdef) def create_keyspace(self, name, replication_strategy=SIMPLE_STRATEGY, strategy_options=None, durable_writes=True, **ks_kwargs): """ Creates a new keyspace. Column families may be added to this keyspace after it is created using :meth:`create_column_family()`. `replication_strategy` determines how replicas are chosen for this keyspace. The strategies that Cassandra provides by default are available as :const:`SIMPLE_STRATEGY`, :const:`NETWORK_TOPOLOGY_STRATEGY`, and :const:`OLD_NETWORK_TOPOLOGY_STRATEGY`. `strategy_options` is a dictionary of strategy options. For NetworkTopologyStrategy, the dictionary should look like ``{'Datacenter1': '2', 'Datacenter2': '1'}``. This maps each datacenter (as defined in a Cassandra property file) to a replica count. For SimpleStrategy, you can specify the replication factor as follows: ``{'replication_factor': '1'}``. Example Usage: .. code-block:: python >>> from pycassa.system_manager import * >>> sys = SystemManager('192.168.10.2:9160') >>> # Create a SimpleStrategy keyspace >>> sys.create_keyspace('SimpleKS', SIMPLE_STRATEGY, {'replication_factor': '1'}) >>> # Create a NetworkTopologyStrategy keyspace >>> sys.create_keyspace('NTS_KS', NETWORK_TOPOLOGY_STRATEGY, {'DC1': '2', 'DC2': '1'}) >>> sys.close() """ if replication_strategy.find('.') == -1: strategy_class = 'org.apache.cassandra.locator.%s' % replication_strategy else: strategy_class = replication_strategy ksdef = KsDef(name, strategy_class=strategy_class, strategy_options=strategy_options, cf_defs=[], durable_writes=durable_writes) for k, v in ks_kwargs.iteritems(): setattr(ksdef, k, v) self._system_add_keyspace(ksdef) def alter_keyspace(self, keyspace, replication_strategy=None, strategy_options=None, durable_writes=None, **ks_kwargs): """ Alters an existing keyspace. .. warning:: Don't use this unless you know what you are doing. Parameters are the same as for :meth:`create_keyspace()`. """ old_ksdef = self._conn.describe_keyspace(keyspace) old_durable = getattr(old_ksdef, 'durable_writes', True) ksdef = KsDef(name=old_ksdef.name, strategy_class=old_ksdef.strategy_class, strategy_options=old_ksdef.strategy_options, cf_defs=[], durable_writes=old_durable) if replication_strategy is not None: if replication_strategy.find('.') == -1: ksdef.strategy_class = 'org.apache.cassandra.locator.%s' % replication_strategy else: ksdef.strategy_class = replication_strategy if strategy_options is not None: ksdef.strategy_options = strategy_options if durable_writes is not None: ksdef.durable_writes = durable_writes for k, v in ks_kwargs.iteritems(): setattr(ksdef, k, v) self._system_update_keyspace(ksdef) def drop_keyspace(self, keyspace): """ Drops a keyspace from the cluster. """ self._schema_update(self._conn.system_drop_keyspace, keyspace) def _system_add_column_family(self, cfdef): self._conn.set_keyspace(cfdef.keyspace) return self._schema_update(self._conn.system_add_column_family, cfdef) def create_column_family(self, keyspace, name, column_validation_classes=None, **cf_kwargs): """ Creates a new column family in a given keyspace. If a value is not supplied for any of optional parameters, Cassandra will use a reasonable default value. `keyspace` should be the name of the keyspace the column family will be created in. `name` gives the name of the column family. """ self._conn.set_keyspace(keyspace) cfdef = CfDef() cfdef.keyspace = keyspace cfdef.name = name if cf_kwargs.pop('super', False): cf_kwargs.setdefault('column_type', 'Super') for k, v in cf_kwargs.iteritems(): v = self._convert_class_attrs(k, v) setattr(cfdef, k, v) if column_validation_classes: for (colname, value_type) in column_validation_classes.items(): cfdef = self._alter_column_cfdef(cfdef, colname, value_type) self._system_add_column_family(cfdef) def _system_update_column_family(self, cfdef): return self._schema_update(self._conn.system_update_column_family, cfdef) def alter_column_family(self, keyspace, column_family, column_validation_classes=None, **cf_kwargs): """ Alters an existing column family. Parameter meanings are the same as for :meth:`create_column_family`. """ self._conn.set_keyspace(keyspace) cfdef = self.get_keyspace_column_families(keyspace)[column_family] for k, v in cf_kwargs.iteritems(): v = self._convert_class_attrs(k, v) setattr(cfdef, k, v) if column_validation_classes: for (colname, value_type) in column_validation_classes.items(): cfdef = self._alter_column_cfdef(cfdef, colname, value_type) self._system_update_column_family(cfdef) def drop_column_family(self, keyspace, column_family): """ Drops a column family from the keyspace. """ self._conn.set_keyspace(keyspace) self._schema_update(self._conn.system_drop_column_family, column_family) def _convert_class_attrs(self, attr, value): if attr in ('comparator_type', 'subcomparator_type', 'key_validation_class', 'default_validation_class'): return self._qualify_type_class(value) else: return value def _qualify_type_class(self, classname): if classname: if isinstance(classname, types.CassandraType): s = str(classname) elif isinstance(classname, basestring): s = classname else: raise TypeError( "Column family validators and comparators " \ "must be specified as instances of " \ "pycassa.types.CassandraType subclasses or strings.") if s.find('.') == -1: return 'org.apache.cassandra.db.marshal.%s' % s else: return s else: return None def _alter_column_cfdef(self, cfdef, column, value_type): if cfdef.column_type == 'Super': packer = marshal.packer_for(cfdef.subcomparator_type) else: packer = marshal.packer_for(cfdef.comparator_type) packed_column = packer(column) value_type = self._qualify_type_class(value_type) cfdef.column_metadata = cfdef.column_metadata or [] matched = False for c in cfdef.column_metadata: if c.name == packed_column: c.validation_class = value_type matched = True break if not matched: cfdef.column_metadata.append(ColumnDef(packed_column, value_type, None, None)) return cfdef def alter_column(self, keyspace, column_family, column, value_type): """ Sets a data type for the value of a specific column. `value_type` is a string that determines what type the column value will be. By default, :const:`LONG_TYPE`, :const:`INT_TYPE`, :const:`ASCII_TYPE`, :const:`UTF8_TYPE`, :const:`TIME_UUID_TYPE`, :const:`LEXICAL_UUID_TYPE` and :const:`BYTES_TYPE` are provided. Custom types may be used as well by providing the class name; if the custom comparator class is not in ``org.apache.cassandra.db.marshal``, the fully qualified class name must be given. For super column families, this sets the subcolumn value type for any subcolumn named `column`, regardless of the super column name. """ self._conn.set_keyspace(keyspace) cfdef = self.get_keyspace_column_families(keyspace)[column_family] self._system_update_column_family(self._alter_column_cfdef(cfdef, column, value_type)) def create_index(self, keyspace, column_family, column, value_type, index_type=KEYS_INDEX, index_name=None): """ Creates an index on a column. This allows efficient for index usage via :meth:`~pycassa.columnfamily.ColumnFamily.get_indexed_slices()` `column` specifies what column to index, and `value_type` is a string that describes that column's value's data type; see :meth:`alter_column()` for a full description of `value_type`. `index_type` determines how the index will be stored internally. Currently, :const:`KEYS_INDEX` is the only option. `index_name` is an optional name for the index. Example Usage: .. code-block:: python >>> from pycassa.system_manager import * >>> sys = SystemManager('192.168.2.10:9160') >>> sys.create_index('Keyspace1', 'Standard1', 'birthdate', LONG_TYPE, index_name='bday_index') >>> sys.close """ self._conn.set_keyspace(keyspace) cfdef = self.get_keyspace_column_families(keyspace)[column_family] packer = marshal.packer_for(cfdef.comparator_type) packed_column = packer(column) value_type = self._qualify_type_class(value_type) coldef = ColumnDef(packed_column, value_type, index_type, index_name) for c in cfdef.column_metadata: if c.name == packed_column: cfdef.column_metadata.remove(c) break cfdef.column_metadata.append(coldef) self._system_update_column_family(cfdef) def drop_index(self, keyspace, column_family, column): """ Drops an index on a column. """ self._conn.set_keyspace(keyspace) cfdef = self.get_keyspace_column_families(keyspace)[column_family] matched = False for c in cfdef.column_metadata: if c.name == column: c.index_type = None c.index_name = None matched = True break if matched: self._system_update_column_family(cfdef) def _wait_for_agreement(self): while True: versions = self._conn.describe_schema_versions() # ignore unreachable nodes live_versions = [key for key in versions.keys() if key != 'UNREACHABLE'] if len(live_versions) == 1: break else: time.sleep(_SAMPLE_PERIOD) def _schema_update(self, schema_func, *args): """ Call schema updates functions and properly waits for agreement if needed. """ while True: try: schema_version = schema_func(*args) except SchemaDisagreementException: self._wait_for_agreement() else: break return schema_version pycassa-1.11.2.1/pycassa/types.py000066400000000000000000000214241303744607500165730ustar00rootroot00000000000000""" Data type definitions that are used when converting data to and from the binary format that the data will be stored in. In addition to the default classes included here, you may also define custom types by creating a new class that extends :class:`~.CassandraType`. For example, IntString, which stores an arbitrary integer as a string, may be defined as follows: .. code-block:: python >>> class IntString(pycassa.types.CassandraType): ... ... @staticmethod ... def pack(intval): ... return str(intval) ... ... @staticmethod ... def unpack(strval): ... return int(strval) """ import calendar from datetime import datetime import pycassa.marshal as marshal __all__ = ('CassandraType', 'BytesType', 'LongType', 'IntegerType', 'AsciiType', 'UTF8Type', 'TimeUUIDType', 'LexicalUUIDType', 'CounterColumnType', 'DoubleType', 'FloatType', 'DecimalType', 'BooleanType', 'DateType', 'OldPycassaDateType', 'IntermediateDateType', 'CompositeType', 'UUIDType', 'DynamicCompositeType', 'TimestampType') class CassandraType(object): """ A data type that Cassandra is aware of and knows how to validate and sort. All of the other classes in this module are subclasses of this class. If `reversed` is true and this is used as a column comparator, the columns will be sorted in reverse order. The `default` parameter only applies to use of this with ColumnFamilyMap, where `default` is used if a row does not contain a column corresponding to this item. """ def __init__(self, reversed=False, default=None): self.reversed = reversed self.default = default if not hasattr(self.__class__, 'pack'): self.pack = marshal.packer_for(self.__class__.__name__) if not hasattr(self.__class__, 'unpack'): self.unpack = marshal.unpacker_for(self.__class__.__name__) def __str__(self): return self.__class__.__name__ + "(reversed=" + str(self.reversed).lower() + ")" class BytesType(CassandraType): """ Stores data as a byte array """ pass class LongType(CassandraType): """ Stores data as an 8 byte integer """ pass class IntegerType(CassandraType): """ Stores data as a variable-length integer. This is a more compact format for storing small integers than :class:`~.LongType`, and the limits on the size of the integer are much higher. .. versionchanged:: 1.2.0 Prior to 1.2.0, this was always stored as a 4 byte integer. """ pass class Int32Type(CassandraType): """ Stores data as a 4 byte integer """ pass class AsciiType(CassandraType): """ Stores data as ASCII text """ pass class UTF8Type(CassandraType): """ Stores data as UTF8 encoded text """ pass class UUIDType(CassandraType): """ Stores data as a type 1 or type 4 UUID """ pass class TimeUUIDType(CassandraType): """ Stores data as a version 1 UUID """ pass class LexicalUUIDType(CassandraType): """ Stores data as a non-version 1 UUID """ pass class CounterColumnType(CassandraType): """ A 64bit counter column """ pass class DoubleType(CassandraType): """ Stores data as an 8 byte double """ pass class FloatType(CassandraType): """ Stores data as a 4 byte float """ pass class DecimalType(CassandraType): """ Stores an unlimited precision decimal number. `decimal.Decimal` objects are used by pycassa to represent these objects. """ pass class BooleanType(CassandraType): """ Stores data as a 1 byte boolean """ pass class DateType(CassandraType): """ An 8 byte timestamp. This will be returned as a :class:`datetime.datetime` instance by pycassa. Either :class:`datetime` instances or timestamps will be accepted. .. versionchanged:: 1.7.0 Prior to 1.7.0, datetime objects were expected to be in local time. In 1.7.0 and beyond, naive datetimes are assumed to be in UTC and tz-aware objects will be automatically converted to UTC for storage in Cassandra. """ pass TimestampType = DateType def _to_timestamp(v, use_micros=False): # Expects Value to be either date or datetime if use_micros: scale = 1e6 micro_scale = 1.0 else: scale = 1e3 micro_scale = 1e3 try: converted = calendar.timegm(v.utctimetuple()) converted = (converted * scale) + \ (getattr(v, 'microsecond', 0) / micro_scale) except AttributeError: # Ints and floats are valid timestamps too if type(v) not in marshal._number_types: raise TypeError('DateType arguments must be a datetime or timestamp') converted = v * scale return long(converted) class OldPycassaDateType(CassandraType): """ This class can only read and write the DateType format used by pycassa versions 1.2.0 to 1.5.0. This formats store the number of microseconds since the unix epoch, rather than the number of milliseconds, which is what cassandra-cli and other clients supporting DateType use. .. versionchanged:: 1.7.0 Prior to 1.7.0, datetime objects were expected to be in local time. In 1.7.0 and beyond, naive datetimes are assumed to be in UTC and tz-aware objects will be automatically converted to UTC for storage in Cassandra. """ @staticmethod def pack(v, *args, **kwargs): ts = _to_timestamp(v, use_micros=True) return marshal._long_packer.pack(ts) @staticmethod def unpack(v): ts = marshal._long_packer.unpack(v)[0] / 1e6 return datetime.utcfromtimestamp(ts) class IntermediateDateType(CassandraType): """ This class is capable of reading either the DateType format by pycassa versions 1.2.0 to 1.5.0 or the correct format used in pycassa 1.5.1+. It will only write the new, correct format. This type is a good choice when you are using DateType as the validator for non-indexed column values and you are in the process of converting from thee old format to the new format. It almost certainly *should not be used* for row keys, column names (if you care about the sorting), or column values that have a secondary index on them. .. versionchanged:: 1.7.0 Prior to 1.7.0, datetime objects were expected to be in local time. In 1.7.0 and beyond, naive datetimes are assumed to be in UTC and tz-aware objects will be automatically converted to UTC for storage in Cassandra. """ @staticmethod def pack(v, *args, **kwargs): ts = _to_timestamp(v, use_micros=False) return marshal._long_packer.pack(ts) @staticmethod def unpack(v): raw_ts = marshal._long_packer.unpack(v)[0] / 1e3 try: return datetime.utcfromtimestamp(raw_ts) except ValueError: # convert from bad microsecond format to millis corrected_ts = raw_ts / 1e3 return datetime.utcfromtimestamp(corrected_ts) class CompositeType(CassandraType): """ A type composed of one or more components, each of which have their own type. When sorted, items are primarily sorted by their first component, secondarily by their second component, and so on. Each of `*components` should be an instance of a subclass of :class:`CassandraType`. .. seealso:: :ref:`composite-types` """ def __init__(self, *components): self.components = components def __str__(self): return "CompositeType(" + ", ".join(map(str, self.components)) + ")" @property def pack(self): return marshal.get_composite_packer(composite_type=self) @property def unpack(self): return marshal.get_composite_unpacker(composite_type=self) class DynamicCompositeType(CassandraType): """ A type composed of one or more components, each of which have their own type. When sorted, items are primarily sorted by their first component, secondarily by their second component, and so on. Unlike CompositeType, DynamicCompositeType columns need not all be of the same structure. Each column can be composed of different component types. Components are specified using a 2-tuple made up of a comparator type and value. Aliases for comparator types can optionally be specified with a dictionary during instantiation. """ def __init__(self, *aliases): self.aliases = {} for alias in aliases: if isinstance(alias, dict): self.aliases.update(alias) def __str__(self): aliases = [] for k, v in self.aliases.iteritems(): aliases.append(k + '=>' + str(v)) return "DynamicCompositeType(" + ", ".join(aliases) + ")" pycassa-1.11.2.1/pycassa/util.py000066400000000000000000000312371303744607500164070ustar00rootroot00000000000000""" A combination of utilities used internally by pycassa and utilities available for use by others working with pycassa. """ import random import uuid import calendar __all__ = ['convert_time_to_uuid', 'convert_uuid_to_time', 'OrderedDict'] _number_types = frozenset((int, long, float)) LOWEST_TIME_UUID = uuid.UUID('00000000-0000-1000-8080-808080808080') """ The lowest possible TimeUUID, as sorted by Cassandra. """ HIGHEST_TIME_UUID = uuid.UUID('ffffffff-ffff-1fff-bf7f-7f7f7f7f7f7f') """ The highest possible TimeUUID, as sorted by Cassandra. """ def convert_time_to_uuid(time_arg, lowest_val=True, randomize=False): """ Converts a datetime or timestamp to a type 1 :class:`uuid.UUID`. This is to assist with getting a time slice of columns or creating columns when column names are ``TimeUUIDType``. Note that this is done automatically in most cases if name packing and value packing are enabled. Also, be careful not to rely on this when specifying a discrete set of columns to fetch, as the non-timestamp portions of the UUID will be generated randomly. This problem does not matter with slice arguments, however, as the non-timestamp portions can be set to their lowest or highest possible values. :param datetime: The time to use for the timestamp portion of the UUID. Expected inputs to this would either be a :class:`datetime` object or a timestamp with the same precision produced by :meth:`time.time()`. That is, sub-second precision should be below the decimal place. :type datetime: :class:`datetime` or timestamp :param lowest_val: Whether the UUID produced should be the lowest possible value UUID with the same timestamp as datetime or the highest possible value. :type lowest_val: bool :param randomize: Whether the clock and node bits of the UUID should be randomly generated. The `lowest_val` argument will be ignored if this is true. :type randomize: bool :rtype: :class:`uuid.UUID` .. versionchanged:: 1.7.0 Prior to 1.7.0, datetime objects were expected to be in local time. In 1.7.0 and beyond, naive datetimes are assumed to be in UTC and tz-aware objects will be automatically converted to UTC. """ if isinstance(time_arg, uuid.UUID): return time_arg if hasattr(time_arg, 'utctimetuple'): seconds = int(calendar.timegm(time_arg.utctimetuple())) microseconds = (seconds * 1e6) + time_arg.time().microsecond elif type(time_arg) in _number_types: microseconds = int(time_arg * 1e6) else: raise ValueError('Argument for a v1 UUID column name or value was ' + 'neither a UUID, a datetime, or a number') # 0x01b21dd213814000 is the number of 100-ns intervals between the # UUID epoch 1582-10-15 00:00:00 and the Unix epoch 1970-01-01 00:00:00. timestamp = int(microseconds * 10) + 0x01b21dd213814000L time_low = timestamp & 0xffffffffL time_mid = (timestamp >> 32L) & 0xffffL time_hi_version = (timestamp >> 48L) & 0x0fffL if randomize: rand_bits = random.getrandbits(8 + 8 + 48) clock_seq_low = rand_bits & 0xffL # 8 bits, no offset # keep the first two bits as 10 for the uuid variant clock_seq_hi_variant = 0b10000000 | (0b00111111 & ((rand_bits & 0xff00L) >> 8)) # 8 bits, 8 offset node = (rand_bits & 0xffffffffffff0000L) >> 16 # 48 bits, 16 offset else: # In the event of a timestamp tie, Cassandra compares the two # byte arrays directly. This is a *signed* comparison of each byte # in the two arrays. So, we have to make each byte -128 or +127 for # this to work correctly. # # For the clock_seq_hi_variant, we don't get to pick the two most # significant bits (they're always 10), so we are dealing with a # positive byte range for this particular byte. if lowest_val: # Make the lowest value UUID with the same timestamp clock_seq_low = 0x80L clock_seq_hi_variant = 0 & 0x80L # The two most significant bits # will be 10 for the variant node = 0x808080808080L # 48 bits else: # Make the highest value UUID with the same timestamp # uuid timestamps have 100ns precision, while the timestamp # we have only has microsecond precision; to create the highest # uuid for the same microsecond, add 900ns timestamp = int(timestamp + 9) clock_seq_low = 0x7fL clock_seq_hi_variant = 0xbfL # The two most significant bits will # 10 for the variant node = 0x7f7f7f7f7f7fL # 48 bits return uuid.UUID(fields=(time_low, time_mid, time_hi_version, clock_seq_hi_variant, clock_seq_low, node), version=1) def convert_uuid_to_time(uuid_arg): """ Converts a version 1 :class:`uuid.UUID` to a timestamp with the same precision as :meth:`time.time()` returns. This is useful for examining the results of queries returning a v1 :class:`~uuid.UUID`. :param uuid_arg: a version 1 :class:`~uuid.UUID` :rtype: timestamp """ ts = uuid_arg.get_time() return (ts - 0x01b21dd213814000L)/1e7 # Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010 Michael Bayer mike_mp@zzzcomputing.com # # The 'as_interface' method is part of SQLAlchemy and is released under # the MIT License: http://www.opensource.org/licenses/mit-license.php import operator def as_interface(obj, cls=None, methods=None, required=None): """Ensure basic interface compliance for an instance or dict of callables. Checks that ``obj`` implements public methods of ``cls`` or has members listed in ``methods``. If ``required`` is not supplied, implementing at least one interface method is sufficient. Methods present on ``obj`` that are not in the interface are ignored. If ``obj`` is a dict and ``dict`` does not meet the interface requirements, the keys of the dictionary are inspected. Keys present in ``obj`` that are not in the interface will raise TypeErrors. Raises TypeError if ``obj`` does not meet the interface criteria. In all passing cases, an object with callable members is returned. In the simple case, ``obj`` is returned as-is; if dict processing kicks in then an anonymous class is returned. obj A type, instance, or dictionary of callables. cls Optional, a type. All public methods of cls are considered the interface. An ``obj`` instance of cls will always pass, ignoring ``required``.. methods Optional, a sequence of method names to consider as the interface. required Optional, a sequence of mandatory implementations. If omitted, an ``obj`` that provides at least one interface method is considered sufficient. As a convenience, required may be a type, in which case all public methods of the type are required. """ if not cls and not methods: raise TypeError('a class or collection of method names are required') if isinstance(cls, type) and isinstance(obj, cls): return obj interface = set(methods or [m for m in dir(cls) if not m.startswith('_')]) implemented = set(dir(obj)) complies = operator.ge if isinstance(required, type): required = interface elif not required: required = set() complies = operator.gt else: required = set(required) if complies(implemented.intersection(interface), required): return obj # No dict duck typing here. if not type(obj) is dict: qualifier = complies is operator.gt and 'any of' or 'all of' raise TypeError("%r does not implement %s: %s" % ( obj, qualifier, ', '.join(interface))) class AnonymousInterface(object): """A callable-holding shell.""" if cls: AnonymousInterface.__name__ = 'Anonymous' + cls.__name__ found = set() for method, impl in dictlike_iteritems(obj): if method not in interface: raise TypeError("%r: unknown in this interface" % method) if not callable(impl): raise TypeError("%r=%r is not callable" % (method, impl)) setattr(AnonymousInterface, method, staticmethod(impl)) found.add(method) if complies(found, required): return AnonymousInterface raise TypeError("dictionary does not contain required keys %s" % ', '.join(required - found)) # Copyright (c) 2009 Raymond Hettinger # # Permission is hereby granted, free of charge, to any person # obtaining a copy of this software and associated documentation files # (the "Software"), to deal in the Software without restriction, # including without limitation the rights to use, copy, modify, merge, # publish, distribute, sublicense, and/or sell copies of the Software, # and to permit persons to whom the Software is furnished to do so, # subject to the following conditions: # # The above copyright notice and this permission notice shall be # included in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES # OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND # NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT # HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR # OTHER DEALINGS IN THE SOFTWARE. from UserDict import DictMixin class OrderedDict(dict, DictMixin): """ A dictionary which maintains the insertion order of keys. """ def __init__(self, *args, **kwds): """ A dictionary which maintains the insertion order of keys. """ if len(args) > 1: raise TypeError('expected at most 1 arguments, got %d' % len(args)) try: self.__end except AttributeError: self.clear() self.update(*args, **kwds) def clear(self): self.__end = end = [] end += [None, end, end] # sentinel node for doubly linked list self.__map = {} # key --> [key, prev, next] dict.clear(self) def __setitem__(self, key, value): if key not in self: end = self.__end curr = end[1] curr[2] = end[1] = self.__map[key] = [key, curr, end] dict.__setitem__(self, key, value) def __delitem__(self, key): dict.__delitem__(self, key) key, prev, next = self.__map.pop(key) prev[2] = next next[1] = prev def __iter__(self): end = self.__end curr = end[2] while curr is not end: yield curr[0] curr = curr[2] def __reversed__(self): end = self.__end curr = end[1] while curr is not end: yield curr[0] curr = curr[1] def popitem(self, last=True): if not self: raise KeyError('dictionary is empty') if last: key = reversed(self).next() else: key = iter(self).next() value = self.pop(key) return key, value def __reduce__(self): items = [[k, self[k]] for k in self] tmp = self.__map, self.__end del self.__map, self.__end inst_dict = vars(self).copy() self.__map, self.__end = tmp if inst_dict: return (self.__class__, (items,), inst_dict) return self.__class__, (items,) def keys(self): return list(self) setdefault = DictMixin.setdefault update = DictMixin.update pop = DictMixin.pop values = DictMixin.values items = DictMixin.items iterkeys = DictMixin.iterkeys itervalues = DictMixin.itervalues iteritems = DictMixin.iteritems def __repr__(self): if not self: return '%s()' % (self.__class__.__name__,) return '%s(%r)' % (self.__class__.__name__, self.items()) def copy(self): return self.__class__(self) @classmethod def fromkeys(cls, iterable, value=None): d = cls() for key in iterable: d[key] = value return d def __eq__(self, other): if isinstance(other, OrderedDict): if len(self) != len(other): return False for p, q in zip(self.items(), other.items()): if p != q: return False return True return dict.__eq__(self, other) def __ne__(self, other): return not self == other pycassa-1.11.2.1/pycassaShell000077500000000000000000000253721303744607500160010ustar00rootroot00000000000000#!/usr/bin/env python """ interactive Cassandra Python shell """ #try: # from IPython.Shell import IPShellEmbed #except ImportError: # print "[I]: IPython not found, falling back to default interpreter." try: import IPython if hasattr(IPython, "embed"): def runshell(): IPython.embed(banner1="") else: import IPython.Shell def runshell(): IPython.Shell.IPShellEmbed([])() except ImportError: print "[I]: IPython not found, falling back to default interpreter." def runshell(): import os os.environ['PYTHONINSPECT'] = '1' import pycassa, optparse from pycassa.system_manager import * from sys import stdout, stderr, exit def _print_line(label, value): spaces = " " * (35 - len(label + ':')) print "%s:%s%s" % (label, spaces, str(value)) def _make_line(label, value): spaces = " " * (35 - len(label + ':')) return "%s:%s%s\n" % (label, spaces, str(value)) def describe_keyspace(keyspace): try: ks = SYSTEM_MANAGER.get_keyspace_properties(keyspace) except pycassa.NotFoundException: print "\nKeyspace %s does not exist." % keyspace print _print_line("Name", keyspace) print s = ks['replication_strategy'] _print_line('Replication Strategy', s[s.rfind('.') + 1: ]) print if ks['strategy_options']: _print_line("Strategy Options", ks['strategy_options']) cfs = SYSTEM_MANAGER.get_keyspace_column_families(keyspace).keys() cfs.sort() print "Column Families:" for cf in cfs: print " " + cf def describe_column_family(arg1, arg2=None): """ Prints a description of a column family. A keyspace and column family name may be passed in, or a :class:`~pycassa.columnfamil.ColumnFamily` object may be passed in. """ if arg2 is None: # This is a ColumnFamily object column_family = arg1.column_family keyspace = arg1.pool.keyspace else: keyspace, column_family = (arg1, arg2) try: cfdef = SYSTEM_MANAGER.get_keyspace_column_families(keyspace)[column_family] except KeyError: print "Column family %s does not exist in keyspace %s" % (column_family, keyspace) print _print_line('Name', cfdef.name) _print_line('Description', cfdef.comment) _print_line('Column Type', cfdef.column_type) print s = cfdef.comparator_type _print_line('Comparator Type', s[s.rfind('.') + 1:]) if cfdef.column_type == 'Super': s = cfdef.subcomparator_type _print_line('Subcomparator Type', s[s.rfind('.') + 1:]) s = cfdef.default_validation_class _print_line('Default Validation Class', s[s.rfind('.') + 1:]) print print "Cache Sizes" if cfdef.row_cache_size == 0: s = 'Disabled' elif cfdef.row_cache_size >= 1: s = str(int(cfdef.row_cache_size)) + " rows" else: s = str(cfdef.key_cache_size) + "%" _print_line(" Row Cache", s) if cfdef.key_cache_size == 0: s = 'Disabled' elif cfdef.key_cache_size >= 1: s = str(int(cfdef.key_cache_size)) + " keys" else: s = str(cfdef.key_cache_size) + "%" _print_line(" Key Cache", s) print if cfdef.read_repair_chance == 0: s = 'Disabled' else: s = str(cfdef.read_repair_chance * 100) + '%' _print_line("Read Repair Chance", s) print _print_line("GC Grace Seconds", cfdef.gc_grace_seconds) print compact_disabled = cfdef.min_compaction_threshold == 0 or cfdef.max_compaction_threshold == 0 print "Compaction Thresholds" if compact_disabled: _print_line(" Min", "Minor Compactions Disabled") else: _print_line(" Min", cfdef.min_compaction_threshold) if compact_disabled: _print_line(" Max", "Minor Compactions Disabled") else: _print_line(" Max", cfdef.max_compaction_threshold) print if hasattr(cfdef, 'memtable_throughput_in_mb'): print 'Memtable Flush After Thresholds' _print_line(" Throughput", str(cfdef.memtable_throughput_in_mb) + " MiB") s = str(int(cfdef.memtable_operations_in_millions * 1000000)) _print_line(" Operations", s + " operations") _print_line(" Time", str(cfdef.memtable_flush_after_mins) + " minutes") print if getattr(cfdef, 'row_cache_save_period_in_seconds', None) is not None: print "Cache Save Periods" if cfdef.row_cache_save_period_in_seconds == 0: s = 'Disabled' else: s = str(cfdef.row_cache_save_period_in_seconds) + ' seconds' _print_line(" Row Cache", s) if getattr(cfdef, 'key_cache_save_period_in_seconds', None) is not None: if cfdef.key_cache_save_period_in_seconds == 0: s = 'Disabled' else: s = str(cfdef.key_cache_save_period_in_seconds) + ' seconds' _print_line(" Key Cache", s) if cfdef.column_metadata: print "\nColumn Metadata" for coldef in cfdef.column_metadata: print _print_line(" - Name", coldef.name) s = coldef.validation_class _print_line(" Value Type", s[s.rfind('.') + 1: ]) if coldef.index_type is not None: s = IndexType._VALUES_TO_NAMES[coldef.index_type] _print_line(" Index Type", s[s.rfind('.') + 1: ]) _print_line(" Index Name", coldef.index_name) _pool = None def _update_cf(ks, cf, delete=False): if ks == options.keyspace and _pool is not None: if not delete: existed = cf.upper() in globals() globals()[cf.upper()] = pycassa.ColumnFamily(_pool, cf) if not existed: print "\nLoaded %s as %s" % (cf, cf.upper()) else: print "\nReloaded %s" % cf.upper() else: globals().pop(cf.upper()) print "\nDropped %s" % cf.upper() class InteractiveSystemManager(pycassa.SystemManager): """ Allows ColumnFamily instances to be passed directly to SystemManager methods instead of specifying a keyspace and column family as strings. """ def create_column_family(self, keyspace, name, *args, **kwargs): super(InteractiveSystemManager, self).create_column_family(keyspace, name, *args, **kwargs) _update_cf(keyspace, name) def alter_column_family(self, arg1, arg2, *args, **kwargs): if isinstance(arg1, pycassa.ColumnFamily): keyspace = arg1.pool.keyspace column_family = arg1.column_family super(InteractiveSystemManager, self).alter_column_family(keyspace, column_family, arg2, *args, **kwargs) _update_cf(keyspace, name) else: super(InteractiveSystemManager, self).alter_column_family(arg1, arg2, *args, **kwargs) _update_cf(arg1, arg2) def drop_column_family(self, arg1, arg2=None): if isinstance(arg1, pycassa.ColumnFamily): keyspace = arg1.pool.keyspace column_family = arg1.column_family else: keyspace = arg1 column_family = arg2 super(InteractiveSystemManager, self).drop_column_family(keyspace, column_family) _update_cf(keyspace, column_family, delete=True) def alter_column(self, arg1, arg2, *args, **kwargs): if isinstance(arg1, pycassa.ColumnFamily): keyspace = arg1.pool.keyspace column_family = arg1.column_family super(InteractiveSystemManager, self).alter_column(keyspace, column_family, arg2, *args, **kwargs) _update_cf(keyspace, column_family) else: super(InteractiveSystemManager, self).alter_column(arg1, arg2, *args, **kwargs) _update_cf(arg1, arg2) def create_index(self, arg1, arg2, *args, **kwargs): if isinstance(arg1, pycassa.ColumnFamily): keyspace = arg1.pool.keyspace column_family = arg1.column_family super(InteractiveSystemManager, self).create_index(keyspace, column_family, arg2, *args, **kwargs) _update_cf(keyspace, column_family) else: super(InteractiveSystemManager, self).create_index(arg1, arg2, *args, **kwargs) _update_cf(arg1, arg2) parser = optparse.OptionParser(usage='Usage: %prog [OPTIONS]') parser.add_option('-k', '--keyspace', help='Cassandra keyspace name.') parser.add_option('-H', '--host', help='Hostname.') parser.add_option('-p', '--port', type="int", help='Thrift port number.') parser.add_option('-u', '--user', help='Username (for simple auth).') parser.add_option('-P', '--passwd', help='Password (for simple auth).') parser.add_option('-S', '--streaming', help='Using streaming transport.', action="store_false", dest='framed') parser.add_option('-F', '--framed', help='Use framed transport. Default transport.', action="store_true", dest='framed') parser.add_option('-f', '--file', help='Run a script after startup') (options, args) = parser.parse_args() hostname = options.host and options.host or 'localhost' port = options.port and options.port or 9160 framed = True if options.framed is None else options.framed credentials = None if options.user or options.passwd: if options.user and (not options.passwd): print >>stderr, "You must supply a password for username", options.user exit(1) if options.passwd and (not options.user): print >>stderr, "You need a user to go with that password!" exit(1) credentials = {'username': options.user, 'password': options.passwd} SYSTEM_MANAGER = InteractiveSystemManager('%s:%d' % (hostname, port), credentials, framed) print "----------------------------------" print "Cassandra Interactive Python Shell" print "----------------------------------" print "Keyspace: %s" % options.keyspace print "Host: %s:%d" % (hostname, port) if options.keyspace: _pool = pycassa.QueuePool(keyspace=options.keyspace, server_list=['%s:%d' % (hostname, port)], credentials=credentials, framed_transport=framed, timeout=5) print "\nAvailable ColumnFamily instances:" for cfname in sorted(SYSTEM_MANAGER.get_keyspace_column_families(options.keyspace).keys()): cfinstance = pycassa.ColumnFamily(_pool, cfname) exec('%s = cfinstance' % cfname.upper()) spaces = " " * (25 - len(cfname)) print " *", cfname.upper(), spaces, "(", cfname, ")" else: print "\nColumnFamily instances are only available if a keyspace is specified with -k/--keyspace" print "\nSchema definition tools and cluster information are available through SYSTEM_MANAGER." if (options.file): print "\nExecuting script ...", execfile(options.file) print " done." runshell() pycassa-1.11.2.1/rpm-install-script.sh000066400000000000000000000005011303744607500175030ustar00rootroot00000000000000# Add .pyo files generated by CentOS. # This is a workaround until someone comes with a better fix, like avoiding # /usr/lib/rpm/brp-python-bytecompile to run or convincing people that CentOS is pure evil. python setup.py install -O1 --single-version-externally-managed --root="$RPM_BUILD_ROOT" --record=INSTALLED_FILES pycassa-1.11.2.1/setup.py000066400000000000000000000074051303744607500151270ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- import os try: import subprocess has_subprocess = True except: has_subprocess = False try: from ez_setup import use_setuptools use_setuptools() except ImportError: pass try: from setuptools import setup except ImportError: from distutils.core import setup from distutils.cmd import Command __version__ = "1.11.2" long_description = """pycassa is a python client library for Apache Cassandra with the following features: 1. Auto-failover single or thread-local connections 2. Connection pooling 3. A batch interface 4. Simplified version of the Thrift interface 5. A method to map an existing class to a Cassandra column family """ class rpm(Command): description = "builds a RPM package" user_options = [] def initialize_options(self): pass def finalize_options(self): pass def run(self): if has_subprocess: status = subprocess.call(["python", "setup.py", "bdist_rpm", "--install-script", "rpm-install-script.sh"]) if status: raise RuntimeError("RPM build failed") print "" print "RPM built" else: print """ `setup.py rpm` is not supported for this version of Python. Please ask in the user forums for help. """ class doc(Command): description = "generate or test documentation" user_options = [("test", "t", "run doctests instead of generating documentation")] boolean_options = ["test"] def initialize_options(self): self.test = False def finalize_options(self): pass def run(self): if self.test: path = "doc/_build/doctest" mode = "doctest" else: path = "doc/_build/%s" % __version__ mode = "html" try: os.makedirs(path) except: pass if has_subprocess: status = subprocess.call(["sphinx-build", "-b", mode, "doc", path]) if status: raise RuntimeError("documentation step '%s' failed" % mode) print "" print "Documentation step '%s' performed, results here:" % mode print " %s/" % path else: print """ `setup.py doc` is not supported for this version of Python. Please ask in the user forums for help. """ setup( name = 'pycassa', version = __version__, author = 'Jonathan Hseu', author_email = 'vomjom AT vomjom.net', maintainer = 'Tyler Hobbs', maintainer_email = 'pycassa.maintainer@gmail.com', description = 'Python client library for Apache Cassandra', long_description = long_description, url = 'http://github.com/pycassa/pycassa', keywords = ['pycassa', 'cassandra', 'client', 'driver', 'db', 'distributed', 'thrift'], packages = ['pycassa', 'pycassa.cassandra', 'pycassa.logging', 'pycassa.contrib'], tests_require = ['nose'], install_requires = ['thrift==0.9.3'], py_modules=['ez_setup'], scripts=['pycassaShell'], cmdclass={"doc": doc, "rpm": rpm}, classifiers=[ 'Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'License :: OSI Approved :: MIT License', 'Natural Language :: English', 'Operating System :: OS Independent', 'Programming Language :: Python', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 2.6', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 2 :: Only', 'Topic :: Software Development :: Libraries :: Python Modules' ] ) pycassa-1.11.2.1/tests/000077500000000000000000000000001303744607500145515ustar00rootroot00000000000000pycassa-1.11.2.1/tests/README000066400000000000000000000006171303744607500154350ustar00rootroot00000000000000To run the tests: 1. Install python-nose (easy_install nose) 2. Change Cassandra's cassandra.yaml to use ByteOrderedPartitioner 3. If you want to test authN/authZ, use SimpleAuthenticator and SimpleAuthority in cassandra.yaml and start Cassandra with: bin/cassandra -f -Dpasswd.properties=conf/passwd.properties -Daccess.properties=conf/access.properties 4. Run nosetests in the top directory pycassa-1.11.2.1/tests/__init__.py000066400000000000000000000021451303744607500166640ustar00rootroot00000000000000from pycassa.system_manager import SystemManager TEST_KS = 'PycassaTestKeyspace' def setup_package(): sys = SystemManager() if TEST_KS in sys.list_keyspaces(): sys.drop_keyspace(TEST_KS) try: sys.create_keyspace(TEST_KS, 'SimpleStrategy', {'replication_factor': '1'}) sys.create_column_family(TEST_KS, 'Standard1') sys.create_column_family(TEST_KS, 'Super1', super=True) sys.create_column_family(TEST_KS, 'Indexed1') sys.create_index(TEST_KS, 'Indexed1', 'birthdate', 'LongType') sys.create_column_family(TEST_KS, 'Counter1', default_validation_class='CounterColumnType') sys.create_column_family(TEST_KS, 'SuperCounter1', super=True, default_validation_class='CounterColumnType') except Exception, e: print e try: sys.drop_keyspace(TEST_KS) except: pass raise e sys.close() def teardown_package(): sys = SystemManager() if TEST_KS in sys.list_keyspaces(): sys.drop_keyspace(TEST_KS) sys.close() pycassa-1.11.2.1/tests/contrib/000077500000000000000000000000001303744607500162115ustar00rootroot00000000000000pycassa-1.11.2.1/tests/contrib/__init__.py000066400000000000000000000000001303744607500203100ustar00rootroot00000000000000pycassa-1.11.2.1/tests/contrib/stubs.py000066400000000000000000000232131303744607500177240ustar00rootroot00000000000000import unittest import time from nose.tools import assert_raises, assert_equal, assert_true from pycassa import index, ColumnFamily, ConnectionPool,\ NotFoundException from pycassa.contrib.stubs import ColumnFamilyStub, ConnectionPoolStub from pycassa.util import convert_time_to_uuid pool = cf = indexed_cf = None pool_stub = cf_stub = indexed_cf_stub = None def setup_module(): global pool, cf, indexed_cf, pool_stub, indexed_cf_stub, cf_stub credentials = {'username': 'jsmith', 'password': 'havebadpass'} pool = ConnectionPool(keyspace='PycassaTestKeyspace', credentials=credentials, timeout=1.0) cf = ColumnFamily(pool, 'Standard1', dict_class=TestDict) indexed_cf = ColumnFamily(pool, 'Indexed1') pool_stub = ConnectionPoolStub(keyspace='PycassaTestKeyspace', credentials=credentials, timeout=1.0) cf_stub = ColumnFamilyStub(pool_stub, 'Standard1', dict_class=TestDict) indexed_cf_stub = ColumnFamilyStub(pool_stub, 'Indexed1') def teardown_module(): cf.truncate() cf_stub.truncate() indexed_cf.truncate() indexed_cf_stub.truncate() pool.dispose() class TestDict(dict): pass class TestColumnFamilyStub(unittest.TestCase): def setUp(self): pass def tearDown(self): for test_cf in (cf, cf_stub): for key, columns in test_cf.get_range(): test_cf.remove(key) def test_empty(self): key = 'TestColumnFamily.test_empty' for test_cf in (cf, cf_stub): assert_raises(NotFoundException, test_cf.get, key) assert_equal(len(test_cf.multiget([key])), 0) for key, columns in test_cf.get_range(): assert_equal(len(columns), 0) def test_insert_get(self): key = 'TestColumnFamily.test_insert_get' columns = {'1': 'val1', '2': 'val2'} for test_cf in (cf, cf_stub): assert_raises(NotFoundException, test_cf.get, key) ts = test_cf.insert(key, columns) assert_true(isinstance(ts, (int, long))) assert_equal(test_cf.get(key), columns) def test_insert_get_column_start_and_finish_reversed(self): key = 'TestColumnFamily.test_insert_get_reversed' columns = {'1': 'val1', '2': 'val2'} for test_cf in (cf, cf_stub): assert_raises(NotFoundException, test_cf.get, key) ts = test_cf.insert(key, columns) assert_true(isinstance(ts, (int, long))) test_cf.get(key, column_reversed=True) def test_insert_get_column_start_and_finish(self): key = 'TestColumnFamily.test_insert_get_column_start_and_finish' columns = {'a': 'val1', 'b': 'val2', 'c': 'val3', 'd': 'val4'} for test_cf in (cf, cf_stub): assert_raises(NotFoundException, test_cf.get, key) ts = test_cf.insert(key, columns) assert_true(isinstance(ts, (int, long))) assert_equal(test_cf.get(key, column_start='b', column_finish='c'), {'b': 'val2', 'c': 'val3'}) def test_insert_get_column_start_and_reversed(self): key = 'TestColumnFamily.test_insert_get_column_start_and_finish_reversed' columns = {'a': 'val1', 'b': 'val2', 'c': 'val3', 'd': 'val4'} for test_cf in (cf, cf_stub): assert_raises(NotFoundException, test_cf.get, key) ts = test_cf.insert(key, columns) assert_true(isinstance(ts, (int, long))) assert_equal(test_cf.get(key, column_start='b', column_reversed=True), {'b': 'val2', 'a': 'val1'}) def test_insert_get_column_count(self): key = 'TestColumnFamily.test_insert_get_column_count' columns = {'a': 'val1', 'b': 'val2', 'c': 'val3', 'd': 'val4'} for test_cf in (cf, cf_stub): assert_raises(NotFoundException, test_cf.get, key) ts = test_cf.insert(key, columns) assert_true(isinstance(ts, (int, long))) assert_equal(test_cf.get(key, column_count=3), {'a': 'val1', 'b': 'val2', 'c': 'val3'}) def test_insert_get_default_column_count(self): keys = [str(i) for i in range(1000)] keys.sort() keys_and_values = [(key, key) for key in keys] key = 'TestColumnFamily.test_insert_get_default_column_count' for test_cf in (cf, cf_stub): assert_raises(NotFoundException, test_cf.get, key) test_cf.insert(key, dict(key_value for key_value in keys_and_values)) assert_equal(test_cf.get(key), dict([key_value for key_value in keys_and_values][:100])) def test_insert_multiget(self): key1 = 'TestColumnFamily.test_insert_multiget1' columns1 = {'1': 'val1', '2': 'val2'} key2 = 'test_insert_multiget1' columns2 = {'3': 'val1', '4': 'val2'} missing_key = 'key3' for test_cf in (cf, cf_stub): test_cf.insert(key1, columns1) test_cf.insert(key2, columns2) rows = test_cf.multiget([key1, key2, missing_key]) assert_equal(len(rows), 2) assert_equal(rows[key1], columns1) assert_equal(rows[key2], columns2) assert_true(missing_key not in rows) def test_insert_multiget_column_start_and_finish(self): key1 = 'TestColumnFamily.test_insert_multiget_column_start_and_finish1' columns1 = {'1': 'val1', '2': 'val2'} key2 = 'TestColumnFamily.test_insert_multiget_column_start_and_finish2' columns2 = {'3': 'val1', '4': 'val2'} missing_key = 'key3' for test_cf in (cf, cf_stub): test_cf.insert(key1, columns1) test_cf.insert(key2, columns2) rows = test_cf.multiget([key1, key2, missing_key], column_start='2', column_finish='3') assert_equal(len(rows), 2) assert_equal(rows[key1], {'2': 'val2'}) assert_equal(rows[key2], {'3': 'val1'}) assert_true(missing_key not in rows) def test_insert_multiget_column_finish_and_reversed(self): key1 = 'TestColumnFamily.test_insert_multiget_column_finish_and_reversed1' columns1 = {'1': 'val1', '3': 'val2'} key2 = 'TestColumnFamily.test_insert_multiget_column_finish_and_reversed2' columns2 = {'5': 'val1', '7': 'val2'} missing_key = 'key3' for test_cf in (cf, cf_stub): test_cf.insert(key1, columns1) test_cf.insert(key2, columns2) rows = test_cf.multiget([key1, key2, missing_key], column_finish='3', column_reversed=True) assert_equal(len(rows), 2) assert_equal(rows[key1], {'3': 'val2'}) assert_equal(rows[key2], {'5': 'val1', '7': 'val2'}) assert_true(missing_key not in rows) def test_insert_multiget_column_start_column_count(self): key1 = 'TestColumnFamily.test_insert_multiget_column_start_column_count' columns1 = {'1': 'val1', '2': 'val2'} key2 = 'test_insert_multiget1' columns2 = {'3': 'val1', '4': 'val2'} missing_key = 'key3' for test_cf in (cf, cf_stub): test_cf.insert(key1, columns1) test_cf.insert(key2, columns2) rows = test_cf.multiget([key1, key2, missing_key], column_count=1, column_start='2') assert_equal(len(rows), 2) assert_equal(rows[key1], {'2': 'val2'}) assert_equal(rows[key2], {'3': 'val1'}) assert_true(missing_key not in rows) def test_insert_multiget_default_column_count(self): keys = [str(i) for i in range(1000)] keys.sort() keys_and_values = [(key, key) for key in keys] key = 'TestColumnFamily.test_insert_multiget_default_column_count' for test_cf in (cf, cf_stub): test_cf.insert(key, dict(key_value for key_value in keys_and_values)) rows = test_cf.multiget([key]) assert_equal(len(rows), 1) assert_equal(rows[key], dict([key_value for key_value in keys_and_values][:100])) def insert_insert_get_indexed_slices(self): columns = {'birthdate': 1L} keys = set() for i in range(1, 4): indexed_cf.insert('key%d' % i, columns) indexed_cf_stub.insert('key%d' % i, columns) keys.add('key%d' % i) expr = index.create_index_expression(column_name='birthdate', value=1L) clause = index.create_index_clause([expr]) for test_indexed_cf in (indexed_cf, indexed_cf_stub): count = 0 for key, cols in test_indexed_cf.get_indexed_slices(clause): assert_equal(cols, columns) assert key in keys count += 1 assert_equal(count, 3) def test_remove(self): key = 'TestColumnFamily.test_remove' for test_cf in (cf, cf_stub): columns = {'1': 'val1', '2': 'val2'} test_cf.insert(key, columns) # An empty list for columns shouldn't delete anything test_cf.remove(key, columns=[]) assert_equal(test_cf.get(key), columns) test_cf.remove(key, columns=['2']) del columns['2'] assert_equal(test_cf.get(key), {'1': 'val1'}) test_cf.remove(key) assert_raises(NotFoundException, test_cf.get, key) def test_insert_get_tuuids(self): key = 'TestColumnFamily.test_insert_get' columns = ((convert_time_to_uuid(time.time() - 1000, randomize=True), 'val1'), (convert_time_to_uuid(time.time(), randomize=True), 'val2')) for test_cf in (cf, cf_stub): assert_raises(NotFoundException, test_cf.get, key) ts = test_cf.insert(key, dict(columns)) assert_true(isinstance(ts, (int, long))) assert_equal(test_cf.get(key).keys(), [x[0] for x in columns]) pycassa-1.11.2.1/tests/test_autopacking.py000066400000000000000000001562531303744607500205030ustar00rootroot00000000000000from pycassa import NotFoundException from pycassa.pool import ConnectionPool from pycassa.columnfamily import ColumnFamily from pycassa.util import OrderedDict, convert_uuid_to_time from pycassa.system_manager import SystemManager from pycassa.types import (LongType, IntegerType, TimeUUIDType, LexicalUUIDType, AsciiType, UTF8Type, BytesType, CompositeType, OldPycassaDateType, IntermediateDateType, DateType, BooleanType, CassandraType, DecimalType, FloatType, Int32Type, UUIDType, DoubleType, DynamicCompositeType) from pycassa.index import create_index_expression, create_index_clause import pycassa.marshal as marshal from nose import SkipTest from nose.tools import (assert_raises, assert_equal, assert_almost_equal, assert_true) from datetime import date, datetime from uuid import uuid1 from decimal import Decimal import uuid import unittest import time from collections import namedtuple TIME1 = uuid.UUID(hex='ddc6118e-a003-11df-8abf-00234d21610a') TIME2 = uuid.UUID(hex='40ad6d4c-a004-11df-8abf-00234d21610a') TIME3 = uuid.UUID(hex='dc3d5234-a00b-11df-8abf-00234d21610a') VALS = ['val1', 'val2', 'val3'] KEYS = ['key1', 'key2', 'key3'] pool = None TEST_KS = 'PycassaTestKeyspace' def setup_module(): global pool credentials = {'username': 'jsmith', 'password': 'havebadpass'} pool = ConnectionPool(TEST_KS, pool_size=10, credentials=credentials, timeout=1.0) def teardown_module(): pool.dispose() class TestCFs(unittest.TestCase): @classmethod def setup_class(cls): sys = SystemManager() sys.create_column_family(TEST_KS, 'StdLong', comparator_type=LongType()) sys.create_column_family(TEST_KS, 'StdInteger', comparator_type=IntegerType()) sys.create_column_family(TEST_KS, 'StdBigInteger', comparator_type=IntegerType()) sys.create_column_family(TEST_KS, 'StdDecimal', comparator_type=DecimalType()) sys.create_column_family(TEST_KS, 'StdTimeUUID', comparator_type=TimeUUIDType()) sys.create_column_family(TEST_KS, 'StdLexicalUUID', comparator_type=LexicalUUIDType()) sys.create_column_family(TEST_KS, 'StdAscii', comparator_type=AsciiType()) sys.create_column_family(TEST_KS, 'StdUTF8', comparator_type=UTF8Type()) sys.create_column_family(TEST_KS, 'StdBytes', comparator_type=BytesType()) sys.create_column_family(TEST_KS, 'StdComposite', comparator_type=CompositeType(LongType(), BytesType())) sys.create_column_family(TEST_KS, 'StdDynamicComposite', comparator_type=DynamicCompositeType({'a': AsciiType(), 'b': BytesType(), 'c': DecimalType(), 'd': DateType(), 'f': FloatType(), 'i': IntegerType(), 'l': LongType(), 'n': Int32Type(), 's': UTF8Type(), 't': TimeUUIDType(), 'u': UUIDType(), 'w': DoubleType(), 'x': LexicalUUIDType(), 'y': BooleanType()})) sys.close() cls.cf_long = ColumnFamily(pool, 'StdLong') cls.cf_int = ColumnFamily(pool, 'StdInteger') cls.cf_big_int = ColumnFamily(pool, 'StdBigInteger') cls.cf_decimal = ColumnFamily(pool, 'StdDecimal') cls.cf_time = ColumnFamily(pool, 'StdTimeUUID') cls.cf_lex = ColumnFamily(pool, 'StdLexicalUUID') cls.cf_ascii = ColumnFamily(pool, 'StdAscii') cls.cf_utf8 = ColumnFamily(pool, 'StdUTF8') cls.cf_bytes = ColumnFamily(pool, 'StdBytes') cls.cf_composite = ColumnFamily(pool, 'StdComposite') cls.cf_dynamic_composite = ColumnFamily(pool, 'StdDynamicComposite') cls.cfs = [cls.cf_long, cls.cf_int, cls.cf_time, cls.cf_lex, cls.cf_ascii, cls.cf_utf8, cls.cf_bytes, cls.cf_composite, cls.cf_dynamic_composite] def tearDown(self): for cf in TestCFs.cfs: for key, cols in cf.get_range(): cf.remove(key) def make_group(self, cf, cols): diction = OrderedDict([(cols[0], VALS[0]), (cols[1], VALS[1]), (cols[2], VALS[2])]) return {'cf': cf, 'cols': cols, 'dict': diction} def test_standard_column_family(self): # For each data type, create a group that includes its column family, # a set of column names, and a dictionary that maps from the column # names to values. type_groups = [] long_cols = [1111111111111111L, 2222222222222222L, 3333333333333333L] type_groups.append(self.make_group(TestCFs.cf_long, long_cols)) int_cols = [1, 2, 3] type_groups.append(self.make_group(TestCFs.cf_int, int_cols)) big_int_cols = [1 + int(time.time() * 10 ** 6), 2 + int(time.time() * 10 ** 6), 3 + int(time.time() * 10 ** 6)] type_groups.append(self.make_group(TestCFs.cf_big_int, big_int_cols)) decimal_cols = [Decimal('1.123456789123456789'), Decimal('2.123456789123456789'), Decimal('3.123456789123456789')] type_groups.append(self.make_group(TestCFs.cf_decimal, decimal_cols)) time_cols = [TIME1, TIME2, TIME3] type_groups.append(self.make_group(TestCFs.cf_time, time_cols)) lex_cols = [uuid.UUID(bytes='aaa aaa aaa aaaa'), uuid.UUID(bytes='bbb bbb bbb bbbb'), uuid.UUID(bytes='ccc ccc ccc cccc')] type_groups.append(self.make_group(TestCFs.cf_lex, lex_cols)) ascii_cols = ['aaaa', 'bbbb', 'cccc'] type_groups.append(self.make_group(TestCFs.cf_ascii, ascii_cols)) utf8_cols = [u'a\u0020', u'b\u0020', u'c\u0020'] type_groups.append(self.make_group(TestCFs.cf_utf8, utf8_cols)) bytes_cols = ['aaaa', 'bbbb', 'cccc'] type_groups.append(self.make_group(TestCFs.cf_bytes, bytes_cols)) composite_cols = [(1, 'foo'), (2, 'bar'), (3, 'baz')] type_groups.append(self.make_group(TestCFs.cf_composite, composite_cols)) dynamic_composite_cols = [(('LongType', 1), ('BytesType', 'foo')), (('LongType', 2), ('BytesType', 'bar')), (('LongType', 3), ('BytesType', 'baz'))] type_groups.append(self.make_group(TestCFs.cf_dynamic_composite, dynamic_composite_cols)) dynamic_composite_alias_cols = [(('l', 1), ('b', 'foo')), (('l', 2), ('b', 'bar')), (('l', 3), ('b', 'baz'))] type_groups.append(self.make_group(TestCFs.cf_dynamic_composite, dynamic_composite_alias_cols)) # Begin the actual inserting and getting for group in type_groups: cf = group.get('cf') gdict = group.get('dict') gcols = group.get('cols') cf.insert(KEYS[0], gdict) assert_equal(cf.get(KEYS[0]), gdict) # Check each column individually for i in range(3): assert_equal(cf.get(KEYS[0], columns=[gcols[i]]), {gcols[i]: VALS[i]}) # Check that if we list all columns, we get the full dict assert_equal(cf.get(KEYS[0], columns=gcols[:]), gdict) # The same thing with a start and end instead assert_equal(cf.get(KEYS[0], column_start=gcols[0], column_finish=gcols[2]), gdict) # A start and end that are the same assert_equal(cf.get(KEYS[0], column_start=gcols[0], column_finish=gcols[0]), {gcols[0]: VALS[0]}) assert_equal(cf.get_count(KEYS[0]), 3) # Test xget paging assert_equal(list(cf.xget(KEYS[0], buffer_size=2)), gdict.items()) assert_equal(list(cf.xget(KEYS[0], column_reversed=True, buffer_size=2)), list(reversed(gdict.items()))) assert_equal(list(cf.xget(KEYS[0], column_start=gcols[0], buffer_size=2)), gdict.items()) assert_equal(list(cf.xget(KEYS[0], column_finish=gcols[2], buffer_size=2)), gdict.items()) assert_equal(list(cf.xget(KEYS[0], column_start=gcols[2], column_finish=gcols[0], column_reversed=True, buffer_size=2)), list(reversed(gdict.items()))) assert_equal(list(cf.xget(KEYS[0], column_start=gcols[1], column_finish=gcols[1], column_reversed=True, buffer_size=2)), [(gcols[1], VALS[1])]) # Test removing rows cf.remove(KEYS[0], columns=gcols[:1]) assert_equal(cf.get_count(KEYS[0]), 2) cf.remove(KEYS[0], columns=gcols[1:]) assert_equal(cf.get_count(KEYS[0]), 0) # Insert more than one row now cf.insert(KEYS[0], gdict) cf.insert(KEYS[1], gdict) cf.insert(KEYS[2], gdict) ### multiget() tests ### res = cf.multiget(KEYS[:]) for i in range(3): assert_equal(res.get(KEYS[i]), gdict) res = cf.multiget(KEYS[2:]) assert_equal(res.get(KEYS[2]), gdict) # Check each column individually for i in range(3): res = cf.multiget(KEYS[:], columns=[gcols[i]]) for j in range(3): assert_equal(res.get(KEYS[j]), {gcols[i]: VALS[i]}) # Check that if we list all columns, we get the full dict res = cf.multiget(KEYS[:], columns=gcols[:]) for j in range(3): assert_equal(res.get(KEYS[j]), gdict) # The same thing with a start and end instead res = cf.multiget(KEYS[:], column_start=gcols[0], column_finish=gcols[2]) for j in range(3): assert_equal(res.get(KEYS[j]), gdict) # A start and end that are the same res = cf.multiget(KEYS[:], column_start=gcols[0], column_finish=gcols[0]) for j in range(3): assert_equal(res.get(KEYS[j]), {gcols[0]: VALS[0]}) ### get_range() tests ### res = cf.get_range(start=KEYS[0]) for sub_res in res: assert_equal(sub_res[1], gdict) res = cf.get_range(start=KEYS[0], column_start=gcols[0], column_finish=gcols[2]) for sub_res in res: assert_equal(sub_res[1], gdict) res = cf.get_range(start=KEYS[0], columns=gcols[:]) for sub_res in res: assert_equal(sub_res[1], gdict) class TestSuperCFs(unittest.TestCase): @classmethod def setup_class(cls): sys = SystemManager() sys.create_column_family(TEST_KS, 'SuperLong', super=True, comparator_type=LongType()) sys.create_column_family(TEST_KS, 'SuperInt', super=True, comparator_type=IntegerType()) sys.create_column_family(TEST_KS, 'SuperBigInt', super=True, comparator_type=IntegerType()) sys.create_column_family(TEST_KS, 'SuperTime', super=True, comparator_type=TimeUUIDType()) sys.create_column_family(TEST_KS, 'SuperLex', super=True, comparator_type=LexicalUUIDType()) sys.create_column_family(TEST_KS, 'SuperAscii', super=True, comparator_type=AsciiType()) sys.create_column_family(TEST_KS, 'SuperUTF8', super=True, comparator_type=UTF8Type()) sys.create_column_family(TEST_KS, 'SuperBytes', super=True, comparator_type=BytesType()) sys.close() cls.cf_suplong = ColumnFamily(pool, 'SuperLong') cls.cf_supint = ColumnFamily(pool, 'SuperInt') cls.cf_supbigint = ColumnFamily(pool, 'SuperBigInt') cls.cf_suptime = ColumnFamily(pool, 'SuperTime') cls.cf_suplex = ColumnFamily(pool, 'SuperLex') cls.cf_supascii = ColumnFamily(pool, 'SuperAscii') cls.cf_suputf8 = ColumnFamily(pool, 'SuperUTF8') cls.cf_supbytes = ColumnFamily(pool, 'SuperBytes') cls.cfs = [cls.cf_suplong, cls.cf_supint, cls.cf_suptime, cls.cf_suplex, cls.cf_supascii, cls.cf_suputf8, cls.cf_supbytes] def tearDown(self): for cf in TestSuperCFs.cfs: for key, cols in cf.get_range(): cf.remove(key) def make_super_group(self, cf, cols): diction = OrderedDict([(cols[0], {'bytes': VALS[0]}), (cols[1], {'bytes': VALS[1]}), (cols[2], {'bytes': VALS[2]})]) return {'cf': cf, 'cols': cols, 'dict': diction} def test_super_column_families(self): # For each data type, create a group that includes its column family, # a set of column names, and a dictionary that maps from the column # names to values. type_groups = [] long_cols = [1111111111111111L, 2222222222222222L, 3333333333333333L] type_groups.append(self.make_super_group(TestSuperCFs.cf_suplong, long_cols)) int_cols = [1, 2, 3] type_groups.append(self.make_super_group(TestSuperCFs.cf_supint, int_cols)) big_int_cols = [1 + int(time.time() * 10 ** 6), 2 + int(time.time() * 10 ** 6), 3 + int(time.time() * 10 ** 6)] type_groups.append(self.make_super_group(TestSuperCFs.cf_supbigint, big_int_cols)) time_cols = [TIME1, TIME2, TIME3] type_groups.append(self.make_super_group(TestSuperCFs.cf_suptime, time_cols)) lex_cols = [uuid.UUID(bytes='aaa aaa aaa aaaa'), uuid.UUID(bytes='bbb bbb bbb bbbb'), uuid.UUID(bytes='ccc ccc ccc cccc')] type_groups.append(self.make_super_group(TestSuperCFs.cf_suplex, lex_cols)) ascii_cols = ['aaaa', 'bbbb', 'cccc'] type_groups.append(self.make_super_group(TestSuperCFs.cf_supascii, ascii_cols)) utf8_cols = [u'a\u0020', u'b\u0020', u'c\u0020'] type_groups.append(self.make_super_group(TestSuperCFs.cf_suputf8, utf8_cols)) bytes_cols = ['aaaa', 'bbbb', 'cccc'] type_groups.append(self.make_super_group(TestSuperCFs.cf_supbytes, bytes_cols)) # Begin the actual inserting and getting for group in type_groups: cf = group.get('cf') gdict = group.get('dict') gcols = group.get('cols') cf.insert(KEYS[0], gdict) assert_equal(cf.get(KEYS[0]), gdict) # Check each supercolumn individually for i in range(3): res = cf.get(KEYS[0], columns=[gcols[i]]) assert_equal(res, {gcols[i]: {'bytes': VALS[i]}}) # Check that if we list all columns, we get the full dict assert_equal(cf.get(KEYS[0], columns=gcols[:]), gdict) # The same thing with a start and end instead assert_equal(cf.get(KEYS[0], column_start=gcols[0], column_finish=gcols[2]), gdict) # A start and end that are the same assert_equal(cf.get(KEYS[0], column_start=gcols[0], column_finish=gcols[0]), {gcols[0]: {'bytes': VALS[0]}}) # test xget paging assert_equal(list(cf.xget(KEYS[0], buffer_size=2)), gdict.items()) assert_equal(cf.get_count(KEYS[0]), 3) # Test removing rows cf.remove(KEYS[0], columns=gcols[:1]) assert_equal(cf.get_count(KEYS[0]), 2) cf.remove(KEYS[0], columns=gcols[1:]) assert_equal(cf.get_count(KEYS[0]), 0) # Insert more than one row now cf.insert(KEYS[0], gdict) cf.insert(KEYS[1], gdict) cf.insert(KEYS[2], gdict) ### multiget() tests ### res = cf.multiget(KEYS[:]) for i in range(3): assert_equal(res.get(KEYS[i]), gdict) res = cf.multiget(KEYS[2:]) assert_equal(res.get(KEYS[2]), gdict) # Check each column individually for i in range(3): res = cf.multiget(KEYS[:], columns=[gcols[i]]) for j in range(3): assert_equal(res.get(KEYS[j]), {gcols[i]: {'bytes': VALS[i]}}) # Check that if we list all columns, we get the full dict res = cf.multiget(KEYS[:], columns=gcols[:]) for j in range(3): assert_equal(res.get(KEYS[j]), gdict) # The same thing with a start and end instead res = cf.multiget(KEYS[:], column_start=gcols[0], column_finish=gcols[2]) for j in range(3): assert_equal(res.get(KEYS[j]), gdict) # A start and end that are the same res = cf.multiget(KEYS[:], column_start=gcols[0], column_finish=gcols[0]) for j in range(3): assert_equal(res.get(KEYS[j]), {gcols[0]: {'bytes': VALS[0]}}) ### get_range() tests ### res = cf.get_range(start=KEYS[0]) for sub_res in res: assert_equal(sub_res[1], gdict) res = cf.get_range(start=KEYS[0], column_start=gcols[0], column_finish=gcols[2]) for sub_res in res: assert_equal(sub_res[1], gdict) res = cf.get_range(start=KEYS[0], columns=gcols[:]) for sub_res in res: assert_equal(sub_res[1], gdict) class TestSuperSubCFs(unittest.TestCase): @classmethod def setup_class(cls): sys = SystemManager() sys.create_column_family(TEST_KS, 'SuperLongSubLong', super=True, comparator_type=LongType(), subcomparator_type=LongType()) sys.create_column_family(TEST_KS, 'SuperLongSubInt', super=True, comparator_type=LongType(), subcomparator_type=IntegerType()) sys.create_column_family(TEST_KS, 'SuperLongSubBigInt', super=True, comparator_type=LongType(), subcomparator_type=IntegerType()) sys.create_column_family(TEST_KS, 'SuperLongSubTime', super=True, comparator_type=LongType(), subcomparator_type=TimeUUIDType()) sys.create_column_family(TEST_KS, 'SuperLongSubLex', super=True, comparator_type=LongType(), subcomparator_type=LexicalUUIDType()) sys.create_column_family(TEST_KS, 'SuperLongSubAscii', super=True, comparator_type=LongType(), subcomparator_type=AsciiType()) sys.create_column_family(TEST_KS, 'SuperLongSubUTF8', super=True, comparator_type=LongType(), subcomparator_type=UTF8Type()) sys.create_column_family(TEST_KS, 'SuperLongSubBytes', super=True, comparator_type=LongType(), subcomparator_type=BytesType()) sys.close() cls.cf_suplong_sublong = ColumnFamily(pool, 'SuperLongSubLong') cls.cf_suplong_subint = ColumnFamily(pool, 'SuperLongSubInt') cls.cf_suplong_subbigint = ColumnFamily(pool, 'SuperLongSubBigInt') cls.cf_suplong_subtime = ColumnFamily(pool, 'SuperLongSubTime') cls.cf_suplong_sublex = ColumnFamily(pool, 'SuperLongSubLex') cls.cf_suplong_subascii = ColumnFamily(pool, 'SuperLongSubAscii') cls.cf_suplong_subutf8 = ColumnFamily(pool, 'SuperLongSubUTF8') cls.cf_suplong_subbytes = ColumnFamily(pool, 'SuperLongSubBytes') cls.cfs = [cls.cf_suplong_subint, cls.cf_suplong_subint, cls.cf_suplong_subtime, cls.cf_suplong_sublex, cls.cf_suplong_subascii, cls.cf_suplong_subutf8, cls.cf_suplong_subbytes] def tearDown(self): for cf in TestSuperSubCFs.cfs: for key, cols in cf.get_range(): cf.remove(key) def make_sub_group(self, cf, cols): diction = {123L: {cols[0]: VALS[0], cols[1]: VALS[1], cols[2]: VALS[2]}} return {'cf': cf, 'cols': cols, 'dict': diction} def test_super_column_family_subs(self): # For each data type, create a group that includes its column family, # a set of column names, and a dictionary that maps from the column # names to values. type_groups = [] long_cols = [1111111111111111L, 2222222222222222L, 3333333333333333L] type_groups.append(self.make_sub_group(TestSuperSubCFs.cf_suplong_sublong, long_cols)) int_cols = [1, 2, 3] type_groups.append(self.make_sub_group(TestSuperSubCFs.cf_suplong_subint, int_cols)) big_int_cols = [1 + int(time.time() * 10 ** 6), 2 + int(time.time() * 10 ** 6), 3 + int(time.time() * 10 ** 6)] type_groups.append(self.make_sub_group(TestSuperSubCFs.cf_suplong_subbigint, big_int_cols)) time_cols = [TIME1, TIME2, TIME3] type_groups.append(self.make_sub_group(TestSuperSubCFs.cf_suplong_subtime, time_cols)) lex_cols = [uuid.UUID(bytes='aaa aaa aaa aaaa'), uuid.UUID(bytes='bbb bbb bbb bbbb'), uuid.UUID(bytes='ccc ccc ccc cccc')] type_groups.append(self.make_sub_group(TestSuperSubCFs.cf_suplong_sublex, lex_cols)) ascii_cols = ['aaaa', 'bbbb', 'cccc'] type_groups.append(self.make_sub_group(TestSuperSubCFs.cf_suplong_subascii, ascii_cols)) utf8_cols = [u'a\u0020', u'b\u0020', u'c\u0020'] type_groups.append(self.make_sub_group(TestSuperSubCFs.cf_suplong_subutf8, utf8_cols)) bytes_cols = ['aaaa', 'bbbb', 'cccc'] type_groups.append(self.make_sub_group(TestSuperSubCFs.cf_suplong_subbytes, bytes_cols)) # Begin the actual inserting and getting for group in type_groups: cf = group.get('cf') gdict = group.get('dict') cf.insert(KEYS[0], gdict) assert_equal(cf.get(KEYS[0]), gdict) assert_equal(cf.get(KEYS[0], columns=[123L]), gdict) # A start and end that are the same assert_equal(cf.get(KEYS[0], column_start=123L, column_finish=123L), gdict) res = cf.get(KEYS[0], super_column=123L, column_start=group.get('cols')[0]) assert_equal(res, gdict.get(123L)) res = cf.get(KEYS[0], super_column=123L, column_finish=group.get('cols')[-1]) assert_equal(res, gdict.get(123L)) assert_equal(cf.get_count(KEYS[0]), 1) # Test removing rows cf.remove(KEYS[0], super_column=123L) assert_equal(cf.get_count(KEYS[0]), 0) # Insert more than one row now cf.insert(KEYS[0], gdict) cf.insert(KEYS[1], gdict) cf.insert(KEYS[2], gdict) ### multiget() tests ### res = cf.multiget(KEYS[:]) for i in range(3): assert_equal(res.get(KEYS[i]), gdict) res = cf.multiget(KEYS[2:]) assert_equal(res.get(KEYS[2]), gdict) res = cf.multiget(KEYS[:], columns=[123L]) for i in range(3): assert_equal(res.get(KEYS[i]), gdict) res = cf.multiget(KEYS[:], super_column=123L) for i in range(3): assert_equal(res.get(KEYS[i]), gdict.get(123L)) res = cf.multiget(KEYS[:], super_column=123L, column_start=group.get('cols')[0]) for i in range(3): assert_equal(res.get(KEYS[i]), gdict.get(123L)) res = cf.multiget(KEYS[:], column_start=123L, column_finish=123L) for j in range(3): assert_equal(res.get(KEYS[j]), gdict) ### get_range() tests ### res = cf.get_range(start=KEYS[0]) for sub_res in res: assert_equal(sub_res[1], gdict) res = cf.get_range(start=KEYS[0], column_start=123L, column_finish=123L) for sub_res in res: assert_equal(sub_res[1], gdict) res = cf.get_range(start=KEYS[0], columns=[123L]) for sub_res in res: assert_equal(sub_res[1], gdict) res = cf.get_range(start=KEYS[0], super_column=123L) for sub_res in res: assert_equal(sub_res[1], gdict.get(123L)) res = cf.get_range(start=KEYS[0], super_column=123L, column_start=group.get('cols')[0]) for sub_res in res: assert_equal(sub_res[1], gdict.get(123L)) res = cf.get_range(start=KEYS[0], super_column=123L, column_finish=group.get('cols')[-1]) for sub_res in res: assert_equal(sub_res[1], gdict.get(123L)) class TestValidators(unittest.TestCase): def test_validation_with_packed_names(self): """ Make sure that validated columns are packed correctly when the column names themselves must be packed """ sys = SystemManager() sys.create_column_family(TEST_KS, 'Validators2', comparator_type=LongType(), default_validation_class=LongType()) sys.alter_column(TEST_KS, 'Validators2', 1, TimeUUIDType()) sys.close() my_uuid = uuid.uuid1() cf = ColumnFamily(pool, 'Validators2') cf.insert('key', {0: 0}) assert_equal(cf.get('key'), {0: 0}) cf.insert('key', {1: my_uuid}) assert_equal(cf.get('key'), {0: 0, 1: my_uuid}) cf.insert('key', {0: 0, 1: my_uuid}) assert_equal(cf.get('key'), {0: 0, 1: my_uuid}) def test_validated_columns(self): sys = SystemManager() sys.create_column_family(TEST_KS, 'Validators',) sys.alter_column(TEST_KS, 'Validators', 'long', LongType()) sys.alter_column(TEST_KS, 'Validators', 'int', IntegerType()) sys.alter_column(TEST_KS, 'Validators', 'time', TimeUUIDType()) sys.alter_column(TEST_KS, 'Validators', 'lex', LexicalUUIDType()) sys.alter_column(TEST_KS, 'Validators', 'ascii', AsciiType()) sys.alter_column(TEST_KS, 'Validators', 'utf8', UTF8Type()) sys.alter_column(TEST_KS, 'Validators', 'bytes', BytesType()) sys.close() cf = ColumnFamily(pool, 'Validators') key = 'key1' col = {'long': 1L} cf.insert(key, col) assert_equal(cf.get(key)['long'], 1L) col = {'int': 1} cf.insert(key, col) assert_equal(cf.get(key)['int'], 1) col = {'time': TIME1} cf.insert(key, col) assert_equal(cf.get(key)['time'], TIME1) col = {'lex': uuid.UUID(bytes='aaa aaa aaa aaaa')} cf.insert(key, col) assert_equal(cf.get(key)['lex'], uuid.UUID(bytes='aaa aaa aaa aaaa')) col = {'ascii': 'aaa'} cf.insert(key, col) assert_equal(cf.get(key)['ascii'], 'aaa') col = {'utf8': u'a\u0020'} cf.insert(key, col) assert_equal(cf.get(key)['utf8'], u'a\u0020') col = {'bytes': 'aaa'} cf.insert(key, col) assert_equal(cf.get(key)['bytes'], 'aaa') cf.remove(key) class TestDefaultValidators(unittest.TestCase): def test_default_validated_columns(self): sys = SystemManager() sys.create_column_family(TEST_KS, 'DefaultValidator', default_validation_class=LongType()) sys.alter_column(TEST_KS, 'DefaultValidator', 'subcol', TimeUUIDType()) sys.close() cf = ColumnFamily(pool, 'DefaultValidator') key = 'key1' col_cf = {'aaaaaa': 1L} col_cm = {'subcol': TIME1} col_ncf = {'aaaaaa': TIME1} # Both of these inserts work, as cf allows # longs and cm for 'subcol' allows TIMEUUIDs. cf.insert(key, col_cf) cf.insert(key, col_cm) assert_equal(cf.get(key), {'aaaaaa': 1L, 'subcol': TIME1}) # Insert multiple columns at once col_cf.update(col_cm) cf.insert(key, col_cf) assert_equal(cf.get(key), {'aaaaaa': 1L, 'subcol': TIME1}) assert_raises(TypeError, cf.insert, key, col_ncf) cf.remove(key) class TestKeyValidators(unittest.TestCase): @classmethod def setup_class(cls): sys = SystemManager() sys.create_column_family(TEST_KS, 'KeyLong', key_validation_class=LongType()) sys.create_column_family(TEST_KS, 'KeyInteger', key_validation_class=IntegerType()) sys.create_column_family(TEST_KS, 'KeyTimeUUID', key_validation_class=TimeUUIDType()) sys.create_column_family(TEST_KS, 'KeyLexicalUUID', key_validation_class=LexicalUUIDType()) sys.create_column_family(TEST_KS, 'KeyAscii', key_validation_class=AsciiType()) sys.create_column_family(TEST_KS, 'KeyUTF8', key_validation_class=UTF8Type()) sys.create_column_family(TEST_KS, 'KeyBytes', key_validation_class=BytesType()) sys.close() cls.cf_long = ColumnFamily(pool, 'KeyLong') cls.cf_int = ColumnFamily(pool, 'KeyInteger') cls.cf_time = ColumnFamily(pool, 'KeyTimeUUID') cls.cf_lex = ColumnFamily(pool, 'KeyLexicalUUID') cls.cf_ascii = ColumnFamily(pool, 'KeyAscii') cls.cf_utf8 = ColumnFamily(pool, 'KeyUTF8') cls.cf_bytes = ColumnFamily(pool, 'KeyBytes') cls.cfs = [cls.cf_long, cls.cf_int, cls.cf_time, cls.cf_lex, cls.cf_ascii, cls.cf_utf8, cls.cf_bytes] def tearDown(self): for cf in TestKeyValidators.cfs: for key, cols in cf.get_range(): cf.remove(key) def setUp(self): self.type_groups = [] long_keys = [1111111111111111L, 2222222222222222L, 3333333333333333L] self.type_groups.append((TestKeyValidators.cf_long, long_keys)) int_keys = [1, 2, 3] self.type_groups.append((TestKeyValidators.cf_int, int_keys)) time_keys = [TIME1, TIME2, TIME3] self.type_groups.append((TestKeyValidators.cf_time, time_keys)) lex_keys = [uuid.UUID(bytes='aaa aaa aaa aaaa'), uuid.UUID(bytes='bbb bbb bbb bbbb'), uuid.UUID(bytes='ccc ccc ccc cccc')] self.type_groups.append((TestKeyValidators.cf_lex, lex_keys)) ascii_keys = ['aaaa', 'bbbb', 'cccc'] self.type_groups.append((TestKeyValidators.cf_ascii, ascii_keys)) utf8_keys = [u'a\u0020', u'b\u0020', u'c\u0020'] self.type_groups.append((TestKeyValidators.cf_utf8, utf8_keys)) bytes_keys = ['aaaa', 'bbbb', 'cccc'] self.type_groups.append((TestKeyValidators.cf_bytes, bytes_keys)) def test_inserts(self): for cf, keys in self.type_groups: for key in keys: cf.insert(key, {str(key): 'val'}) results = cf.get(key) assert_equal(results, {str(key): 'val'}) col1 = str(key) + "1" col2 = str(key) + "2" cols = {col1: "val1", col2: "val2"} cf.insert(key, cols) results = cf.get(key) cols.update({str(key): 'val'}) assert_equal(results, cols) def test_batch_insert(self): for cf, keys in self.type_groups: rows = dict([(key, {str(key): 'val'}) for key in keys]) cf.batch_insert(rows) for key in keys: results = cf.get(key) assert_equal(results, {str(key): 'val'}) def test_multiget(self): for cf, keys in self.type_groups: for key in keys: cf.insert(key, {str(key): 'val'}) results = cf.multiget(keys) for key in keys: assert_true(key in results) assert_equal(results[key], {str(key): 'val'}) def test_get_count(self): for cf, keys in self.type_groups: for key in keys: cf.insert(key, {str(key): 'val'}) results = cf.get_count(key) assert_equal(results, 1) def test_multiget_count(self): for cf, keys in self.type_groups: for key in keys: cf.insert(key, {str(key): 'val'}) results = cf.multiget_count(keys) for key in keys: assert_true(key in results, "%s should be in %r" % (key, results)) assert_equal(results[key], 1) def test_get_range(self): for cf, keys in self.type_groups: for key in keys: cf.insert(key, {str(key): 'val'}) rows = list(cf.get_range()) assert_equal(len(rows), len(keys)) for k, c in rows: assert_true(k in keys) assert_equal(c, {str(k): 'val'}) def test_get_indexed_slices(self): sys = SystemManager() for cf, keys in self.type_groups: sys.create_index(TEST_KS, cf.column_family, 'birthdate', LongType()) cf = ColumnFamily(pool, cf.column_family) for key in keys: cf.insert(key, {'birthdate': 1}) expr = create_index_expression('birthdate', 1) clause = create_index_clause([expr]) rows = list(cf.get_indexed_slices(clause)) assert_equal(len(rows), len(keys)) for k, c in rows: assert_true(k in keys) assert_equal(c, {'birthdate': 1}) def test_remove(self): for cf, keys in self.type_groups: for key in keys: cf.insert(key, {str(key): 'val'}) assert_equal(cf.get(key), {str(key): 'val'}) cf.remove(key) assert_raises(NotFoundException, cf.get, key) def test_add_remove_counter(self): sys = SystemManager() sys.create_column_family(TEST_KS, 'KeyLongCounter', key_validation_class=LongType(), default_validation_class='CounterColumnType') sys.close() cf_long = ColumnFamily(pool, 'KeyLongCounter') key = 1111111111111111L cf_long.add(key, 'col') assert_equal(cf_long.get(key), {'col': 1}) cf_long.remove_counter(key, 'col') time.sleep(0.1) assert_raises(NotFoundException, cf_long.get, key) class TestComposites(unittest.TestCase): @classmethod def setup_class(cls): sys = SystemManager() sys.create_column_family(TEST_KS, 'StaticComposite', comparator_type=CompositeType(LongType(), IntegerType(), TimeUUIDType(reversed=True), LexicalUUIDType(reversed=False), AsciiType(), UTF8Type(), BytesType())) @classmethod def teardown_class(cls): sys = SystemManager() sys.drop_column_family(TEST_KS, 'StaticComposite') def test_static_composite_basic(self): cf = ColumnFamily(pool, 'StaticComposite') colname = (127312831239123123, 1, uuid.uuid1(), uuid.uuid4(), 'foo', u'ba\u0254r', 'baz') cf.insert('key', {colname: 'val'}) assert_equal(cf.get('key'), {colname: 'val'}) def test_static_composite_slicing(self): cf = ColumnFamily(pool, 'StaticComposite') u1 = uuid.uuid1() u4 = uuid.uuid4() col0 = (0, 1, u1, u4, '', '', '') col1 = (1, 1, u1, u4, '', '', '') col2 = (1, 2, u1, u4, '', '', '') col3 = (1, 3, u1, u4, '', '', '') col4 = (2, 1, u1, u4, '', '', '') cf.insert('key2', {col0: '', col1: '', col2: '', col3: '', col4: ''}) result = cf.get('key2', column_start=((1, True),), column_finish=((1, True),)) assert_equal(result, {col1: '', col2: '', col3: ''}) result = cf.get('key2', column_start=(1,), column_finish=((2, False), )) assert_equal(result, {col1: '', col2: '', col3: ''}) result = cf.get('key2', column_start=((1, True),), column_finish=((2, False), )) assert_equal(result, {col1: '', col2: '', col3: ''}) result = cf.get('key2', column_start=(1, ), column_finish=((2, False), )) assert_equal(result, {col1: '', col2: '', col3: ''}) result = cf.get('key2', column_start=((0, False), ), column_finish=((2, False), )) assert_equal(result, {col1: '', col2: '', col3: ''}) result = cf.get('key2', column_start=(1, 1), column_finish=(1, 3)) assert_equal(result, {col1: '', col2: '', col3: ''}) result = cf.get('key2', column_start=(1, 1), column_finish=(1, (3, True))) assert_equal(result, {col1: '', col2: '', col3: ''}) result = cf.get('key2', column_start=(1, (1, True)), column_finish=((2, False), )) assert_equal(result, {col1: '', col2: '', col3: ''}) def test_static_composite_get_partial_composite(self): cf = ColumnFamily(pool, 'StaticComposite') cf.insert('key3', {(123123, 1): 'val'}) assert_equal(cf.get('key3'), {(123123, 1): 'val'}) def test_uuid_composites(self): sys = SystemManager() sys.create_column_family(TEST_KS, 'UUIDComposite', comparator_type=CompositeType(IntegerType(reversed=True), TimeUUIDType()), key_validation_class=TimeUUIDType(), default_validation_class=UTF8Type()) key, u1, u2 = uuid.uuid1(), uuid.uuid1(), uuid.uuid1() cf = ColumnFamily(pool, 'UUIDComposite') cf.insert(key, {(123123, u1): 'foo'}) cf.insert(key, {(123123, u1): 'foo', (-1, u2): 'bar', (-123123123, u1): 'baz'}) assert_equal(cf.get(key), {(123123, u1): 'foo', (-1, u2): 'bar', (-123123123, u1): 'baz'}) def test_single_component_composite(self): sys = SystemManager() sys.create_column_family(TEST_KS, 'SingleComposite', comparator_type=CompositeType(IntegerType())) cf = ColumnFamily(pool, 'SingleComposite') cf.insert('key', {(123456,): 'val'}) assert_equal(cf.get('key'), {(123456,): 'val'}) class TestDynamicComposites(unittest.TestCase): @classmethod def setup_class(cls): sys = SystemManager() sys.create_column_family(TEST_KS, 'StaticDynamicComposite', comparator_type=DynamicCompositeType({'l': LongType(), 'i': IntegerType(), 'T': TimeUUIDType(reversed=True), 'x': LexicalUUIDType(reversed=False), 'a': AsciiType(), 's': UTF8Type(), 'b': BytesType()})) @classmethod def teardown_class(cls): sys = SystemManager() sys.drop_column_family(TEST_KS, 'StaticDynamicComposite') def setUp(self): global a, b, i, I, x, l, t, T, s component = namedtuple('DynamicComponent', ['type','value']) ascii_alias = component('a', None) bytes_alias = component('b', None) integer_alias = component('i', None) integer_rev_alias = component('I', None) lexicaluuid_alias = component('x', None) long_alias = component('l', None) timeuuid_alias = component('t', None) timeuuid_rev_alias = component('T', None) utf8_alias = component('s', None) _r = lambda t, v: t._replace(value=v) a = lambda v: _r(ascii_alias, v) b = lambda v: _r(bytes_alias, v) i = lambda v: _r(integer_alias, v) I = lambda v: _r(integer_rev_alias, v) x = lambda v: _r(lexicaluuid_alias, v) l = lambda v: _r(long_alias, v) t = lambda v: _r(timeuuid_alias, v) T = lambda v: _r(timeuuid_rev_alias, v) s = lambda v: _r(utf8_alias, v) def test_static_composite_basic(self): cf = ColumnFamily(pool, 'StaticDynamicComposite') colname = (l(127312831239123123), i(1), T(uuid.uuid1()), x(uuid.uuid4()), a('foo'), s(u'ba\u0254r'), b('baz')) cf.insert('key', {colname: 'val'}) assert_equal(cf.get('key'), {colname: 'val'}) def test_static_composite_slicing(self): cf = ColumnFamily(pool, 'StaticDynamicComposite') u1 = uuid.uuid1() u4 = uuid.uuid4() col0 = (l(0), i(1), T(u1), x(u4), a(''), s(''), b('')) col1 = (l(1), i(1), T(u1), x(u4), a(''), s(''), b('')) col2 = (l(1), i(2), T(u1), x(u4), a(''), s(''), b('')) col3 = (l(1), i(3), T(u1), x(u4), a(''), s(''), b('')) col4 = (l(2), i(1), T(u1), x(u4), a(''), s(''), b('')) cf.insert('key2', {col0: '', col1: '', col2: '', col3: '', col4: ''}) result = cf.get('key2', column_start=((l(1), True),), column_finish=((l(1), True),)) assert_equal(result, {col1: '', col2: '', col3: ''}) result = cf.get('key2', column_start=(l(1),), column_finish=((l(2), False), )) assert_equal(result, {col1: '', col2: '', col3: ''}) result = cf.get('key2', column_start=((l(1), True),), column_finish=((l(2), False), )) assert_equal(result, {col1: '', col2: '', col3: ''}) result = cf.get('key2', column_start=(l(1), ), column_finish=((l(2), False), )) assert_equal(result, {col1: '', col2: '', col3: ''}) result = cf.get('key2', column_start=((l(0), False), ), column_finish=((l(2), False), )) assert_equal(result, {col1: '', col2: '', col3: ''}) result = cf.get('key2', column_start=(l(1), i(1)), column_finish=(l(1), i(3))) assert_equal(result, {col1: '', col2: '', col3: ''}) result = cf.get('key2', column_start=(l(1), i(1)), column_finish=(l(1), (i(3), True))) assert_equal(result, {col1: '', col2: '', col3: ''}) result = cf.get('key2', column_start=(l(1), (i(1), True)), column_finish=((l(2), False), )) assert_equal(result, {col1: '', col2: '', col3: ''}) def test_static_composite_get_partial_composite(self): cf = ColumnFamily(pool, 'StaticDynamicComposite') cf.insert('key3', {(l(123123), i(1)): 'val'}) assert_equal(cf.get('key3'), {(l(123123), i(1)): 'val'}) def test_uuid_composites(self): sys = SystemManager() sys.create_column_family(TEST_KS, 'UUIDDynamicComposite', comparator_type=DynamicCompositeType({'I': IntegerType(reversed=True), 't': TimeUUIDType()}), key_validation_class=TimeUUIDType(), default_validation_class=UTF8Type()) key, u1, u2 = uuid.uuid1(), uuid.uuid1(), uuid.uuid1() cf = ColumnFamily(pool, 'UUIDDynamicComposite') cf.insert(key, {(I(123123), t(u1)): 'foo'}) cf.insert(key, {(I(123123), t(u1)): 'foo', (I(-1), t(u2)): 'bar', (I(-123123123), t(u1)): 'baz'}) assert_equal(cf.get(key), {(I(123123), t(u1)): 'foo', (I(-1), t(u2)): 'bar', (I(-123123123), t(u1)): 'baz'}) def test_single_component_composite(self): sys = SystemManager() sys.create_column_family(TEST_KS, 'SingleDynamicComposite', comparator_type=DynamicCompositeType({'i': IntegerType()})) cf = ColumnFamily(pool, 'SingleDynamicComposite') cf.insert('key', {(i(123456),): 'val'}) assert_equal(cf.get('key'), {(i(123456),): 'val'}) class TestBigInt(unittest.TestCase): @classmethod def setup_class(cls): sys = SystemManager() sys.create_column_family(TEST_KS, 'StdInteger', comparator_type=IntegerType()) @classmethod def teardown_class(cls): sys = SystemManager() sys.drop_column_family(TEST_KS, 'StdInteger') def setUp(self): self.key = 'TestBigInt' self.cf = ColumnFamily(pool, 'StdInteger') def tearDown(self): self.cf.remove(self.key) def test_negative_integers(self): self.cf.insert(self.key, {-1: '-1'}) self.cf.insert(self.key, {-12342390: '-12342390'}) self.cf.insert(self.key, {-255: '-255'}) self.cf.insert(self.key, {-256: '-256'}) self.cf.insert(self.key, {-257: '-257'}) for key, cols in self.cf.get_range(): self.assertEquals(str(cols.keys()[0]), cols.values()[0]) class TestTimeUUIDs(unittest.TestCase): @classmethod def setup_class(cls): sys = SystemManager() sys.create_column_family(TEST_KS, 'TestTimeUUIDs', comparator_type=TimeUUIDType()) sys.close() cls.cf_time = ColumnFamily(pool, 'TestTimeUUIDs') def test_datetime_to_uuid(self): cf_time = TestTimeUUIDs.cf_time key = 'key1' timeline = [] timeline.append(datetime.utcnow()) time1 = uuid1() col1 = {time1: '0'} cf_time.insert(key, col1) time.sleep(1) timeline.append(datetime.utcnow()) time2 = uuid1() col2 = {time2: '1'} cf_time.insert(key, col2) time.sleep(1) timeline.append(datetime.utcnow()) cols = {time1: '0', time2: '1'} assert_equal(cf_time.get(key, column_start=timeline[0]) , cols) assert_equal(cf_time.get(key, column_finish=timeline[2]) , cols) assert_equal(cf_time.get(key, column_start=timeline[0], column_finish=timeline[2]) , cols) assert_equal(cf_time.get(key, column_start=timeline[0], column_finish=timeline[2]) , cols) assert_equal(cf_time.get(key, column_start=timeline[0], column_finish=timeline[1]) , col1) assert_equal(cf_time.get(key, column_start=timeline[1], column_finish=timeline[2]) , col2) cf_time.remove(key) def test_time_to_uuid(self): cf_time = TestTimeUUIDs.cf_time key = 'key1' timeline = [] timeline.append(time.time()) time1 = uuid1() col1 = {time1: '0'} cf_time.insert(key, col1) time.sleep(0.1) timeline.append(time.time()) time2 = uuid1() col2 = {time2: '1'} cf_time.insert(key, col2) time.sleep(0.1) timeline.append(time.time()) cols = {time1:'0', time2: '1'} assert_equal(cf_time.get(key, column_start=timeline[0]) , cols) assert_equal(cf_time.get(key, column_finish=timeline[2]) , cols) assert_equal(cf_time.get(key, column_start=timeline[0], column_finish=timeline[2]) , cols) assert_equal(cf_time.get(key, column_start=timeline[0], column_finish=timeline[2]) , cols) assert_equal(cf_time.get(key, column_start=timeline[0], column_finish=timeline[1]) , col1) assert_equal(cf_time.get(key, column_start=timeline[1], column_finish=timeline[2]) , col2) cf_time.remove(key) def test_auto_time_to_uuid1(self): cf_time = TestTimeUUIDs.cf_time key = 'key1' t = time.time() col = {t: 'foo'} cf_time.insert(key, col) uuid_res = cf_time.get(key).keys()[0] timestamp = convert_uuid_to_time(uuid_res) assert_almost_equal(timestamp, t, places=3) cf_time.remove(key) class TestTypeErrors(unittest.TestCase): def test_packing_enabled(self): self.cf = ColumnFamily(pool, 'Standard1') self.cf.insert('key', {'col': 'val'}) assert_raises(TypeError, self.cf.insert, args=('key', {123: 'val'})) assert_raises(TypeError, self.cf.insert, args=('key', {'col': 123})) assert_raises(TypeError, self.cf.insert, args=('key', {123: 123})) self.cf.remove('key') def test_packing_disabled(self): self.cf = ColumnFamily(pool, 'Standard1', autopack_names=False, autopack_values=False) self.cf.insert('key', {'col': 'val'}) assert_raises(TypeError, self.cf.insert, args=('key', {123: 'val'})) assert_raises(TypeError, self.cf.insert, args=('key', {'col': 123})) assert_raises(TypeError, self.cf.insert, args=('key', {123: 123})) self.cf.remove('key') class TestDateTypes(unittest.TestCase): def _compare_dates(self, d1, d2): self.assertEquals(d1.timetuple(), d2.timetuple()) self.assertEquals(int(d1.microsecond/1e3), int(d2.microsecond/1e3)) def test_compatibility(self): self.cf = ColumnFamily(pool, 'Standard1') self.cf.column_validators['date'] = OldPycassaDateType() d = datetime.utcnow() self.cf.insert('key1', {'date': d}) self._compare_dates(self.cf.get('key1')['date'], d) self.cf.column_validators['date'] = IntermediateDateType() self._compare_dates(self.cf.get('key1')['date'], d) self.cf.insert('key1', {'date': d}) self._compare_dates(self.cf.get('key1')['date'], d) self.cf.column_validators['date'] = DateType() self._compare_dates(self.cf.get('key1')['date'], d) self.cf.insert('key1', {'date': d}) self._compare_dates(self.cf.get('key1')['date'], d) self.cf.remove('key1') class TestPackerOverride(unittest.TestCase): @classmethod def setup_class(cls): sys = SystemManager() sys.create_column_family(TEST_KS, 'CompositeOverrideCF', comparator_type=CompositeType(AsciiType(), AsciiType()), default_validation_class=AsciiType()) @classmethod def teardown_class(cls): sys = SystemManager() sys.drop_column_family(TEST_KS, 'CompositeOverrideCF') def test_column_validator(self): cf = ColumnFamily(pool, 'CompositeOverrideCF') cf.column_validators[('a', 'b')] = BooleanType() cf.insert('key', {('a', 'a'): 'foo', ('a', 'b'): True}) assert_equal(cf.get('key'), {('a', 'a'): 'foo', ('a', 'b'): True}) assert_equal(cf.column_validators[('a', 'b')].__class__, BooleanType) keys = cf.column_validators.keys() assert_equal(keys, [('a', 'b')]) del cf.column_validators[('a', 'b')] assert_raises(KeyError, cf.column_validators.__getitem__, ('a', 'b')) class TestCustomTypes(unittest.TestCase): class IntString(CassandraType): @staticmethod def pack(intval): return str(intval) @staticmethod def unpack(strval): return int(strval) class IntString2(CassandraType): def __init__(self, *args, **kwargs): self.pack = lambda val: str(val) self.unpack = lambda val: int(val) def test_staticmethod_funcs(self): self.cf = ColumnFamily(pool, 'Standard1') self.cf.key_validation_class = TestCustomTypes.IntString() self.cf.insert(1234, {'col': 'val'}) assert_equal(self.cf.get(1234), {'col': 'val'}) def test_constructor_lambdas(self): self.cf = ColumnFamily(pool, 'Standard1') self.cf.key_validation_class = TestCustomTypes.IntString2() self.cf.insert(1234, {'col': 'val'}) assert_equal(self.cf.get(1234), {'col': 'val'}) class TestCustomComposite(unittest.TestCase): """ Test CompositeTypes with custom inner types. """ # Some contrived scenarios class IntDateType(CassandraType): """ Represent a date as an integer. E.g.: March 05, 2012 = 20120305 """ @staticmethod def pack(v, *args, **kwargs): assert type(v) in (datetime, date), "Invalid arg" str_date = v.strftime("%Y%m%d") return marshal.encode_int(int(str_date)) @staticmethod def unpack(v, *args, **kwargs): int_date = marshal.decode_int(v) return date(*time.strptime(str(int_date), "%Y%m%d")[0:3]) class IntString(CassandraType): @staticmethod def pack(intval): return str(intval) @staticmethod def unpack(strval): return int(strval) class IntString2(CassandraType): def __init__(self, *args, **kwargs): self.pack = lambda val: str(val) self.unpack = lambda val: int(val) @classmethod def setup_class(cls): sys = SystemManager() sys.create_column_family( TEST_KS, 'CustomComposite1', comparator_type=CompositeType( IntegerType(), UTF8Type())) @classmethod def teardown_class(cls): sys = SystemManager() sys.drop_column_family(TEST_KS, 'CustomComposite1') def test_static_composite_basic(self): cf = ColumnFamily(pool, 'CustomComposite1') colname = (20120305, '12345') cf.insert('key', {colname: 'val1'}) assert_equal(cf.get('key'), {colname: 'val1'}) def test_insert_with_custom_composite(self): cf_std = ColumnFamily(pool, 'CustomComposite1') cf_cust = ColumnFamily(pool, 'CustomComposite1') cf_cust.column_name_class = CompositeType( TestCustomComposite.IntDateType(), TestCustomComposite.IntString()) std_col = (20120311, '321') cust_col = (date(2012, 3, 11), 321) cf_cust.insert('cust_insert_key_1', {cust_col: 'cust_insert_val_1'}) assert_equal(cf_std.get('cust_insert_key_1'), {std_col: 'cust_insert_val_1'}) def test_retrieve_with_custom_composite(self): cf_std = ColumnFamily(pool, 'CustomComposite1') cf_cust = ColumnFamily(pool, 'CustomComposite1') cf_cust.column_name_class = CompositeType( TestCustomComposite.IntDateType(), TestCustomComposite.IntString()) std_col = (20120312, '321') cust_col = (date(2012, 3, 12), 321) cf_std.insert('cust_insert_key_2', {std_col: 'cust_insert_val_2'}) assert_equal(cf_cust.get('cust_insert_key_2'), {cust_col: 'cust_insert_val_2'}) def test_composite_slicing(self): cf_std = ColumnFamily(pool, 'CustomComposite1') cf_cust = ColumnFamily(pool, 'CustomComposite1') cf_cust.column_name_class = CompositeType( TestCustomComposite.IntDateType(), TestCustomComposite.IntString2()) col0 = (20120101, '123') col1 = (20120102, '123') col2 = (20120102, '456') col3 = (20120102, '789') col4 = (20120103, '123') dt0 = date(2012, 1, 1) dt1 = date(2012, 1, 2) dt2 = date(2012, 1, 3) col1_cust = (dt1, 123) col2_cust = (dt1, 456) col3_cust = (dt1, 789) cf_std.insert('key2', {col0: '', col1: '', col2: '', col3: '', col4: ''}) def check(column_start, column_finish, col_reversed=False): result = cf_cust.get('key2', column_start=column_start, column_finish=column_finish, column_reversed=col_reversed) assert_equal(result, {col1_cust: '', col2_cust: '', col3_cust: ''}) # Defaults should be inclusive on both ends check((dt1,), (dt1,)) check((dt1,), (dt1,), True) check(((dt1, True),), ((dt1, True),)) check((dt1,), ((dt2, False),)) check(((dt1, True),), ((dt2, False),)) check(((dt0, False),), ((dt2, False),)) check((dt1, 123), (dt1, 789)) check((dt1, 123), (dt1, (789, True))) check((dt1, (123, True)), ((dt2, False),)) # Test inclusive ends for reversed check(((dt1, True),), ((dt1, True),), True) check( (dt1,), ((dt1, True),), True) check(((dt1, True),), (dt1,), True) # Test exclusive ends for reversed check(((dt2, False),), ((dt0, False),), True) check(((dt2, False),), (dt1,), True) check((dt1,), ((dt0, False),), True) pycassa-1.11.2.1/tests/test_batch_mutation.py000066400000000000000000000117011303744607500211630ustar00rootroot00000000000000from __future__ import with_statement import sys import unittest from nose import SkipTest from nose.tools import assert_raises, assert_equal from pycassa import ConnectionPool, ColumnFamily, NotFoundException import pycassa.batch as batch_mod from pycassa.system_manager import SystemManager ROWS = {'1': {'a': '123', 'b': '123'}, '2': {'a': '234', 'b': '234'}, '3': {'a': '345', 'b': '345'}} pool = cf = scf = counter_cf = super_counter_cf = sysman = None def setup_module(): global pool, cf, scf, counter_cf, super_counter_cf, sysman credentials = {'username': 'jsmith', 'password': 'havebadpass'} pool = ConnectionPool(keyspace='PycassaTestKeyspace', credentials=credentials) cf = ColumnFamily(pool, 'Standard1') scf = ColumnFamily(pool, 'Super1') sysman = SystemManager() counter_cf = ColumnFamily(pool, 'Counter1') super_counter_cf = ColumnFamily(pool, 'SuperCounter1') def teardown_module(): pool.dispose() class TestMutator(unittest.TestCase): def tearDown(self): for key, cols in cf.get_range(): cf.remove(key) for key, cols, in scf.get_range(): scf.remove(key) def test_insert(self): batch = cf.batch() for key, cols in ROWS.iteritems(): batch.insert(key, cols) batch.send() for key, cols in ROWS.items(): assert cf.get(key) == cols def test_insert_supercolumns(self): batch = scf.batch() batch.insert('one', ROWS) batch.insert('two', ROWS) batch.insert('three', ROWS) batch.send() assert scf.get('one') == ROWS assert scf.get('two') == ROWS assert scf.get('three') == ROWS def test_insert_counters(self): batch = counter_cf.batch() batch.insert('one', {'col': 1}) batch.insert('two', {'col': 2}) batch.insert('three', {'col': 3}) batch.send() assert_equal(counter_cf.get('one'), {'col': 1}) assert_equal(counter_cf.get('two'), {'col': 2}) assert_equal(counter_cf.get('three'), {'col': 3}) batch = super_counter_cf.batch() batch.insert('one', {'scol': {'col1': 1, 'col2': 2}}) batch.insert('two', {'scol': {'col1': 3, 'col2': 4}}) batch.send() assert_equal(super_counter_cf.get('one'), {'scol': {'col1': 1, 'col2': 2}}) assert_equal(super_counter_cf.get('two'), {'scol': {'col1': 3, 'col2': 4}}) def test_queue_size(self): batch = cf.batch(queue_size=2) batch.insert('1', ROWS['1']) batch.insert('2', ROWS['2']) batch.insert('3', ROWS['3']) assert cf.get('1') == ROWS['1'] assert_raises(NotFoundException, cf.get, '3') batch.send() for key, cols in ROWS.items(): assert cf.get(key) == cols def test_remove_key(self): batch = cf.batch() batch.insert('1', ROWS['1']) batch.remove('1') batch.send() assert_raises(NotFoundException, cf.get, '1') def test_remove_columns(self): batch = cf.batch() batch.insert('1', {'a': '123', 'b': '123'}) batch.remove('1', ['a']) batch.send() assert cf.get('1') == {'b': '123'} def test_remove_supercolumns(self): batch = scf.batch() batch.insert('one', ROWS) batch.insert('two', ROWS) batch.insert('three', ROWS) batch.remove('two', ['b'], '2') batch.send() assert scf.get('one') == ROWS assert scf.get('two')['2'] == {'a': '234'} assert scf.get('three') == ROWS def test_chained(self): batch = cf.batch() batch.insert('1', ROWS['1']).insert('2', ROWS['2']).insert('3', ROWS['3']).send() assert cf.get('1') == ROWS['1'] assert cf.get('2') == ROWS['2'] assert cf.get('3') == ROWS['3'] def test_contextmgr(self): if sys.version_info < (2, 5): raise SkipTest("No context managers in Python < 2.5") exec """with cf.batch(queue_size=2) as b: b.insert('1', ROWS['1']) b.insert('2', ROWS['2']) b.insert('3', ROWS['3']) assert cf.get('3') == ROWS['3']""" def test_multi_column_family(self): batch = batch_mod.Mutator(pool) cf2 = cf batch.insert(cf, '1', ROWS['1']) batch.insert(cf, '2', ROWS['2']) batch.remove(cf2, '1', ROWS['1']) batch.send() assert cf.get('2') == ROWS['2'] assert_raises(NotFoundException, cf.get, '1') def test_atomic_insert_at_mutator_creation(self): batch = cf.batch(atomic=True) for key, cols in ROWS.iteritems(): batch.insert(key, cols) batch.send() for key, cols in ROWS.items(): assert cf.get(key) == cols def test_atomic_insert_at_send(self): batch = cf.batch(atomic=True) for key, cols in ROWS.iteritems(): batch.insert(key, cols) batch.send(atomic=True) for key, cols in ROWS.items(): assert cf.get(key) == cols pycassa-1.11.2.1/tests/test_columnfamily.py000066400000000000000000000655101303744607500206700ustar00rootroot00000000000000import unittest from nose.tools import assert_raises, assert_equal, assert_true from pycassa import index, ColumnFamily, ConnectionPool,\ NotFoundException, SystemManager from pycassa.util import OrderedDict from tests.util import requireOPP pool = cf = scf = indexed_cf = counter_cf = counter_scf = sys_man = None def setup_module(): global pool, cf, scf, indexed_cf, counter_cf, counter_scf, sys_man credentials = {'username': 'jsmith', 'password': 'havebadpass'} pool = ConnectionPool(keyspace='PycassaTestKeyspace', credentials=credentials, timeout=1.0) cf = ColumnFamily(pool, 'Standard1', dict_class=TestDict) scf = ColumnFamily(pool, 'Super1', dict_class=dict) indexed_cf = ColumnFamily(pool, 'Indexed1') sys_man = SystemManager() counter_cf = ColumnFamily(pool, 'Counter1') counter_scf = ColumnFamily(pool, 'SuperCounter1') def teardown_module(): cf.truncate() indexed_cf.truncate() counter_cf.truncate() counter_scf.truncate() pool.dispose() class TestDict(dict): pass class TestColumnFamily(unittest.TestCase): def setUp(self): self.sys_man = sys_man def tearDown(self): for key, columns in cf.get_range(): cf.remove(key) for key, columns in indexed_cf.get_range(): cf.remove(key) def test_bad_kwarg(self): assert_raises(TypeError, ColumnFamily.__init__, pool, 'test', bar='foo') def test_empty(self): key = 'TestColumnFamily.test_empty' assert_raises(NotFoundException, cf.get, key) assert_equal(len(cf.multiget([key])), 0) for key, columns in cf.get_range(): assert_equal(len(columns), 0) def test_insert_get(self): key = 'TestColumnFamily.test_insert_get' columns = {'1': 'val1', '2': 'val2'} assert_raises(NotFoundException, cf.get, key) ts = cf.insert(key, columns) assert_true(isinstance(ts, (int, long))) assert_equal(cf.get(key), columns) def test_insert_multiget(self): key1 = 'TestColumnFamily.test_insert_multiget1' columns1 = {'1': 'val1', '2': 'val2'} key2 = 'test_insert_multiget1' columns2 = {'3': 'val1', '4': 'val2'} missing_key = 'key3' cf.insert(key1, columns1) cf.insert(key2, columns2) rows = cf.multiget([key1, key2, missing_key]) assert_equal(len(rows), 2) assert_equal(rows[key1], columns1) assert_equal(rows[key2], columns2) assert_true(missing_key not in rows) def test_multiget_multiple_bad_key(self): key = 'efefefef' cf.multiget([key, key, key]) def test_insert_get_count(self): key = 'TestColumnFamily.test_insert_get_count' columns = {'1': 'val1', '2': 'val2'} cf.insert(key, columns) assert_equal(cf.get_count(key), 2) assert_equal(cf.get_count(key, column_start='1'), 2) assert_equal(cf.get_count(key, column_finish='2'), 2) assert_equal(cf.get_count(key, column_start='1', column_finish='2'), 2) assert_equal(cf.get_count(key, column_start='1', column_finish='1'), 1) assert_equal(cf.get_count(key, columns=['1', '2']), 2) assert_equal(cf.get_count(key, columns=['1']), 1) assert_equal(cf.get_count(key, max_count=1), 1) assert_equal(cf.get_count(key, max_count=1, column_reversed=True), 1) assert_equal(cf.get_count(key, column_reversed=True), 2) assert_equal(cf.get_count(key, column_start='1', column_reversed=True), 1) def test_insert_multiget_count(self): keys = ['TestColumnFamily.test_insert_multiget_count1', 'TestColumnFamily.test_insert_multiget_count2', 'TestColumnFamily.test_insert_multiget_count3'] for key in keys: cf.insert(key, {'1': 'val1', '2': 'val2'}) result = cf.multiget_count(keys) assert_equal([result[k] for k in keys], [2 for key in keys]) result = cf.multiget_count(keys, column_start='1') assert_equal([result[k] for k in keys], [2 for key in keys]) result = cf.multiget_count(keys, column_finish='2') assert_equal([result[k] for k in keys], [2 for key in keys]) result = cf.multiget_count(keys, column_start='1', column_finish='2') assert_equal([result[k] for k in keys], [2 for key in keys]) result = cf.multiget_count(keys, column_start='1', column_finish='1') assert_equal([result[k] for k in keys], [1 for key in keys]) result = cf.multiget_count(keys, columns=['1', '2']) assert_equal([result[k] for k in keys], [2 for key in keys]) result = cf.multiget_count(keys, columns=['1']) assert_equal([result[k] for k in keys], [1 for key in keys]) result = cf.multiget_count(keys, max_count=1) assert_equal([result[k] for k in keys], [1 for key in keys]) result = cf.multiget_count(keys, column_start='1', column_reversed=True) assert_equal([result[k] for k in keys], [1 for key in keys]) @requireOPP def test_insert_get_range(self): keys = ['TestColumnFamily.test_insert_get_range%s' % i for i in xrange(5)] columns = {'1': 'val1', '2': 'val2'} for key in keys: cf.insert(key, columns) rows = list(cf.get_range(start=keys[0], finish=keys[-1])) assert_equal(len(rows), len(keys)) for i, (k, c) in enumerate(rows): assert_equal(k, keys[i]) assert_equal(c, columns) @requireOPP def test_get_range_batching(self): cf.truncate() keys = [] columns = {'c': 'v'} for i in range(100, 201): keys.append('key%d' % i) cf.insert('key%d' % i, columns) for i in range(201, 301): cf.insert('key%d' % i, columns) count = 0 for k, v in cf.get_range(row_count=100, buffer_size=10): assert_true(k in keys, 'key "%s" should be in keys' % k) count += 1 assert_equal(count, 100) count = 0 for k, v in cf.get_range(row_count=100, buffer_size=1000): assert_true(k in keys, 'key "%s" should be in keys' % k) count += 1 assert_equal(count, 100) count = 0 for k, v in cf.get_range(row_count=100, buffer_size=150): assert_true(k in keys, 'key "%s" should be in keys' % k) count += 1 assert_equal(count, 100) count = 0 for k, v in cf.get_range(row_count=100, buffer_size=7): assert_true(k in keys, 'key "%s" should be in keys' % k) count += 1 assert_equal(count, 100) count = 0 for k, v in cf.get_range(row_count=100, buffer_size=2): assert_true(k in keys, 'key "%s" should be in keys' % k) count += 1 assert_equal(count, 100) # Put the remaining keys in our list for i in range(201, 301): keys.append('key%d' % i) count = 0 for k, v in cf.get_range(row_count=10000, buffer_size=2): assert_true(k in keys, 'key "%s" should be in keys' % k) count += 1 assert_equal(count, 201) count = 0 for k, v in cf.get_range(row_count=10000, buffer_size=7): assert_true(k in keys, 'key "%s" should be in keys' % k) count += 1 assert_equal(count, 201) count = 0 for k, v in cf.get_range(row_count=10000, buffer_size=200): assert_true(k in keys, 'key "%s" should be in keys' % k) count += 1 assert_equal(count, 201) count = 0 for k, v in cf.get_range(row_count=10000, buffer_size=10000): assert_true(k in keys, 'key "%s" should be in keys' % k) count += 1 assert_equal(count, 201) # Don't give a row count count = 0 for k, v in cf.get_range(buffer_size=2): assert_true(k in keys, 'key "%s" should be in keys' % k) count += 1 assert_equal(count, 201) count = 0 for k, v in cf.get_range(buffer_size=77): assert_true(k in keys, 'key "%s" should be in keys' % k) count += 1 assert_equal(count, 201) count = 0 for k, v in cf.get_range(buffer_size=200): assert_true(k in keys, 'key "%s" should be in keys' % k) count += 1 assert_equal(count, 201) count = 0 for k, v in cf.get_range(buffer_size=10000): assert_true(k in keys, 'key "%s" should be in keys' % k) count += 1 assert_equal(count, 201) cf.truncate() @requireOPP def test_get_range_tokens(self): cf.truncate() columns = {'c': 'v'} for i in range(100, 201): cf.insert('key%d' % i, columns) results = list(cf.get_range(start_token="key100".encode('hex'), finish_token="key200".encode('hex'))) assert_equal(100, len(results)) results = list(cf.get_range(start_token="key100".encode('hex'), finish_token="key200".encode('hex'), buffer_size=10)) assert_equal(100, len(results)) results = list(cf.get_range(start_token="key100".encode('hex'), buffer_size=10)) assert_equal(100, len(results)) results = list(cf.get_range(finish_token="key201".encode('hex'), buffer_size=10)) assert_equal(101, len(results)) def insert_insert_get_indexed_slices(self): indexed_cf = ColumnFamily(pool, 'Indexed1') columns = {'birthdate': 1L} keys = [] for i in range(1, 4): indexed_cf.insert('key%d' % i, columns) keys.append('key%d') expr = index.create_index_expression(column_name='birthdate', value=1L) clause = index.create_index_clause([expr]) count = 0 for key, cols in indexed_cf.get_indexed_slices(clause): assert_equal(cols, columns) assert key in keys count += 1 assert_equal(count, 3) def test_get_indexed_slices_batching(self): indexed_cf = ColumnFamily(pool, 'Indexed1') columns = {'birthdate': 1L} for i in range(200): indexed_cf.insert('key%d' % i, columns) expr = index.create_index_expression(column_name='birthdate', value=1L) clause = index.create_index_clause([expr], count=10) result = list(indexed_cf.get_indexed_slices(clause, buffer_size=2)) assert_equal(len(result), 10) result = list(indexed_cf.get_indexed_slices(clause, buffer_size=10)) assert_equal(len(result), 10) result = list(indexed_cf.get_indexed_slices(clause, buffer_size=77)) assert_equal(len(result), 10) result = list(indexed_cf.get_indexed_slices(clause, buffer_size=200)) assert_equal(len(result), 10) result = list(indexed_cf.get_indexed_slices(clause, buffer_size=1000)) assert_equal(len(result), 10) clause = index.create_index_clause([expr], count=250) result = list(indexed_cf.get_indexed_slices(clause, buffer_size=2)) assert_equal(len(result), 200) result = list(indexed_cf.get_indexed_slices(clause, buffer_size=10)) assert_equal(len(result), 200) result = list(indexed_cf.get_indexed_slices(clause, buffer_size=77)) assert_equal(len(result), 200) result = list(indexed_cf.get_indexed_slices(clause, buffer_size=200)) assert_equal(len(result), 200) result = list(indexed_cf.get_indexed_slices(clause, buffer_size=1000)) assert_equal(len(result), 200) def test_multiget_batching(self): key_prefix = "TestColumnFamily.test_multiget_batching" keys = [] expected = OrderedDict() for i in range(10): key = key_prefix + str(i) keys.append(key) expected[key] = {'col': 'val'} cf.insert(key, {'col': 'val'}) assert_equal(cf.multiget(keys, buffer_size=1), expected) assert_equal(cf.multiget(keys, buffer_size=2), expected) assert_equal(cf.multiget(keys, buffer_size=3), expected) assert_equal(cf.multiget(keys, buffer_size=9), expected) assert_equal(cf.multiget(keys, buffer_size=10), expected) assert_equal(cf.multiget(keys, buffer_size=11), expected) assert_equal(cf.multiget(keys, buffer_size=100), expected) def test_add(self): counter_cf.add('key', 'col') result = counter_cf.get('key') assert_equal(result['col'], 1) counter_cf.add('key', 'col') result = counter_cf.get('key') assert_equal(result['col'], 2) counter_cf.add('key', 'col2') result = counter_cf.get('key') assert_equal(result, {'col': 2, 'col2': 1}) def test_insert_counters(self): counter_cf.insert('counter_key', {'col1': 1}) result = counter_cf.get('counter_key') assert_equal(result['col1'], 1) counter_cf.insert('counter_key', {'col1': 1, 'col2': 1}) result = counter_cf.get('counter_key') assert_equal(result, {'col1': 2, 'col2': 1}) def test_remove(self): key = 'TestColumnFamily.test_remove' columns = {'1': 'val1', '2': 'val2'} cf.insert(key, columns) # An empty list for columns shouldn't delete anything cf.remove(key, columns=[]) assert_equal(cf.get(key), columns) cf.remove(key, columns=['2']) del columns['2'] assert_equal(cf.get(key), {'1': 'val1'}) cf.remove(key) assert_raises(NotFoundException, cf.get, key) def test_remove_counter(self): key = 'test_remove_counter' counter_cf.add(key, 'col') result = counter_cf.get(key) assert_equal(result['col'], 1) counter_cf.remove_counter(key, 'col') assert_raises(NotFoundException, cf.get, key) def test_dict_class(self): key = 'TestColumnFamily.test_dict_class' cf.insert(key, {'1': 'val1'}) assert isinstance(cf.get(key), TestDict) def test_xget(self): key = "test_xget_batching" cf.insert(key, dict((str(i), str(i)) for i in range(100, 300))) combos = [(100, 10), (100, 1000), (100, 199), (100, 200), (100, 201), (100, 7), (100, 2)] for count, bufsz in combos: res = list(cf.xget(key, column_count=count, buffer_size=bufsz)) assert_equal(len(res), count) assert_equal(res, [(str(i), str(i)) for i in range(100, 200)]) combos = [(10000, 2), (10000, 7), (10000, 199), (10000, 200), (10000, 201), (10000, 10000)] for count, bufsz in combos: res = list(cf.xget(key, column_count=count, buffer_size=bufsz)) assert_equal(len(res), 200) assert_equal(res, [(str(i), str(i)) for i in range(100, 300)]) for bufsz in [2, 77, 199, 200, 201, 10000]: res = list(cf.xget(key, column_count=None, buffer_size=bufsz)) assert_equal(len(res), 200) assert_equal(res, [(str(i), str(i)) for i in range(100, 300)]) def test_xget_counter(self): key = 'test_xget_counter' counter_cf.insert(key, {'col1': 1}) res = list(counter_cf.xget(key)) assert_equal(res, [('col1', 1)]) counter_cf.insert(key, {'col1': 1, 'col2': 1}) res = list(counter_cf.xget(key)) assert_equal(res, [('col1', 2), ('col2', 1)]) class TestSuperColumnFamily(unittest.TestCase): def tearDown(self): for key, columns in scf.get_range(): scf.remove(key) def test_empty(self): key = 'TestSuperColumnFamily.test_empty' assert_raises(NotFoundException, cf.get, key) assert_equal(len(cf.multiget([key])), 0) for key, columns in cf.get_range(): assert_equal(len(columns), 0) def test_get_whole_row(self): key = 'TestSuperColumnFamily.test_get_whole_row' columns = {'1': {'sub1': 'val1', 'sub2': 'val2'}, '2': {'sub3': 'val3', 'sub4': 'val4'}} scf.insert(key, columns) assert_equal(scf.get(key), columns) def test_get_super_column(self): key = 'TestSuperColumnFamily.test_get_super_column' subcolumns = {'sub1': 'val1', 'sub2': 'val2', 'sub3': 'val3'} columns = {'1': subcolumns} scf.insert(key, columns) assert_equal(scf.get(key), columns) assert_equal(scf.get(key, super_column='1'), subcolumns) assert_equal(scf.get(key, super_column='1', columns=['sub1']), {'sub1': 'val1'}) assert_equal(scf.get(key, super_column='1', column_start='sub3'), {'sub3': 'val3'}) assert_equal(scf.get(key, super_column='1', column_finish='sub1'), {'sub1': 'val1'}) assert_equal(scf.get(key, super_column='1', column_count=1), {'sub1': 'val1'}) assert_equal(scf.get(key, super_column='1', column_count=1, column_reversed=True), {'sub3': 'val3'}) def test_get_super_columns(self): key = 'TestSuperColumnFamily.test_get_super_columns' super1 = {'sub1': 'val1', 'sub2': 'val2'} super2 = {'sub3': 'val3', 'sub4': 'val4'} super3 = {'sub5': 'val5', 'sub6': 'val6'} columns = {'1': super1, '2': super2, '3': super3} scf.insert(key, columns) assert_equal(scf.get(key), columns) assert_equal(scf.get(key, columns=['1']), {'1': super1}) assert_equal(scf.get(key, column_start='3'), {'3': super3}) assert_equal(scf.get(key, column_finish='1'), {'1': super1}) assert_equal(scf.get(key, column_count=1), {'1': super1}) assert_equal(scf.get(key, column_count=1, column_reversed=True), {'3': super3}) def test_multiget_supercolumn(self): key1 = 'TestSuerColumnFamily.test_multiget_supercolumn1' key2 = 'TestSuerColumnFamily.test_multiget_supercolumn2' keys = [key1, key2] subcolumns = {'sub1': 'val1', 'sub2': 'val2', 'sub3': 'val3'} columns = {'1': subcolumns} scf.insert(key1, columns) scf.insert(key2, columns) assert_equal(scf.multiget(keys), {key1: columns, key2: columns}) assert_equal(scf.multiget(keys, super_column='1'), {key1: subcolumns, key2: subcolumns}) assert_equal(scf.multiget(keys, super_column='1', columns=['sub1']), {key1: {'sub1': 'val1'}, key2: {'sub1': 'val1'}}) assert_equal(scf.multiget(keys, super_column='1', column_start='sub3'), {key1: {'sub3': 'val3'}, key2: {'sub3': 'val3'}}) assert_equal(scf.multiget(keys, super_column='1', column_finish='sub1'), {key1: {'sub1': 'val1'}, key2: {'sub1': 'val1'}}) assert_equal(scf.multiget(keys, super_column='1', column_count=1), {key1: {'sub1': 'val1'}, key2: {'sub1': 'val1'}}) assert_equal(scf.multiget(keys, super_column='1', column_count=1, column_reversed=True), {key1: {'sub3': 'val3'}, key2: {'sub3': 'val3'}}) def test_multiget_supercolumns(self): key1 = 'TestSuerColumnFamily.test_multiget_supercolumns1' key2 = 'TestSuerColumnFamily.test_multiget_supercolumns2' keys = [key1, key2] super1 = {'sub1': 'val1', 'sub2': 'val2'} super2 = {'sub3': 'val3', 'sub4': 'val4'} super3 = {'sub5': 'val5', 'sub6': 'val6'} columns = {'1': super1, '2': super2, '3': super3} scf.insert(key1, columns) scf.insert(key2, columns) assert_equal(scf.multiget(keys), {key1: columns, key2: columns}) assert_equal(scf.multiget(keys, columns=['1']), {key1: {'1': super1}, key2: {'1': super1}}) assert_equal(scf.multiget(keys, column_start='3'), {key1: {'3': super3}, key2: {'3': super3}}) assert_equal(scf.multiget(keys, column_finish='1'), {key1: {'1': super1}, key2: {'1': super1}}) assert_equal(scf.multiget(keys, column_count=1), {key1: {'1': super1}, key2: {'1': super1}}) assert_equal(scf.multiget(keys, column_count=1, column_reversed=True), {key1: {'3': super3}, key2: {'3': super3}}) def test_get_range_super_column(self): key = 'TestSuperColumnFamily.test_get_range_super_column' subcolumns = {'sub1': 'val1', 'sub2': 'val2', 'sub3': 'val3'} columns = {'1': subcolumns} scf.insert(key, columns) assert_equal(list(scf.get_range(start=key, finish=key, super_column='1')), [(key, subcolumns)]) assert_equal(list(scf.get_range(start=key, finish=key, super_column='1', columns=['sub1'])), [(key, {'sub1': 'val1'})]) assert_equal(list(scf.get_range(start=key, finish=key, super_column='1', column_start='sub3')), [(key, {'sub3': 'val3'})]) assert_equal(list(scf.get_range(start=key, finish=key, super_column='1', column_finish='sub1')), [(key, {'sub1': 'val1'})]) assert_equal(list(scf.get_range(start=key, finish=key, super_column='1', column_count=1)), [(key, {'sub1': 'val1'})]) assert_equal(list(scf.get_range(start=key, finish=key, super_column='1', column_count=1, column_reversed=True)), [(key, {'sub3': 'val3'})]) def test_get_range_super_columns(self): key = 'TestSuperColumnFamily.test_get_range_super_columns' super1 = {'sub1': 'val1', 'sub2': 'val2'} super2 = {'sub3': 'val3', 'sub4': 'val4'} super3 = {'sub5': 'val5', 'sub6': 'val6'} columns = {'1': super1, '2': super2, '3': super3} scf.insert(key, columns) assert_equal(list(scf.get_range(start=key, finish=key, columns=['1'])), [(key, {'1': super1})]) assert_equal(list(scf.get_range(start=key, finish=key, column_start='3')), [(key, {'3': super3})]) assert_equal(list(scf.get_range(start=key, finish=key, column_finish='1')), [(key, {'1': super1})]) assert_equal(list(scf.get_range(start=key, finish=key, column_count=1)), [(key, {'1': super1})]) assert_equal(list(scf.get_range(start=key, finish=key, column_count=1, column_reversed=True)), [(key, {'3': super3})]) def test_get_count_super_column(self): key = 'TestSuperColumnFamily.test_get_count_super_column' subcolumns = {'sub1': 'val1', 'sub2': 'val2', 'sub3': 'val3'} columns = {'1': subcolumns} scf.insert(key, columns) assert_equal(scf.get_count(key, super_column='1'), 3) assert_equal(scf.get_count(key, super_column='1', columns=['sub1']), 1) assert_equal(scf.get_count(key, super_column='1', column_start='sub3'), 1) assert_equal(scf.get_count(key, super_column='1', column_finish='sub1'), 1) def test_get_count_super_columns(self): key = 'TestSuperColumnFamily.test_get_count_super_columns' columns = {'1': {'sub1': 'val1'}, '2': {'sub2': 'val2'}, '3': {'sub3': 'val3'}} scf.insert(key, columns) assert_equal(scf.get_count(key), 3) assert_equal(scf.get_count(key, columns=['1']), 1) assert_equal(scf.get_count(key, column_start='3'), 1) assert_equal(scf.get_count(key, column_finish='1'), 1) def test_multiget_count_super_column(self): key1 = 'TestSuperColumnFamily.test_multiget_count_super_column1' key2 = 'TestSuperColumnFamily.test_multiget_count_super_column2' keys = [key1, key2] subcolumns = {'sub1': 'val1', 'sub2': 'val2', 'sub3': 'val3'} columns = {'1': subcolumns} scf.insert(key1, columns) scf.insert(key2, columns) assert_equal(scf.multiget_count(keys, super_column='1'), {key1: 3, key2: 3}) assert_equal(scf.multiget_count(keys, super_column='1', columns=['sub1']), {key1: 1, key2: 1}) assert_equal(scf.multiget_count(keys, super_column='1', column_start='sub3'), {key1: 1, key2: 1}) assert_equal(scf.multiget_count(keys, super_column='1', column_finish='sub1'), {key1: 1, key2: 1}) def test_multiget_count_super_columns(self): key1 = 'TestSuperColumnFamily.test_multiget_count_super_columns1' key2 = 'TestSuperColumnFamily.test_multiget_count_super_columns2' keys = [key1, key2] columns = {'1': {'sub1': 'val1'}, '2': {'sub2': 'val2'}, '3': {'sub3': 'val3'}} scf.insert(key1, columns) scf.insert(key2, columns) assert_equal(scf.multiget_count(keys), {key1: 3, key2: 3}) assert_equal(scf.multiget_count(keys, columns=['1']), {key1: 1, key2: 1}) assert_equal(scf.multiget_count(keys, column_start='3'), {key1: 1, key2: 1}) assert_equal(scf.multiget_count(keys, column_finish='1'), {key1: 1, key2: 1}) def test_batch_insert(self): key1 = 'TestSuperColumnFamily.test_batch_insert1' key2 = 'TestSuperColumnFamily.test_batch_insert2' columns = {'1': {'sub1': 'val1'}, '2': {'sub2': 'val2', 'sub3': 'val3'}} scf.batch_insert({key1: columns, key2: columns}) assert_equal(scf.get(key1), columns) assert_equal(scf.get(key2), columns) def test_add(self): counter_scf.add('key', 'col', super_column='scol') result = counter_scf.get('key', super_column='scol') assert_equal(result['col'], 1) counter_scf.add('key', 'col', super_column='scol') result = counter_scf.get('key', super_column='scol') assert_equal(result['col'], 2) counter_scf.add('key', 'col2', super_column='scol') result = counter_scf.get('key', super_column='scol') assert_equal(result, {'col': 2, 'col2': 1}) def test_remove(self): key = 'TestSuperColumnFamily.test_remove' columns = {'1': {'sub1': 'val1'}, '2': {'sub2': 'val2', 'sub3': 'val3'}, '3': {'sub4': 'val4'}} scf.insert(key, columns) assert_equal(scf.get_count(key), 3) scf.remove(key, super_column='1') assert_equal(scf.get_count(key), 2) scf.remove(key, columns=['3']) assert_equal(scf.get_count(key), 1) assert_equal(scf.get_count(key, super_column='2'), 2) scf.remove(key, super_column='2', columns=['sub2']) assert_equal(scf.get_count(key, super_column='2'), 1) def test_remove_counter(self): key = 'test_remove_counter' counter_scf.add(key, 'col', super_column='scol') result = counter_scf.get(key, super_column='scol') assert_equal(result['col'], 1) counter_scf.remove_counter(key, 'col', super_column='scol') assert_raises(NotFoundException, scf.get, key) def test_xget_counter(self): key = 'test_xget_counter' counter_scf.insert(key, {'scol': {'col1': 1}}) res = list(counter_scf.xget(key)) assert_equal(res, [('scol', {'col1': 1})]) counter_scf.insert(key, {'scol': {'col1': 1, 'col2': 1}}) res = list(counter_scf.xget(key)) assert_equal(res, [('scol', {'col1': 2, 'col2': 1})]) pycassa-1.11.2.1/tests/test_columnfamilymap.py000066400000000000000000000223251303744607500213630ustar00rootroot00000000000000from datetime import datetime import unittest import uuid from nose.tools import assert_raises, assert_equal, assert_true from nose.plugins.skip import SkipTest import pycassa.types as types from pycassa import index, ColumnFamily, ConnectionPool, \ ColumnFamilyMap, NotFoundException, SystemManager from tests.util import requireOPP CF = 'Standard1' SCF = 'Super1' INDEXED_CF = 'Indexed1' pool = None sys_man = None def setup_module(): global pool, sys_man credentials = {'username': 'jsmith', 'password': 'havebadpass'} pool = ConnectionPool(keyspace='PycassaTestKeyspace', credentials=credentials, timeout=1.0) sys_man = SystemManager() def teardown_module(): pool.dispose() class TestUTF8(object): key = types.LexicalUUIDType() strcol = types.AsciiType(default='default') intcol = types.LongType(default=0) floatcol = types.FloatType(default=0.0) datetimecol = types.DateType() def __str__(self): return str(map(str, [self.strcol, self.intcol, self.floatcol, self.datetimecol])) def __eq__(self, other): return self.__dict__ == other.__dict__ def __ne__(self, other): return self.__dict__ != other.__dict__ class TestIndex(object): birthdate = types.LongType(default=0) def __eq__(self, other): return self.__dict__ == other.__dict__ def __ne__(self, other): return self.__dict__ != other.__dict__ class TestEmpty(object): pass class TestColumnFamilyMap(unittest.TestCase): def setUp(self): self.sys_man = sys_man self.map = ColumnFamilyMap(TestUTF8, pool, CF) self.indexed_map = ColumnFamilyMap(TestIndex, pool, INDEXED_CF) self.empty_map = ColumnFamilyMap(TestEmpty, pool, CF, raw_columns=True) def tearDown(self): for instance in self.map.get_range(): self.map.remove(instance) for instance in self.indexed_map.get_range(): self.indexed_map.remove(instance) def instance(self): instance = TestUTF8() instance.key = uuid.uuid4() instance.strcol = '1' instance.intcol = 2 instance.floatcol = 3.5 instance.datetimecol = datetime.now().replace(microsecond=0) return instance def test_empty(self): key = uuid.uuid4() assert_raises(NotFoundException, self.map.get, key) assert_equal(len(self.map.multiget([key])), 0) def test_insert_get(self): instance = self.instance() assert_raises(NotFoundException, self.map.get, instance.key) ts = self.map.insert(instance) assert_true(isinstance(ts, (int, long))) assert_equal(self.map.get(instance.key), instance) def test_insert_get_omitting_columns(self): """ When omitting columns, pycassa should not try to insert the CassandraType instance on a ColumnFamilyMap object """ instance2 = TestUTF8() instance2.key = uuid.uuid4() instance2.strcol = 'lol' instance2.intcol = 2 assert_raises(NotFoundException, self.map.get, instance2.key) self.map.insert(instance2) ret_inst = self.map.get(instance2.key) assert_equal(ret_inst.key, instance2.key) assert_equal(ret_inst.strcol, instance2.strcol) assert_equal(ret_inst.intcol, instance2.intcol) ## these lines are commented out because, though they should work, wont ## because CassandraTypes are not descriptors when used on a ColumnFamilyMap ## instance, they are merely class attributes that are overwritten at runtime # assert_equal(ret_inst.floatcol, instance2.floatcol) # assert_equal(ret_inst.datetimecol, instance2.datetimecol) # assert_equal(self.map.get(instance2.key), instance2) def test_insert_get_indexed_slices(self): instance1 = TestIndex() instance1.key = 'key1' instance1.birthdate = 1L self.indexed_map.insert(instance1) instance2 = TestIndex() instance2.key = 'key2' instance2.birthdate = 1L self.indexed_map.insert(instance2) instance3 = TestIndex() instance3.key = 'key3' instance3.birthdate = 2L self.indexed_map.insert(instance3) expr = index.create_index_expression(column_name='birthdate', value=2L) clause = index.create_index_clause([expr]) result = self.indexed_map.get_indexed_slices(index_clause=clause) count = 0 for instance in result: assert_equal(instance, instance3) count += 1 assert_equal(count, 1) def test_insert_multiget(self): instance1 = self.instance() instance2 = self.instance() missing_key = uuid.uuid4() self.map.insert(instance1) self.map.insert(instance2) rows = self.map.multiget([instance1.key, instance2.key, missing_key]) assert_equal(len(rows), 2) assert_equal(rows[instance1.key], instance1) assert_equal(rows[instance2.key], instance2) assert_true(missing_key not in rows) @requireOPP def test_insert_get_range(self): instances = [self.instance() for i in range(5)] instances = sorted(instances, key=lambda instance: instance.key) for instance in instances: self.map.insert(instance) rows = list(self.map.get_range(start=instances[0].key, finish=instances[-1].key)) assert_equal(len(rows), len(instances)) assert_equal(rows, instances) def test_remove(self): instance = self.instance() self.map.insert(instance) self.map.remove(instance) assert_raises(NotFoundException, self.map.get, instance.key) def test_does_not_insert_extra_column(self): instance = self.instance() instance.othercol = 'Test' self.map.insert(instance) get_instance = self.map.get(instance.key) assert_equal(get_instance.strcol, instance.strcol) assert_equal(get_instance.intcol, instance.intcol) assert_equal(get_instance.floatcol, instance.floatcol) assert_equal(get_instance.datetimecol, instance.datetimecol) assert_raises(AttributeError, getattr, get_instance, 'othercol') def test_has_defaults(self): key = uuid.uuid4() ColumnFamily.insert(self.map, key, {'strcol': '1'}) instance = self.map.get(key) assert_equal(instance.intcol, TestUTF8.intcol.default) assert_equal(instance.floatcol, TestUTF8.floatcol.default) assert_equal(instance.datetimecol, TestUTF8.datetimecol.default) def test_batch_insert(self): instances = [] for i in range(3): instance = TestUTF8() instance.key = uuid.uuid4() instance.strcol = 'instance%s' % (i + 1) instances.append(instance) for i in instances: assert_raises(NotFoundException, self.map.get, i.key) self.map.batch_insert(instances) for i in instances: get_instance = self.map.get(i.key) assert_equal(get_instance.key, i.key) assert_equal(get_instance.strcol, i.strcol) class TestSuperColumnFamilyMap(unittest.TestCase): def setUp(self): self.map = ColumnFamilyMap(TestUTF8, pool, SCF) def tearDown(self): for scols in self.map.get_range(): for instance in scols.values(): self.map.remove(instance) def instance(self, super_column): instance = TestUTF8() instance.key = uuid.uuid4() instance.super_column = super_column instance.strcol = '1' instance.intcol = 2 instance.floatcol = 3.5 instance.datetimecol = datetime.now().replace(microsecond=0) return instance def test_super(self): instance = self.instance('super1') assert_raises(NotFoundException, self.map.get, instance.key) self.map.insert(instance) res = self.map.get(instance.key)[instance.super_column] assert_equal(res, instance) assert_equal(self.map.multiget([instance.key])[instance.key][instance.super_column], instance) assert_equal(list(self.map.get_range(start=instance.key, finish=instance.key)), [{instance.super_column: instance}]) def test_super_remove(self): instance1 = self.instance('super1') assert_raises(NotFoundException, self.map.get, instance1.key) self.map.insert(instance1) instance2 = self.instance('super2') self.map.insert(instance2) self.map.remove(instance2) assert_equal(len(self.map.get(instance1.key)), 1) assert_equal(self.map.get(instance1.key)[instance1.super_column], instance1) def test_batch_insert_super(self): instances = [] for i in range(3): instance = self.instance('super_batch%s' % (i + 1)) instances.append(instance) for i in instances: assert_raises(NotFoundException, self.map.get, i.key) self.map.batch_insert(instances) for i in instances: result = self.map.get(i.key) get_instance = result[i.super_column] assert_equal(len(result), 1) assert_equal(get_instance.key, i.key) assert_equal(get_instance.super_column, i.super_column) assert_equal(get_instance.strcol, i.strcol) pycassa-1.11.2.1/tests/test_connection_pooling.py000066400000000000000000000514651303744607500220630ustar00rootroot00000000000000import threading import unittest import time from nose.tools import assert_raises, assert_equal, assert_true from pycassa import ColumnFamily, ConnectionPool, InvalidRequestError,\ NoConnectionAvailable, MaximumRetryException, AllServersUnavailable from pycassa.logging.pool_stats_logger import StatsLogger from pycassa.cassandra.ttypes import ColumnPath from pycassa.cassandra.ttypes import InvalidRequestException from pycassa.cassandra.ttypes import NotFoundException _credentials = {'username': 'jsmith', 'password': 'havebadpass'} def _get_list(): return ['foo:bar'] class PoolingCase(unittest.TestCase): def tearDown(self): pool = ConnectionPool('PycassaTestKeyspace') cf = ColumnFamily(pool, 'Standard1') for key, cols in cf.get_range(): cf.remove(key) def test_basic_pools(self): pool = ConnectionPool('PycassaTestKeyspace', credentials=_credentials) cf = ColumnFamily(pool, 'Standard1') cf.insert('key1', {'col': 'val'}) pool.dispose() def test_empty_list(self): assert_raises(AllServersUnavailable, ConnectionPool, 'PycassaTestKeyspace', server_list=[]) def test_server_list_func(self): stats_logger = StatsLoggerWithListStorage() pool = ConnectionPool('PycassaTestKeyspace', server_list=_get_list, listeners=[stats_logger], prefill=False) assert_equal(stats_logger.serv_list, ['foo:bar']) assert_equal(stats_logger.stats['list'], 1) pool.dispose() def test_queue_pool(self): stats_logger = StatsLoggerWithListStorage() pool = ConnectionPool(pool_size=5, max_overflow=5, recycle=10000, prefill=True, pool_timeout=0.1, timeout=1, keyspace='PycassaTestKeyspace', credentials=_credentials, listeners=[stats_logger], use_threadlocal=False) conns = [] for i in range(10): conns.append(pool.get()) assert_equal(stats_logger.stats['created']['success'], 10) assert_equal(stats_logger.stats['checked_out'], 10) # Pool is maxed out now assert_raises(NoConnectionAvailable, pool.get) assert_equal(stats_logger.stats['created']['success'], 10) assert_equal(stats_logger.stats['at_max'], 1) for i in range(0, 5): pool.return_conn(conns[i]) assert_equal(stats_logger.stats['disposed']['success'], 0) assert_equal(stats_logger.stats['checked_in'], 5) for i in range(5, 10): pool.return_conn(conns[i]) assert_equal(stats_logger.stats['disposed']['success'], 5) assert_equal(stats_logger.stats['checked_in'], 10) conns = [] # These connections should come from the pool for i in range(5): conns.append(pool.get()) assert_equal(stats_logger.stats['created']['success'], 10) assert_equal(stats_logger.stats['checked_out'], 15) # But these will need to be made for i in range(5): conns.append(pool.get()) assert_equal(stats_logger.stats['created']['success'], 15) assert_equal(stats_logger.stats['checked_out'], 20) assert_equal(stats_logger.stats['disposed']['success'], 5) for i in range(10): conns[i].return_to_pool() assert_equal(stats_logger.stats['checked_in'], 20) assert_equal(stats_logger.stats['disposed']['success'], 10) assert_raises(InvalidRequestError, conns[0].return_to_pool) assert_equal(stats_logger.stats['checked_in'], 20) assert_equal(stats_logger.stats['disposed']['success'], 10) print "in test:", id(conns[-1]) conns[-1].return_to_pool() assert_equal(stats_logger.stats['checked_in'], 20) assert_equal(stats_logger.stats['disposed']['success'], 10) pool.dispose() def test_queue_pool_threadlocal(self): stats_logger = StatsLoggerWithListStorage() pool = ConnectionPool(pool_size=5, max_overflow=5, recycle=10000, prefill=True, pool_timeout=0.01, timeout=1, keyspace='PycassaTestKeyspace', credentials=_credentials, listeners=[stats_logger], use_threadlocal=True) conns = [] assert_equal(stats_logger.stats['created']['success'], 5) # These connections should all be the same for i in range(10): conns.append(pool.get()) assert_equal(stats_logger.stats['created']['success'], 5) assert_equal(stats_logger.stats['checked_out'], 1) for i in range(0, 5): pool.return_conn(conns[i]) assert_equal(stats_logger.stats['checked_in'], 1) for i in range(5, 10): pool.return_conn(conns[i]) assert_equal(stats_logger.stats['checked_in'], 1) conns = [] assert_equal(stats_logger.stats['created']['success'], 5) # A single connection should come from the pool for i in range(5): conns.append(pool.get()) assert_equal(stats_logger.stats['created']['success'], 5) assert_equal(stats_logger.stats['checked_out'], 2) for conn in conns: pool.return_conn(conn) conns = [] threads = [] stats_logger.reset() def checkout_return(): conn = pool.get() time.sleep(1) pool.return_conn(conn) for i in range(5): threads.append(threading.Thread(target=checkout_return)) threads[-1].start() for thread in threads: thread.join() assert_equal(stats_logger.stats['created']['success'], 0) # Still 5 connections in pool assert_equal(stats_logger.stats['checked_out'], 5) assert_equal(stats_logger.stats['checked_in'], 5) # These should come from the pool threads = [] for i in range(5): threads.append(threading.Thread(target=checkout_return)) threads[-1].start() for thread in threads: thread.join() assert_equal(stats_logger.stats['created']['success'], 0) assert_equal(stats_logger.stats['checked_out'], 10) assert_equal(stats_logger.stats['checked_in'], 10) pool.dispose() def test_queue_pool_no_prefill(self): stats_logger = StatsLoggerWithListStorage() pool = ConnectionPool(pool_size=5, max_overflow=5, recycle=10000, prefill=False, pool_timeout=0.1, timeout=1, keyspace='PycassaTestKeyspace', credentials=_credentials, listeners=[stats_logger], use_threadlocal=False) conns = [] for i in range(10): conns.append(pool.get()) assert_equal(stats_logger.stats['created']['success'], i + 1) assert_equal(stats_logger.stats['checked_out'], i + 1) # Pool is maxed out now assert_raises(NoConnectionAvailable, pool.get) assert_equal(stats_logger.stats['created']['success'], 10) assert_equal(stats_logger.stats['at_max'], 1) for i in range(0, 5): pool.return_conn(conns[i]) assert_equal(stats_logger.stats['checked_in'], i + 1) assert_equal(stats_logger.stats['disposed']['success'], 0) for i in range(5, 10): pool.return_conn(conns[i]) assert_equal(stats_logger.stats['checked_in'], i + 1) assert_equal(stats_logger.stats['disposed']['success'], (i - 5) + 1) conns = [] # These connections should come from the pool for i in range(5): conns.append(pool.get()) assert_equal(stats_logger.stats['created']['success'], 10) assert_equal(stats_logger.stats['checked_out'], (i + 10) + 1) # But these will need to be made for i in range(5): conns.append(pool.get()) assert_equal(stats_logger.stats['created']['success'], (i + 10) + 1) assert_equal(stats_logger.stats['checked_out'], (i + 15) + 1) assert_equal(stats_logger.stats['disposed']['success'], 5) for i in range(10): conns[i].return_to_pool() assert_equal(stats_logger.stats['checked_in'], (i + 10) + 1) assert_equal(stats_logger.stats['disposed']['success'], 10) # Make sure a double return doesn't change our counts assert_raises(InvalidRequestError, conns[0].return_to_pool) assert_equal(stats_logger.stats['checked_in'], 20) assert_equal(stats_logger.stats['disposed']['success'], 10) conns[-1].return_to_pool() assert_equal(stats_logger.stats['checked_in'], 20) assert_equal(stats_logger.stats['disposed']['success'], 10) pool.dispose() def test_queue_pool_recycle(self): stats_logger = StatsLoggerWithListStorage() pool = ConnectionPool(pool_size=5, max_overflow=5, recycle=1, prefill=True, pool_timeout=0.5, timeout=1, keyspace='PycassaTestKeyspace', credentials=_credentials, listeners=[stats_logger], use_threadlocal=False) cf = ColumnFamily(pool, 'Standard1') columns = {'col1': 'val', 'col2': 'val'} for i in range(10): cf.insert('key', columns) assert_equal(stats_logger.stats['recycled'], 5) pool.dispose() stats_logger.reset() # Try with threadlocal=True pool = ConnectionPool(pool_size=5, max_overflow=5, recycle=1, prefill=False, pool_timeout=0.5, timeout=1, keyspace='PycassaTestKeyspace', credentials=_credentials, listeners=[stats_logger], use_threadlocal=True) cf = ColumnFamily(pool, 'Standard1') for i in range(10): cf.insert('key', columns) pool.dispose() assert_equal(stats_logger.stats['recycled'], 5) def test_pool_connection_failure(self): stats_logger = StatsLoggerWithListStorage() def get_extra(): """Make failure count adjustments based on whether or not the permuted list starts with a good host:port""" if stats_logger.serv_list[0] == 'localhost:9160': return 0 else: return 1 pool = ConnectionPool(pool_size=5, max_overflow=5, recycle=10000, prefill=True, keyspace='PycassaTestKeyspace', credentials=_credentials, pool_timeout=0.01, timeout=0.05, listeners=[stats_logger], use_threadlocal=False, server_list=['localhost:9160', 'foobar:1']) assert_equal(stats_logger.stats['failed'], 4 + get_extra()) for i in range(0, 7): pool.get() assert_equal(stats_logger.stats['failed'], 6 + get_extra()) pool.dispose() stats_logger.reset() pool = ConnectionPool(pool_size=5, max_overflow=5, recycle=10000, prefill=True, keyspace='PycassaTestKeyspace', credentials=_credentials, pool_timeout=0.01, timeout=0.05, listeners=[stats_logger], use_threadlocal=True, server_list=['localhost:9160', 'foobar:1']) assert_equal(stats_logger.stats['failed'], 4 + get_extra()) threads = [] for i in range(0, 7): threads.append(threading.Thread(target=pool.get)) threads[-1].start() for thread in threads: thread.join() assert_equal(stats_logger.stats['failed'], 6 + get_extra()) pool.dispose() def test_queue_failover(self): for prefill in (True, False): stats_logger = StatsLoggerWithListStorage() pool = ConnectionPool(pool_size=1, max_overflow=0, recycle=10000, prefill=prefill, timeout=1, keyspace='PycassaTestKeyspace', credentials=_credentials, listeners=[stats_logger], use_threadlocal=False, server_list=['localhost:9160', 'localhost:9160']) cf = ColumnFamily(pool, 'Standard1') for i in range(1, 5): conn = pool.get() setattr(conn, 'send_batch_mutate', conn._fail_once) conn._should_fail = True conn.return_to_pool() # The first insert attempt should fail, but failover should occur # and the insert should succeed cf.insert('key', {'col': 'val%d' % i, 'col2': 'val'}) assert_equal(stats_logger.stats['failed'], i) assert_equal(cf.get('key'), {'col': 'val%d' % i, 'col2': 'val'}) pool.dispose() def test_queue_threadlocal_failover(self): stats_logger = StatsLoggerWithListStorage() pool = ConnectionPool(pool_size=1, max_overflow=0, recycle=10000, prefill=True, timeout=0.05, keyspace='PycassaTestKeyspace', credentials=_credentials, listeners=[stats_logger], use_threadlocal=True, server_list=['localhost:9160', 'localhost:9160']) cf = ColumnFamily(pool, 'Standard1') for i in range(1, 5): conn = pool.get() setattr(conn, 'send_batch_mutate', conn._fail_once) conn._should_fail = True conn.return_to_pool() # The first insert attempt should fail, but failover should occur # and the insert should succeed cf.insert('key', {'col': 'val%d' % i, 'col2': 'val'}) assert_equal(stats_logger.stats['failed'], i) assert_equal(cf.get('key'), {'col': 'val%d' % i, 'col2': 'val'}) pool.dispose() stats_logger.reset() pool = ConnectionPool(pool_size=5, max_overflow=5, recycle=10000, prefill=True, timeout=0.05, keyspace='PycassaTestKeyspace', credentials=_credentials, listeners=[stats_logger], use_threadlocal=True, server_list=['localhost:9160', 'localhost:9160']) cf = ColumnFamily(pool, 'Standard1') for i in range(5): conn = pool.get() setattr(conn, 'send_batch_mutate', conn._fail_once) conn._should_fail = True conn.return_to_pool() threads = [] args = ('key', {'col': 'val', 'col2': 'val'}) for i in range(5): threads.append(threading.Thread(target=cf.insert, args=args)) threads[-1].start() for thread in threads: thread.join() assert_equal(stats_logger.stats['failed'], 5) pool.dispose() def test_queue_retry_limit(self): for prefill in (True, False): stats_logger = StatsLoggerWithListStorage() pool = ConnectionPool(pool_size=5, max_overflow=5, recycle=10000, prefill=prefill, max_retries=3, # allow 3 retries keyspace='PycassaTestKeyspace', credentials=_credentials, listeners=[stats_logger], use_threadlocal=False, server_list=['localhost:9160', 'localhost:9160']) # Corrupt all of the connections for i in range(5): conn = pool.get() setattr(conn, 'send_batch_mutate', conn._fail_once) conn._should_fail = True conn.return_to_pool() cf = ColumnFamily(pool, 'Standard1') assert_raises(MaximumRetryException, cf.insert, 'key', {'col': 'val', 'col2': 'val'}) assert_equal(stats_logger.stats['failed'], 4) # On the 4th failure, didn't retry pool.dispose() def test_queue_failure_on_retry(self): stats_logger = StatsLoggerWithListStorage() pool = ConnectionPool(pool_size=5, max_overflow=5, recycle=10000, prefill=True, max_retries=3, # allow 3 retries keyspace='PycassaTestKeyspace', credentials=_credentials, listeners=[stats_logger], use_threadlocal=False, server_list=['localhost:9160', 'localhost:9160']) def raiser(): raise IOError # Replace wrapper will open a connection to get the version, so if it # fails we need to retry as with any other connection failure pool._replace_wrapper = raiser # Corrupt all of the connections for i in range(5): conn = pool.get() setattr(conn, 'send_batch_mutate', conn._fail_once) conn._should_fail = True conn.return_to_pool() cf = ColumnFamily(pool, 'Standard1') assert_raises(MaximumRetryException, cf.insert, 'key', {'col': 'val', 'col2': 'val'}) assert_equal(stats_logger.stats['failed'], 4) # On the 4th failure, didn't retry pool.dispose() def test_queue_threadlocal_retry_limit(self): stats_logger = StatsLoggerWithListStorage() pool = ConnectionPool(pool_size=5, max_overflow=5, recycle=10000, prefill=True, max_retries=3, # allow 3 retries keyspace='PycassaTestKeyspace', credentials=_credentials, listeners=[stats_logger], use_threadlocal=True, server_list=['localhost:9160', 'localhost:9160']) # Corrupt all of the connections for i in range(5): conn = pool.get() setattr(conn, 'send_batch_mutate', conn._fail_once) conn._should_fail = True conn.return_to_pool() cf = ColumnFamily(pool, 'Standard1') assert_raises(MaximumRetryException, cf.insert, 'key', {'col': 'val', 'col2': 'val'}) assert_equal(stats_logger.stats['failed'], 4) # On the 4th failure, didn't retry pool.dispose() def test_queue_failure_with_no_retries(self): stats_logger = StatsLoggerWithListStorage() pool = ConnectionPool(pool_size=5, max_overflow=5, recycle=10000, prefill=True, max_retries=3, # allow 3 retries keyspace='PycassaTestKeyspace', credentials=_credentials, listeners=[stats_logger], use_threadlocal=False, server_list=['localhost:9160', 'localhost:9160']) # Corrupt all of the connections for i in range(5): conn = pool.get() setattr(conn, 'send_batch_mutate', conn._fail_once) conn._should_fail = True conn.return_to_pool() cf = ColumnFamily(pool, 'Counter1') assert_raises(MaximumRetryException, cf.insert, 'key', {'col': 2, 'col2': 2}) assert_equal(stats_logger.stats['failed'], 1) # didn't retry at all pool.dispose() def test_failure_connection_info(self): stats_logger = StatsLoggerRequestInfo() pool = ConnectionPool(pool_size=1, max_overflow=0, recycle=10000, prefill=True, max_retries=0, keyspace='PycassaTestKeyspace', credentials=_credentials, listeners=[stats_logger], use_threadlocal=True, server_list=['localhost:9160']) cf = ColumnFamily(pool, 'Counter1') # Corrupt the connection conn = pool.get() setattr(conn, 'send_get', conn._fail_once) conn._should_fail = True conn.return_to_pool() assert_raises(MaximumRetryException, cf.get, 'greunt', columns=['col']) assert_true('request' in stats_logger.failure_dict['connection'].info) request = stats_logger.failure_dict['connection'].info['request'] assert_equal(request['method'], 'get') assert_equal(request['args'], ('greunt', ColumnPath('Counter1', None, 'col'), 1)) assert_equal(request['kwargs'], {}) def test_pool_invalid_request(self): stats_logger = StatsLoggerWithListStorage() pool = ConnectionPool(pool_size=1, max_overflow=0, recycle=10000, prefill=True, max_retries=3, keyspace='PycassaTestKeyspace', credentials=_credentials, listeners=[stats_logger], use_threadlocal=False, server_list=['localhost:9160']) cf = ColumnFamily(pool, 'Standard1') # Make sure the pool doesn't hide and retries invalid requests assert_raises(InvalidRequestException, cf.add, 'key', 'col') assert_raises(NotFoundException, cf.get, 'none') pool.dispose() class StatsLoggerWithListStorage(StatsLogger): def obtained_server_list(self, dic): StatsLogger.obtained_server_list(self, dic) self.serv_list = dic.get('server_list') class StatsLoggerRequestInfo(StatsLogger): def connection_failed(self, dic): StatsLogger.connection_failed(self, dic) self.failure_dict = dic pycassa-1.11.2.1/tests/test_pool_logger.py000066400000000000000000000116001303744607500204700ustar00rootroot00000000000000from unittest import TestCase from nose.tools import assert_equal, assert_raises from pycassa.logging.pool_stats_logger import StatsLogger from pycassa.pool import ConnectionPool, NoConnectionAvailable, InvalidRequestError __author__ = 'gilles' _credentials = {'username': 'jsmith', 'password': 'havebadpass'} class TestStatsLogger(TestCase): def __init__(self, methodName='runTest'): super(TestStatsLogger, self).__init__(methodName) def setUp(self): super(TestStatsLogger, self).setUp() self.logger = StatsLogger() def test_empty(self): assert_equal(self.logger.stats, self.logger._stats) def test_connection_created(self): self.logger.connection_created({'level': 'info'}) self.logger.connection_created({'level': 'error'}) stats = self.logger.stats assert_equal(stats['created']['success'], 1) assert_equal(stats['created']['failure'], 1) def test_connection_checked(self): self.logger.connection_checked_out({}) self.logger.connection_checked_out({}) self.logger.connection_checked_in({}) stats = self.logger.stats assert_equal(stats['checked_out'], 2) assert_equal(stats['checked_in'], 1) assert_equal(stats['opened'], {'current': 1, 'max': 2}) def test_connection_disposed(self): self.logger.connection_disposed({'level': 'info'}) self.logger.connection_disposed({'level': 'error'}) stats = self.logger.stats assert_equal(stats['disposed']['success'], 1) assert_equal(stats['disposed']['failure'], 1) def test_connection_recycled(self): self.logger.connection_recycled({}) stats = self.logger.stats assert_equal(stats['recycled'], 1) def test_connection_failed(self): self.logger.connection_failed({}) stats = self.logger.stats assert_equal(stats['failed'], 1) def test_obtained_server_list(self): self.logger.obtained_server_list({}) stats = self.logger.stats assert_equal(stats['list'], 1) def test_pool_at_max(self): self.logger.pool_at_max({}) stats = self.logger.stats assert_equal(stats['at_max'], 1) class TestInPool(TestCase): def __init__(self, methodName='runTest'): super(TestInPool, self).__init__(methodName) def test_pool(self): listener = StatsLogger() pool = ConnectionPool(pool_size=5, max_overflow=5, recycle=10000, prefill=True, pool_timeout=0.1, timeout=1, keyspace='PycassaTestKeyspace', credentials=_credentials, listeners=[listener], use_threadlocal=False) conns = [] for i in range(10): conns.append(pool.get()) assert_equal(listener.stats['created']['success'], 10) assert_equal(listener.stats['created']['failure'], 0) assert_equal(listener.stats['checked_out'], 10) assert_equal(listener.stats['opened'], {'current': 10, 'max': 10}) # Pool is maxed out now assert_raises(NoConnectionAvailable, pool.get) assert_equal(listener.stats['created']['success'], 10) assert_equal(listener.stats['checked_out'], 10) assert_equal(listener.stats['opened'], {'current': 10, 'max': 10}) assert_equal(listener.stats['at_max'], 1) for i in range(0, 5): pool.return_conn(conns[i]) assert_equal(listener.stats['disposed']['success'], 0) assert_equal(listener.stats['checked_in'], 5) assert_equal(listener.stats['opened'], {'current': 5, 'max': 10}) for i in range(5, 10): pool.return_conn(conns[i]) assert_equal(listener.stats['disposed']['success'], 5) assert_equal(listener.stats['checked_in'], 10) conns = [] # These connections should come from the pool for i in range(5): conns.append(pool.get()) assert_equal(listener.stats['created']['success'], 10) assert_equal(listener.stats['checked_out'], 15) # But these will need to be made for i in range(5): conns.append(pool.get()) assert_equal(listener.stats['created']['success'], 15) assert_equal(listener.stats['checked_out'], 20) assert_equal(listener.stats['disposed']['success'], 5) for i in range(10): conns[i].return_to_pool() assert_equal(listener.stats['checked_in'], 20) assert_equal(listener.stats['disposed']['success'], 10) assert_raises(InvalidRequestError, conns[0].return_to_pool) assert_equal(listener.stats['checked_in'], 20) assert_equal(listener.stats['disposed']['success'], 10) print "in test:", id(conns[-1]) conns[-1].return_to_pool() assert_equal(listener.stats['checked_in'], 20) assert_equal(listener.stats['disposed']['success'], 10) pool.dispose() pycassa-1.11.2.1/tests/test_system_manager.py000066400000000000000000000135041303744607500212030ustar00rootroot00000000000000import unittest from nose import SkipTest from nose.tools import assert_equal, assert_raises from pycassa.pool import ConnectionPool from pycassa.columnfamily import ColumnFamily from pycassa.system_manager import (SIMPLE_STRATEGY, LONG_TYPE, SystemManager, UTF8_TYPE, TIME_UUID_TYPE, ASCII_TYPE, INT_TYPE) from pycassa.cassandra.ttypes import InvalidRequestException from pycassa.types import LongType TEST_KS = 'PycassaTestKeyspace' sys = None def setup_module(): global sys sys = SystemManager() def teardown_module(): sys.close() class SystemManagerTest(unittest.TestCase): def test_system_calls(self): # keyspace modifications try: sys.drop_keyspace('TestKeyspace') except InvalidRequestException: pass sys.create_keyspace('TestKeyspace', SIMPLE_STRATEGY, {'replication_factor': '3'}) sys.alter_keyspace('TestKeyspace', strategy_options={'replication_factor': '1'}) sys.create_column_family('TestKeyspace', 'TestCF') sys.alter_column_family('TestKeyspace', 'TestCF', comment='testing') sys.create_index('TestKeyspace', 'TestCF', 'column', LONG_TYPE) sys.drop_column_family('TestKeyspace', 'TestCF') sys.describe_ring('TestKeyspace') sys.describe_cluster_name() sys.describe_version() sys.describe_schema_versions() sys.list_keyspaces() sys.drop_keyspace('TestKeyspace') def test_bad_comparator(self): sys.create_keyspace('TestKeyspace', SIMPLE_STRATEGY, {'replication_factor': '3'}) for comparator in [LongType, 123]: assert_raises(TypeError, sys.create_column_family, 'TestKeyspace', 'TestBadCF', comparator_type=comparator) sys.drop_keyspace('TestKeyspace') def test_alter_column_non_bytes_type(self): sys.create_column_family(TEST_KS, 'LongCF', comparator_type=LONG_TYPE) sys.create_index(TEST_KS, 'LongCF', 3, LONG_TYPE) pool = ConnectionPool(TEST_KS) cf = ColumnFamily(pool, 'LongCF') cf.insert('key', {3: 3}) assert_equal(cf.get('key')[3], 3) sys.alter_column(TEST_KS, 'LongCF', 2, LONG_TYPE) cf = ColumnFamily(pool, 'LongCF') cf.insert('key', {2: 2}) assert_equal(cf.get('key')[2], 2) def test_alter_column_family_default_validation_class(self): sys.create_column_family(TEST_KS, 'AlteredCF', default_validation_class=LONG_TYPE) pool = ConnectionPool(TEST_KS) cf = ColumnFamily(pool, 'AlteredCF') assert_equal(cf.default_validation_class, "LongType") sys.alter_column_family(TEST_KS, 'AlteredCF', default_validation_class=UTF8_TYPE) cf = ColumnFamily(pool, 'AlteredCF') assert_equal(cf.default_validation_class, "UTF8Type") def test_alter_column_super_cf(self): sys.create_column_family(TEST_KS, 'SuperCF', super=True, comparator_type=TIME_UUID_TYPE, subcomparator_type=UTF8_TYPE) sys.alter_column(TEST_KS, 'SuperCF', 'foobar_col', UTF8_TYPE) def test_column_validators(self): validators = {'name': UTF8_TYPE, 'age': LONG_TYPE} sys.create_column_family(TEST_KS, 'ValidatedCF', column_validation_classes=validators) pool = ConnectionPool(TEST_KS) cf = ColumnFamily(pool, 'ValidatedCF') cf.insert('key', {'name': 'John', 'age': 40}) self.assertEquals(cf.get('key'), {'name': 'John', 'age': 40}) validators = {'name': ASCII_TYPE, 'age': INT_TYPE} sys.alter_column_family(TEST_KS, 'ValidatedCF', column_validation_classes=validators) cf.load_schema() self.assertEquals(cf.get('key'), {'name': 'John', 'age': 40}) def test_caching_pre_11(self): version = tuple( [int(v) for v in sys._conn.describe_version().split('.')]) if version >= (19, 30, 0): raise SkipTest('CF specific caching no longer supported.') sys.create_column_family(TEST_KS, 'CachedCF10', row_cache_size=100, key_cache_size=100, row_cache_save_period_in_seconds=3, key_cache_save_period_in_seconds=3) pool = ConnectionPool(TEST_KS) cf = ColumnFamily(pool, 'CachedCF10') assert_equal(cf._cfdef.row_cache_size, 100) assert_equal(cf._cfdef.key_cache_size, 100) assert_equal(cf._cfdef.row_cache_save_period_in_seconds, 3) assert_equal(cf._cfdef.key_cache_save_period_in_seconds, 3) sys.alter_column_family(TEST_KS, 'CachedCF10', row_cache_size=200, key_cache_size=200, row_cache_save_period_in_seconds=4, key_cache_save_period_in_seconds=4) cf1 = ColumnFamily(pool, 'CachedCF10') assert_equal(cf1._cfdef.row_cache_size, 200) assert_equal(cf1._cfdef.key_cache_size, 200) assert_equal(cf1._cfdef.row_cache_save_period_in_seconds, 4) assert_equal(cf1._cfdef.key_cache_save_period_in_seconds, 4) def test_caching_post_11(self): version = tuple( [int(v) for v in sys._conn.describe_version().split('.')]) if version < (19, 30, 0): raise SkipTest('CF caching policy not yet supported.') sys.create_column_family(TEST_KS, 'CachedCF11') pool = ConnectionPool(TEST_KS) cf = ColumnFamily(pool, 'CachedCF11') assert_equal(cf._cfdef.caching, 'KEYS_ONLY') sys.alter_column_family(TEST_KS, 'CachedCF11', caching='all') cf = ColumnFamily(pool, 'CachedCF11') assert_equal(cf._cfdef.caching, 'ALL') sys.alter_column_family(TEST_KS, 'CachedCF11', caching='rows_only') cf = ColumnFamily(pool, 'CachedCF11') assert_equal(cf._cfdef.caching, 'ROWS_ONLY') sys.alter_column_family(TEST_KS, 'CachedCF11', caching='none') cf = ColumnFamily(pool, 'CachedCF11') assert_equal(cf._cfdef.caching, 'NONE') pycassa-1.11.2.1/tests/util.py000066400000000000000000000007371303744607500161070ustar00rootroot00000000000000from functools import wraps from nose.plugins.skip import SkipTest def requireOPP(f): """ Decorator to require an order-preserving partitioner """ @wraps(f) def wrapper(self, *args, **kwargs): partitioner = self.sys_man.describe_partitioner() if partitioner in ('RandomPartitioner', 'Murmur3Partitioner'): raise SkipTest('Must use order preserving partitioner for this test') return f(self, *args, **kwargs) return wrapper