bsddb3-6.1.0/ChangeLog

6.1.0:
* Support Berkeley DB 6.1.x.
* Solve a ResourceWarning when compiling.
* Drop support for Python 2.4, 2.5 and 3.1. If you need compatibility with those versions, you can keep using old releases of these bindings.
* Drop support for Berkeley DB 4.3, 4.4, 4.5, 4.6. If you need compatibility with those versions, you can keep using old releases of these bindings.
* From now on, our support reference is Red Hat Enterprise Linux 6.
* Drop the "cvsid" module attributes.
* Drop the (hidden) $Id$ keyword in the documentation.

6.0.1:
* Clarification of the license. Thanks to Jan Staněk for bringing this issue up. This work is now explicitly licensed under the 3-clause BSD license.
* Fixed a long-standing bug (August 2008, rev 9fd52748fa59) in "dbtables.py". Notified by Maxime Labelle.
* If you want to link with Oracle Berkeley DB 6.0, you will need to create the environment variable 'YES_I_HAVE_THE_RIGHT_TO_USE_THIS_BERKELEY_DB_VERSION' to signal to pybsddb that you are legal. To be legal, your code MUST be AGPL3 *OR* you have to buy a commercial license from Oracle. If you are not legally entitled to use Berkeley DB 6.0 and you have previous versions of Berkeley DB on your system, you can a) delete Berkeley DB 6.0 and try again, OR b) instruct pybsddb to use a previous Berkeley DB version, using environment variables or command line options. Sorry for the inconvenience; I am trying to protect you. Some details:
  https://forums.oracle.com/message/11184885
  http://lists.debian.org/debian-legal/2013/07/

6.0.0:
* Support Berkeley DB 6.0.x.
* HEADS UP: If you are using "bsddb3._bsddb" in your code, for example for exceptions, change it to "bsddb3._db".
* Print the test working directory when running the testsuite. You can control it using the "TMPDIR" environment variable. Defaults to "/tmp/z-Berkeley_DB/".
* Support for the "DB_EVENT_REP_AUTOTAKEOVER_FAILED" event.
* Support for the "DB_REPMGR_ISVIEW", "DB_DBT_BLOB", "DB_LOG_BLOB", "DB_STREAM_READ", "DB_STREAM_WRITE" and "DB_STREAM_SYNC_WRITE" flags.
* Some DB_SEQUENCE function signatures changed in Berkeley DB 6.0.x.
* Fixed erratic behaviour of "DBEnv->rep_elect()" caused by a typo.
* The testsuite prints the Python bitness (32/64).
* Tests are compatible with hash randomization, the default in Python 3.3. See http://bugs.python.org/issue13703 .
* Errors when trying to calculate the length of a DB were masked, and a useless and unrelated exception was raised instead.
* Code cleanup, since pybsddb is not in the Python 3.x stdlib anymore and the version in Python 2.6/2.7 is being maintained separately.
* Improvements to documentation generation.

5.3.0:
* Support Berkeley DB 5.3.x.
* Drop support for Berkeley DB 4.2 and Python 2.3. Our reference is Red Hat Enterprise Linux 5, until March 2014. After that, RHEL6 has Python 2.6 and BDB 4.7.
* According to http://superuser.com/questions/189931/python-and-berkeley-db-versions-in-redhat-enterprise-linux-3-4-5-and-upcoming-6 :
  * RHEL3: Python 2.2.3, BDB 4.1.25
  * RHEL4: Python 2.3.4, BDB 4.2.52
  * RHEL5: Python 2.4.3, BDB 4.3.29
  * RHEL6: Python 2.6.2, BDB 4.7.25
* Support for "DBEnv->set_intermediate_dir()", available in Berkeley DB 4.3-4.6. Patch by Garret Cooper.
* Support for "DB->set_dup_compare()" (see the sketch after this section). Original patches by Nikita M. Kozlovsky and Ben Schmeckpeper.
* Fixed a testsuite compatibility problem with BDB 5.2.
* If we are running Solaris or derivatives, and a 64-bit Python, try to find the library under "/usr/local/Berkeley.*.*/64/".
* Solaris 10 Update 10 exposes a very old race condition in the replication master election tests. Some details in https://forums.oracle.com/forums/thread.jspa?messageID=9902860 . Workaround proposed in a private email from Paula Bingham (Oracle), on 20110929.
* When doing the full matrix test for a release, stop the verification if any test failed.
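A minimal sketch of "DB->set_dup_compare()" usage from the 5.3.0 entry above. The database name and the reverse-ordering policy are illustrative; the callback takes (left, right) and must be installed before DB.open():

    from bsddb3 import db

    def dup_compare(left, right):
        # Sort duplicate data items in reverse lexicographic order.
        if left < right:
            return 1
        if left > right:
            return -1
        return 0

    d = db.DB()
    d.set_flags(db.DB_DUPSORT)
    d.set_dup_compare(dup_compare)  # must be called before DB.open()
    d.open("dups.db", dbtype=db.DB_BTREE, flags=db.DB_CREATE)
    d.put(b"key", b"b")
    d.put(b"key", b"a")             # stored before b"b", per dup_compare
    d.close()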
5.2.0:
* Support for Berkeley DB 5.2.
* Support for the newly available replication manager events: DB_EVENT_REP_SITE_ADDED, DB_EVENT_REP_SITE_REMOVED, DB_EVENT_REP_LOCAL_SITE_REMOVED, DB_EVENT_REP_CONNECT_BROKEN, DB_EVENT_REP_CONNECT_ESTD, DB_EVENT_REP_CONNECT_TRY_FAILED, DB_EVENT_REP_INIT_DONE.
* New object: "DB_SITE". Support for all its methods.
* Parameters for "DB_SITE->set_config()": DB_BOOTSTRAP_HELPER, DB_GROUP_CREATOR, DB_LEGACY, DB_LOCAL_SITE, DB_REPMGR_PEER.
* Support for some of the new "Dynamic Environment Configuration" parameters: DB_MEM_LOCK, DB_MEM_LOCKOBJECT, DB_MEM_LOCKER, DB_MEM_LOGID, DB_MEM_TRANSACTION, DB_MEM_THREAD.
* Add "bytes" to "DBEnv_memp_stat()". Original patch from Garrett Cooper.

5.1.2:
* The 5.1.1 install fails under Python 2.7 if the bsddb in the standard library is not installed. Reported by Arfrever Frehtes Taifersar Arahesis.
* Since 5.0.0, we can't find 4.x libraries unless we specify a "--berkeley-db=/path/to/bsddb" option. Reported by Wen Heping.
* Support "DB_ENV->get_open_flags()", "DB_ENV->set_intermediate_dir_mode()", "DB_ENV->get_intermediate_dir_mode()".
* Support "DB->get_dbname()", "DB->get_open_flags()".
* Support "db_full_version()" (see the sketch below).
* Document "version()". This top-level function has been supported forever.
* Bugfix when calling "DB->get_size()" on a zero-length record. Reported by Austin Bingham.
* 'assertEquals()' is deprecated in Python 3.2.
* 'assert_()' is deprecated in Python 3.2.
* Solved a 'ResourceWarning' under Python 3.2.

5.1.1:
* Recent pre-releases of Python 3.2 issue ResourceWarnings about file handles deallocated without being closed first. Fixed the testsuite.
* The current "*.pyc" and "*.pyo" cleaning does not work in a PEP 3147 world ("__pycache__"). I don't think this code is actually necessary anymore. Deleted.
* Python 2.7.0 deprecates CObject incorrectly. See Python issue #9675.
* The testsuite for "DB->get_transactional()" should not create databases outside the TMP directory, nor leave the files behind.
* If something happens while creating the CObject/Capsule object, keep going, even without exporting the C API, instead of crashing.
* Support for "DB_FORCESYNC", "DB_FAILCHK", "DB_SET_REG_TIMEOUT", "DB_TXN_BULK", "DB_HOTBACKUP_IN_PROGRESS".
* Support "DB_EVENT_REG_ALIVE", "DB_EVENT_REG_PANIC", "DB_EVENT_REP_DUPMASTER", "DB_REPMGR_CONF_ELECTIONS", "DB_EVENT_REP_ELECTION_FAILED", "DB_EVENT_REP_MASTER_FAILURE".
* Support for "DB_VERB_REP_ELECT", "DB_VERB_REP_LEASE", "DB_VERB_REP_MISC", "DB_VERB_REP_MSGS", "DB_VERB_REP_SYNC", "DB_VERB_REP_SYSTEM", "DB_VERB_REPMGR_CONNFAIL", "DB_VERB_REPMGR_MISC".
* Support for "DB_STAT_LOCK_CONF", "DB_STAT_LOCK_LOCKERS", "DB_STAT_LOCK_OBJECTS", "DB_STAT_LOCK_PARAMS".
* Support for "DB_REP_CONF_INMEM".
* Support for "DB_TIMEOUT".
* Support for "DB_CURSOR_BULK".
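A minimal sketch of the version queries from the 5.1.2 entries above. version() returns (major, minor, patch); the exact shape of the full_version() tuple is an assumption, mirroring the underlying db_full_version() C call (a version string plus numeric components):

    from bsddb3 import db

    print(db.version())       # e.g. (5, 3, 28)
    print(db.full_version())  # version string plus numeric components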
5.1.0:
* Support for Berkeley DB 5.1.
* Drop support for Berkeley DB 4.1. Our reference is Red Hat Enterprise Linux 4, until February 2012. After that, RHEL5 has Python 2.4 and BDB 4.3.
* According to http://superuser.com/questions/189931/python-and-berkeley-db-versions-in-redhat-enterprise-linux-3-4-5-and-upcoming-6 :
  * RHEL3: Python 2.2.3, BDB 4.1.25
  * RHEL4: Python 2.3.4, BDB 4.2.52
  * RHEL5: Python 2.4.3, BDB 4.3.29
  * RHEL6: Python 2.6.2, BDB 4.7.25 (currently in BETA)
* Include the documentation source (*.rst) in the EGG.
* Include the processed HTML documentation in the EGG.
* Update the external links in the documentation, since Oracle changed its web structure.
* Some link fixes for external documentation.
* Links added in the documentation to the Oracle Berkeley DB programmer reference.
* Support for "DB->get_transactional()".
* Support for "DB_REPMGR_ACKS_ALL_AVAILABLE".

5.0.0:
* Support for Berkeley DB 5.0.
* Drop support for Python 3.0.
* Now you can use the TMPDIR environment variable to override the default test directory ("/tmp").
* Versioning of the C API. If you use the code from C, please check the bsddb_api->api_version number against the PYBSDDB_API_VERSION macro.
* In C code, the bsddb_api->dbsequence_type component is always available, even if the Berkeley DB version used doesn't support sequences. In that case, the component will be NULL.
* In C code, the "DBSequenceObject_Check()" macro always exists, even if the Berkeley DB version used doesn't support sequences. In that case, the test macro always returns "false".
* For a long time, the API has been accessible via C using "_bsddb.api" or "_pybsddb.api". If you are using Python >=2.7, you acquire access to that API via the new Capsule protocol (see "bsddb.h"). If you use the C API and upgrade to Python 2.7 and up, you must update the access code (see "bsddb.h"). The Capsule protocol is not supported in Python 3.0, but pybsddb 5.0.x doesn't support Python 3.0 anymore.
* Capsule support was buggy. The string passed in to PyCapsule_New() must outlive the capsule. (Larry Hastings)
* Solve an "Overflow" warning in the testsuite running under Python 2.3.
* When doing a complete full-matrix test, any warning will be considered an error.

4.8.4:
* When doing the full matrix testing with Python >=2.6, we activate the deprecation warnings (py3k).
* Split dependencies in the Replication testsuite.
* Help the Garbage Collector free resources when the replication testsuite is completed.
* Warn when imported as the stdlib "bsddb" instead of as the pybsddb project's "bsddb3", when using Python >=2.6 and py3k warnings are active.
* Old regression: dbshelve objects are iterable again. The bug was introduced in pybsddb 4.7.2. Added relevant testcases.
* Patches ported from Python developers:
  * Memory leaks: #7808 - http://bugs.python.org/issue7808 - Florent Xicluna
  * Floating point rounding in testcases: #5073 - http://bugs.python.org/issue5073 - Mark Dickinson
  * Orthography: #5341 - http://bugs.python.org/issue5341
  * Py3k warnings in Python >=2.6: #7092 - http://bugs.python.org/issue7092
  * Correct path for tests: #7269 - http://bugs.python.org/issue7269 - Florent Xicluna
  * Shebang: benjamin.peterson
  * Use the new Python 2.7 assert()'s: Florent Xicluna
* Solve a spurious stdlib warning in Python >=2.6 with the -3 flag.
* Remove "DBIncompleteError", for sure this time. There were traces in "dbtables", in some tests and in the docs.
* The DBKeyEmptyError exception raised by the library was not the same DBKeyEmptyError exposed by the library, so the raised exception was uncatchable unless you caught DBError, and you could not identify it.
* Solved the previous point; document that the DBKeyEmptyError exception also derives from KeyError, just like the DBNotFoundError exception (see the sketch below).
* Update the documentation to describe all exceptions provided by this module.
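A minimal sketch of the exception hierarchy documented in the 4.8.4 entries above; the database name is illustrative:

    from bsddb3 import db

    d = db.DB()
    d.open("data.db", dbtype=db.DB_BTREE, flags=db.DB_CREATE)
    try:
        d.delete(b"missing")  # raises DBNotFoundError
    except KeyError:
        # DBNotFoundError and DBKeyEmptyError both derive from KeyError
        # (and, like every exception of this module, from DBError).
        pass
    d.close()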
4.8.3:
* "bsddb.h" inclusion in PyPI was inconsistent. Solved.
* Support for "DB_ENV->mutex_stat()", "DB_ENV->mutex_stat_print()", "DB->stat_print()", "DB_ENV->lock_stat_print()", "DB_ENV->log_stat_print()", "DB_ENV->stat_print()", "DB_ENV->memp_stat()" and "DB_ENV->memp_stat_print()".
* Support for "DB_ENV->get_tmp_dir()".
* Support for the "DB_STAT_SUBSYSTEM" and "DB_STAT_MEMP_HASH" flags.
* Support for "DB_ENV->set_mp_max_openfd()", "DB_ENV->get_mp_max_openfd()", "DB_ENV->set_mp_max_write()", "DB_ENV->get_mp_max_write()", "DB_ENV->get_mp_mmapsize()".
* New datatype: DBLogCursor. If you are using the C API, you may need to recompile your code because of the changes in the API interface structure.
* Support for "DB_ENV->log_file()", "DB_ENV->log_printf()".
* Solve a core dump if something bad happens while trying to create a transaction object.
* We protect ourselves against failures when creating Lock and Sequence objects.
* The EGG file is a ZIP file again, not a directory. This requires that any program importing the module can write to its user's ".python-eggs" directory.
* Keeping a cached copy of the database stats is a bad idea if we have several processes working together, so we drop all this code. "len()" will now always require a database scan, not only when there has been a write. If you need an accurate and fast "len()", the application must keep that count manually in a database record (see the sketch below).
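A minimal sketch of the manual record counter suggested in the last 4.8.3 entry above. The "__count__" key is an illustrative application convention, not something the module provides:

    from bsddb3 import db

    def put_counted(d, key, value, txn=None):
        # Maintain an explicit record count so the application never
        # needs the full database scan that len() now performs.
        raw = d.get(b"__count__", default=b"0", txn=txn)
        count = int(raw.decode("ascii")) + 1
        d.put(key, value, txn=txn)
        d.put(b"__count__", str(count).encode("ascii"), txn=txn)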
4.8.2:
* Support for the "DB_OVERWRITE_DUP", "DB_FOREIGN_ABORT", "DB_FOREIGN_CASCADE", "DB_FOREIGN_NULLIFY", "DB_PRINTABLE", "DB_INORDER" flags.
* Support for the "DB_FOREIGN_CONFLICT" exception.
* Support for the "DB_ENV->memp_trickle()", "DB_ENV->memp_sync()", "DB_ENV->get_lg_bsize()", "DB_ENV->get_lg_dir()", "DB_ENV->get_lg_filemode()", "DB_ENV->set_lg_filemode()", "DB_ENV->get_lk_detect()", "DB_ENV->get_lg_regionmax()", "DB_ENV->get_lk_max_lockers()", "DB_ENV->set_lk_max_locks()", "DB_ENV->get_lk_max_objects()", "DB_ENV->set_lk_partitions()", "DB_ENV->get_lk_partitions()", "DB_ENV->get_flags()", "DB_ENV->set_cache_max()", "DB_ENV->get_cache_max()", "DB_ENV->set_thread_count()", "DB_ENV->get_thread_count()", "DB_ENV->log_set_config()", "DB_ENV->log_get_config()" functions.
* Support for the "DB->get_h_ffactor()", "DB->set_h_nelem()", "DB->get_h_nelem()", "DB->get_lorder()", "DB->get_pagesize()", "DB->get_re_pad()", "DB->get_re_len()", "DB->get_re_delim()", "DB->get_flags()", "DB->get_bt_minkey()", "DB->set_priority()", "DB->get_priority()", "DB->set_q_extentsize()", "DB->get_q_extentsize()", "DB->set_re_source()", "DB->get_re_source()" functions.
* Unlock the Python GIL when doing "DB_ENV->db_home_get()". This is slower because the function is very fast, so we add overhead, but it is called very infrequently and we make the change for consistency.

4.8.1:
* Support for "DB_ENV->mutex_set_align()" and "DB_ENV->mutex_get_align()".
* Support for "DB_ENV->mutex_set_increment()" and "DB_ENV->mutex_get_increment()".
* Support for "DB_ENV->mutex_set_tas_spins()" and "DB_ENV->mutex_get_tas_spins()".
* Support for "DB_ENV->get_encrypt_flags()".
* Support for "DB->get_encrypt_flags()".
* Support for "DB_ENV->get_shm_key()".
* Support for "DB_ENV->get_cachesize()".
* Support for "DB->get_cachesize()".
* Support for "DB_ENV->get_data_dirs()".
* Testsuite compatibility with recent releases of Python 3.0 and 3.1, where cPickle has been removed.
* Compatibility with development versions of Python 2.7 and 3.2 (r76123).
* For a long time, the API has been accessible via C using "_bsddb.api" or "_pybsddb.api". If you are using Python 3.2 or up, you acquire access to that API via the new Capsule protocol (see "bsddb.h"). If you use the C API and upgrade to Python 3.2 and up, you must update the access code (see "bsddb.h").

4.8.0:
* Support for Berkeley DB 4.8.
* Compatibility with Python 3.1.
* The "DB_XIDDATASIZE" constant has been renamed to "DB_GID_SIZE". Update your code! If linked to BDB 4.8, only "DB_GID_SIZE" is defined. If linked to previous BDB versions, we keep "DB_XIDDATASIZE" but define "DB_GID_SIZE" too, with the same value. So new code can use the updated constant even when used against old BDB releases.
* "DB_XA_CREATE" is removed. BDB 4.8 has eliminated XA Resource Manager support.
* Drop support for Berkeley DB 4.0. Our reference is Red Hat Enterprise Linux 3, until October 2010. After that, RHEL4 has Python 2.3 and BDB 4.2.
* Remove the "DBIncompleteError" exception. It was only used in BDB 4.0.
* Remove "DB_INCOMPLETE", "DB_CHECKPOINT", "DB_CURLSN". They came from BDB 4.0 too.
* RPC was dropped in Berkeley DB 4.8. The bindings still keep the API if you link to previous BDB releases.
* In recno/queue databases, "set_re_delim()" and "set_re_pad()" require a byte instead of a unicode char, under Python 3.
* Support for "DB_ENV->mutex_set_max()" and "DB_ENV->mutex_get_max()".

4.7.6:
* Compatibility with Python 3.0.1.
* Add support for "DB_ENV->stat()" and "DB_ENV->stat_print()".
* Add support for "DB_ENV->rep_set_clockskew()" and "DB_ENV->rep_get_clockskew()". The binding support for base replication is now complete.
* "DB.has_key()" used to return 0 or 1. Changed to return True or False instead. Check your code!
* As requested by several users, implement "DB.__contains__()", to allow constructions like "if key in DB" without iterating over the entire database. But BEWARE, this test is not protected by transactions! This is the same problem we already have with "DB.has_key()" (see the sketch after this section).
* Change "DBSequence.init_value()" to "DBSequence.initial_value()", for consistency with the real Berkeley DB method name. This could require minimal changes in your code. The documentation was right. Noted by "anan".
* Implements "DBCursor->prev_dup()".
* Add support for the "DB_GET_BOTH_RANGE", "DB_PREV_DUP", and "DB_IGNORE_LEASE" flags.
* Export the "DBRepLeaseExpiredError" exception.
* Add support for the "DB_PRIORITY_VERY_LOW", "DB_PRIORITY_LOW", "DB_PRIORITY_DEFAULT", "DB_PRIORITY_HIGH", "DB_PRIORITY_VERY_HIGH", and "DB_PRIORITY_UNCHANGED" flags.
* Add support for "DBCursor->set_priority()" and "DBCursor->get_priority()". The binding support for cursors is now complete.
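A minimal sketch of the 4.7.6 membership-test caveat above: "key in DB" is convenient but runs outside any transaction, while "DB.has_key()" can take one. Paths and names are illustrative, assuming a transactional environment:

    from bsddb3 import db

    dbenv = db.DBEnv()
    dbenv.open("/tmp/env", db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK |
               db.DB_INIT_LOG | db.DB_INIT_TXN)
    d = db.DB(dbenv)
    d.open("data.db", dbtype=db.DB_HASH, flags=db.DB_CREATE | db.DB_AUTO_COMMIT)

    if b"somekey" in d:                 # NOT protected by transactions
        pass
    txn = dbenv.txn_begin()
    found = d.has_key(b"somekey", txn)  # lookup runs inside the transaction
    txn.commit()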
* Add support for "DB_REP_PERMANENT", "DB_REP_CONF_NOAUTOINIT", "DB_REP_CONF_DELAYCLIENT", "DB_REP_CONF_BULK", "DB_REP_CONF_NOWAIT", "DB_REP_LEASE_EXPIRED", "DB_REP_CONF_LEASE", "DB_REPMGR_CONF_2SITE_STRICT", "DB_REP_ANYWHERE", "DB_REP_NOBUFFER" and "DB_REP_REREQUEST" flags. 4.7.4: * Under Python 3.0, "bsddb.db.DB_VERSION_STRING", "bsddb.db.__version__" and "bsddb.db.cvsid" must return (unicode) strings instead of bytes. Solved. * Use the new (20081018) trove classifiers in PyPI to identify Python supported versions. * In "DB_ENV->rep_set_timeout()" and "DB_ENV->rep_get_timeout()", support flags "DB_REP_LEASE_TIMEOUT". * In "DB_ENV->rep_set_timeout()" and "DB_ENV->rep_get_timeout()", support flags "DB_REP_HEARTBEAT_MONITOR" and "DB_REP_HEARTBEAT_SEND". These flags are used in the Replication Manager framework, ignored if using Base Replication. * Implements "DB->exists()". * Add support for "DB_IMMUTABLE_KEY" flag. * Add support for "DB_REP_LOCKOUT" exception. * Support returning a list of strings in "associate()" callback. (Kung Phu) * Testsuite and Python 3.0 compatibility for "associate()" returning a list. In particular, in Python 3.0 the list must contain bytes. * Implements "DBEnv->fileid_reset()". (Duncan Findlay) * Implements "DB->compact()". (Gregory P. Smith) Berkeley DB 4.6 implementation is buggy, so we only support this function from Berkeley DB 4.7 and newer. We also support related flags "DB_FREELIST_ONLY" and "DB_FREE_SPACE". 4.7.3: (Python 2.6 release. First release with Python 3.0 support) * "private" is a keyword in C++. (Duncan Grisby) * setup.py should install "bsddb.h". (Duncan Grisby) * "DB_remove" memory corruption & crash. (Duncan Grisby) * Under Python 3.0, you can't use string keys/values, but bytes ones. Print the right error message. * "DB.has_key()" allowed transactions as a positional parameter. We allow, now, transactions as a keyword parameter also, as documented. * Correct "DB.associate()" parameter order in the documentation. * "DB.append()" recognizes "txn" both as a positional and a keyword parameter. * Small fix in "dbshelve" for compatibility with Python 3.0. * A lot of changes in "dbtables" for compatibility with Python 3.0. * Huge work making the testsuite compatible with Python 3.0. * In some cases the C module returned Unicode strings under Python 3.0. It should return "bytes", ALWAYS. Solved. * Remove a dict.has_key() use to silence a warning raised under Python2.6 -3 parameter. Python SVN r65391, Brett Cannon. * Solve some memory leaks - Neal Norwitz * If DBEnv creation fails, library can crash. (Victor Stinner) * Raising exceptions while doing a garbage collection will kill the interpreter. (Victor Stinner) * Crash in "DB.verify()". Noted by solsTiCe d'Hiver. 4.7.2: * Solved a race condition in Replication Manager testcode. * Changing any python code, automatically regenerates the Python3 version. The master version is Python2. * Compatibility with Python 3.0. * Solved a crash when DB handle creation fails. STINNER Victor - http://bugs.python.org/issue3307 * Improve internal error checking, as suggested by Neal Norwitz when reviewing commit 63207 in Python SVN. * Routines without parameters should be defined so, as suggested by Neal Norwitz when reviewing commit 63207 in Python SVN. The resulting code is (marginally) faster, smaller and clearer. * Routines with a simple object parameter are defines so, as suggested by Neal Norwitz when reviewing commit 63207 in Python SVN. The resulting code is (marginally) faster, smaller and clearer. 
4.7.3: (Python 2.6 release. First release with Python 3.0 support)
* "private" is a keyword in C++. (Duncan Grisby)
* setup.py should install "bsddb.h". (Duncan Grisby)
* "DB_remove" memory corruption & crash. (Duncan Grisby)
* Under Python 3.0, you can't use string keys/values, only bytes ones. Print the right error message.
* "DB.has_key()" allowed transactions as a positional parameter. We now also allow transactions as a keyword parameter, as documented.
* Correct the "DB.associate()" parameter order in the documentation.
* "DB.append()" recognizes "txn" both as a positional and as a keyword parameter.
* Small fix in "dbshelve" for compatibility with Python 3.0.
* A lot of changes in "dbtables" for compatibility with Python 3.0.
* Huge work making the testsuite compatible with Python 3.0.
* In some cases the C module returned Unicode strings under Python 3.0. It should return "bytes", ALWAYS. Solved.
* Remove a dict.has_key() use to silence a warning raised under Python 2.6's -3 parameter. Python SVN r65391, Brett Cannon.
* Solve some memory leaks. (Neal Norwitz)
* If DBEnv creation fails, the library can crash. (Victor Stinner)
* Raising exceptions while doing a garbage collection will kill the interpreter. (Victor Stinner)
* Crash in "DB.verify()". Noted by solsTiCe d'Hiver.

4.7.2:
* Solved a race condition in the Replication Manager testcode.
* Changing any Python code automatically regenerates the Python 3 version. The master version is Python 2.
* Compatibility with Python 3.0.
* Solved a crash when DB handle creation fails. STINNER Victor - http://bugs.python.org/issue3307
* Improve internal error checking, as suggested by Neal Norwitz when reviewing commit 63207 in Python SVN.
* Routines without parameters should be defined so, as suggested by Neal Norwitz when reviewing commit 63207 in Python SVN. The resulting code is (marginally) faster, smaller and clearer.
* Routines with a simple object parameter are defined so, as suggested by Neal Norwitz when reviewing commit 63207 in Python SVN. The resulting code is (marginally) faster, smaller and clearer.
* Routines taking objects as arguments can parse them better, as suggested by Neal Norwitz when reviewing commit 63207 in Python SVN. The resulting code is (marginally) faster, smaller and clearer.
* Improve testsuite behaviour under MS Windows.
* Use ABCs (Abstract Base Classes) under Python 2.6 and 3.0.
* Support for "relative imports".
* The replication testcode behaves better on heavily loaded machines.

4.7.1:
* Work around a problem with uninitialized threads in the replication callback.
* Export the "DBRepUnavailError" exception.
* Get rid of Berkeley DB 3.3 support. Rationale: http://mailman.jcea.es/pipermail/pybsddb/2008-March/000019.html
* Better integration between the Python test framework and bsddb3.
* Improved Python 3.0 support in the C code.
* Iteration over the database, using the legacy interface, now raises a RuntimeError if the database changes while iterating. http://bugs.python.org/issue2669 - gregory.p.smith
* Create "set_private()" and "get_private()" methods for DB and DBEnv objects, to allow applications to link an arbitrary object to a DB/DBEnv. Useful for callbacks.
* Support some more base replication calls: "DB_ENV->rep_start", "DB_ENV->rep_sync", "DB_ENV->rep_set_config", "DB_ENV->rep_get_config", "DB_ENV->rep_set_limit", "DB_ENV->rep_get_limit", "DB_ENV->rep_set_request", "DB_ENV->rep_get_request".
* Support more base replication calls: "DB_ENV->rep_elect", "DB_ENV->rep_set_transport" and "DB_ENV->rep_process_message". Support also the related flags.

4.7.0:
* Support for Berkeley DB 4.7.
* Support "DB_ENV->log_set_config", and related flags.
* Complete the Berkeley DB Replication Manager support: "DB_ENV->repmgr_site_list" and related flags. "DB_ENV->repmgr_stat", "DB_ENV->repmgr_stat_print" and related flags.
* Solved an old crash when building with a debug Python. (Neal Norwitz)
* Extend the testsuite driver to check also against Python 2.6 (a3).
* Support for the RPC client service.

4.6.4:
* Basic support for the Berkeley DB Replication Manager.
* Support for a few replication calls, for the benefit of the Berkeley DB Replication Manager: "DB_ENV->rep_set_priority", "DB_ENV->rep_get_priority", "DB_ENV->rep_set_nsites", "DB_ENV->rep_get_nsites", "DB_ENV->rep_set_timeout", "DB_ENV->rep_get_timeout".
* Implemented "DB_ENV->set_event_notify" and related flags.
* Export flags related to replication timeouts.
* Export the "DBRepHandleDeadError" exception.
* Implemented "DB_ENV->set_verbose", "DB_ENV->get_verbose" and related flags.
* Implemented "DB_ENV->get_lg_max".
* Improved performance and coverage of the following tests: lock, threaded ConcurrentDataStore, threaded simple locks, threaded transactions.
* New exported flags: "DB_LOCK_EXPIRE" and "DB_LOCK_MAXWRITE".

4.6.3:
* Be sure all DBEnv/DB paths in the TestSuite are generated in a way compatible with launching the tests in multiple threads/processes.
* Move all the "assert" calls in the TestSuite to the version in the framework. This is very convenient, for example, to generate the final report, or for better automation.
* Implements "dbenv.log_flush()".
* Regression: bug when creating a transaction whose parent is explicitly set to 'None'.
* Regression: bug when duplicating cursors. Solved.
* Provide "dbenv.txn_recover()" and "txn.discard()", to fully support recovery of distributed transactions (see the sketch after this section). Any user of this service should use Berkeley DB 4.5 or up.
* If a transaction is in the "prepare" or "recover" state, we MUST NOT abort it implicitly if the transaction goes out of scope, is garbage collected, etc. Better to leak than to be sorry.
* In the previous case, we don't show any warning either.
* Export "DB_XIDDATASIZE", for the GID of distributed transactions.
* If "db_seq_t" and PY_LONG_LONG are not compatible, the compiler should show a warning while compiling, and the generated code would be incorrect but safe to use. No crash. Added a sanity check to the testsuite to verify that this is not the case and that the datatypes are in fact 64 bits wide.
* Solve a compilation warning when including "bsddb.h" in other projects. (George Feinberg)
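A minimal sketch of distributed transaction recovery with the 4.6.3 calls above. The (gid, txn) pair shape returned by txn_recover() and the application-side GID check are assumptions; paths are illustrative:

    from bsddb3 import db

    dbenv = db.DBEnv()
    dbenv.open("/tmp/env", db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK |
               db.DB_INIT_LOG | db.DB_INIT_TXN | db.DB_RECOVER)

    def gid_is_committed(gid):           # hypothetical application check
        return False

    for gid, txn in dbenv.txn_recover():
        if gid_is_committed(gid):
            txn.commit()
        else:
            txn.discard()                # leave it for another recovery pass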
4.6.2:
* Support for MVCC (MultiVersion Concurrency Control).
* Support for the DB_DSYNC_LOG, DB_DSYNC_DB and DB_OVERWRITE flags.
* Move the old documentation to ReST format. This is important for several reasons, notably to be able to integrate the documentation "as is" into the official Python docs (from Python 2.6).
* Don't include the Berkeley DB documentation. Link to the online version.
* DBSequence objects documented.
* DBSequence.get_key() didn't check its parameters. Fixed.
* If a DB is closed, its child DBSequences will also be closed.
* To be consistent with the other close methods, you can call "DBSequence.close()" several times without error.
* If a sequence is opened inside a transaction, it will be automatically closed if the transaction is aborted. If the transaction is committed and it is actually a subtransaction, the sequence will be inherited by the parent transaction.
* Be sure "db_seq_t" and "long long" are compatible. **Disabled because of MS Windows issues to be investigated.**
* Documented the already available DBEnv methods: "dbremove", "dbrename", "set_encrypt", "set_timeout", "set_shm_key", "lock_id_free", "set_tx_timestamp", "lsn_reset" and "log_stat".
* Completed and documented "DBEnv.txn_stat()".
* Completed and documented "DBEnv.lock_stat()".
* Documented the already available DB methods: "set_encrypt", "pget".
* Completed the documentation of the DB methods: "associate", "open".
* Completed and documented "DB.stat()".
* Documented the already available DBCursor methods: "pget" (several flavours).
* Completed the documentation of the DBCursor methods: "consume", "join_item".

4.6.1: (first release from Jesus Cea Avion)
* 'egg' (setuptools) support.
* Environments, database handles and cursors are maintained in a logical tree. Closing any element of the tree implicitly closes its children.
* Transactions are managed in a logical tree. When aborting transactions, the enclosed db handles, cursors and transactions are closed. If a transaction commits, the enclosed db handles are "inherited" by the parent transaction/environment.
* Solved a bug when a DBEnv goes out of scope without being closed first.
* Add transactions to the management of closing of nested objects. (not completed yet!)
* Fix memory leaks.
* Previous versions were inconsistent when a key or value was "" (the null string), depending on whether the database was opened in thread safe mode or not. In one case the lib gives "" and in the other it gives None.

4.6.0:
* Adds support for compiling and linking with BerkeleyDB 4.6.21.
* Fixes a double free bug with DBCursor.get and friends. Based on submitted pybsddb patch #1708868. (jjjhhhll)
* Adds a basic C API to the module so that other extensions or third party modules can access types directly. Based on pybsddb patch #1551895. (Duncan Grisby)
* bsddb.dbshelve now uses the most recent cPickle protocol, based on pybsddb patch #1551443. (w_barnes)
* Fix the bsddb.dbshelve.DBShelf append method to work for RECNO dbs (see the sketch after this section).
* Fix Bug #477182 - Load the database flags at database open time, so that opening a database previously created with the DB_DUP or DB_DUPSORT flag set will keep the proper behavior on subsequent opens. Specifically, dictionary assignment to a DB object will now replace all values for a given key when the database allows duplicate values. DB users should use DB.put(k, v) when they want to store duplicates; not DB[k] = v. This only works with BerkeleyDB >= 4.2.
* Add the DBEnv.lock_id_free method.
* Removes any remnants of support for Python older than 2.1.
* Removes any remnants of support for BerkeleyDB 3.2.
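A minimal sketch of the dbshelve RECNO append fix noted above. The "filetype" keyword is assumed from dbshelve.open()'s signature; the shelved value can be any picklable object:

    from bsddb3 import db, dbshelve

    shelf = dbshelve.open("log.db", filetype=db.DB_RECNO)
    recno = shelf.append({"event": "started", "pid": 1234})
    print(recno, shelf[recno])
    shelf.close()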
4.5.0:
* Adds support for compiling and linking with BerkeleyDB 4.5.
* Python Bug #1599782: Fix a segfault on bsddb.db.DB().type() due to releasing the GIL when it shouldn't. (nnorowitz)
* Fixes a bug in bsddb.DB.stat where the flags and txn keyword arguments were transposed.
* change test cases to use tempfile.gettempdir()

4.4.5:
* pybsddb Bug #1527939: the bsddb module DBEnv dbremove and dbrename methods now allow their database parameter to be None, as the Sleepycat API allows.

4.4.4:
* fix a DBCursor.pget() bug with keyword argument names when no data= is supplied [SF pybsddb bug #1477863]
* add support for DBSequence objects [patch #1466734]
* support the DBEnv.log_stat() method on BerkeleyDB >= 4.0 [patch #1494885]
* support the DBEnv.lsn_reset() method on BerkeleyDB >= 4.4 [patch #1494902]
* add the DB_ARCH_REMOVE flag and fix DBEnv.log_archive() to accept it without potentially following an uninitialized pointer.

4.4.3:
* fix DBEnv.set_tx_timestamp to not crash on Win64 platforms (thomas.wouters)
* tons of memory leak fixes all over the code (thomas.wouters)
* fixes the ability to unpickle DBError (and children) exceptions

4.4.2:
* Wrap the DBEnv.set_tx_timeout method
* fix a problem when a DBEnv is deleted before a Txn - SF bug #1413192 (Neal Norwitz)

4.4.1:
* sf.net patch 1407992 - fixes associate tests on BerkeleyDB 3.3 thru 4.1 (contributed by Neal Norwitz)

4.4.0:
* Added support for compiling and linking with BerkeleyDB 4.4.20.

4.3.3:
* NOTICE: the set_bt_compare() callback function arguments CHANGED to only require two arguments (left, right) rather than (db, left, right).
* DB.associate() would crash when a DBError occurred. Fixed. [pybsddb SF bug id 1215432]

4.3.2:
* the has_key() method was not raising a DBError when a database error had occurred. [SF patch id 1212590]
* added a wrapper for the DBEnv.set_lg_regionmax method [SF patch id 1212590]
* DBKeyEmptyError now derives from KeyError, just like DBNotFoundError.
* internally, everywhere DB_NOTFOUND was checked for has been updated to also check for DB_KEYEMPTY. This fixes the semantics of a couple of operations on recno and queue databases to be more intuitive, and results in fewer unexpected DBKeyEmptyError exceptions being raised.

4.3.1:
* Added support for the DB.set_bt_compare() method to use a user-supplied Python comparison function, taking (db, left, right) args, as the database's B-Tree comparison function.

4.3.0:
* Added support for building properly against BerkeleyDB 4.3.21.
* fixed a bug introduced in 4.2.8 that prevented the module from compiling against BerkeleyDB 3.2 (which doesn't support pget).
* setup.py was cleaned up a bit to search for and find the latest version of the correct combo of db.h and libdb.

4.2.9:
* the DB keys(), values() and items() methods were ignoring their optional txn parameter. This would lead to deadlocks in applications needing those to be transaction protected.

4.2.8:
* Adds support for the DB and DBCursor pget methods (see the sketch after this section). Based on a patch submitted to the mailing list by Ian Ward.
* Added weakref support to all bsddb.db objects.
* Make DBTxn objects automatically call abort() in their destructor if not yet finalized, and raise a RuntimeWarning to that effect.
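A minimal sketch of the cursor pget method from the 4.2.8 entry above, reusing the "words" secondary index from the associate() sketch earlier; on a secondary index, cursor pget returns (secondary key, primary key, data):

    cursor = words.cursor()
    rec = cursor.pget(db.DB_FIRST)
    while rec is not None:               # None thanks to set_get_returns_none
        skey, pkey, data = rec
        print(skey, pkey, data)
        rec = cursor.pget(db.DB_NEXT)
    cursor.close()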
4.2.7:
* fix an error in the legacy interface: it relied on the DB_TRUNCATE flag, whose behaviour changed in BerkeleyDB 4.2.52 to not work in a locking environment. [SF bug id 897820]
* fixed memory leaks in DB.get, DBC.set_range and potentially several other methods that would occur primarily when using queue | recno format databases with integer keys. [SF patch id 967763]

4.2.6:
* the DB.has_key method was not honoring its txn parameter to perform its lookup within the specified (optional) transaction. fixed. [SF bug id 914019]

4.2.5:
* Fixed a bug in the compatibility interface set_location() method where it would not properly search to the next nearest key when used on BTree databases. [SF bug id 788421]
* Fixed a bug in the compatibility interface set_location() method where it could crash when looking up keys in a hash or recno format database, due to an incorrect free().

4.2.4:
* changed the DB and DBEnv set_get_returns_none() default from 1 to 2.
* cleaned up the compatibility iterator interface.

4.2.3:
* the legacy compatibility dict-like interface now supports iterators and generators, and allows multithreaded access to the database.
* fixed a tuple memory leak when raising "object has been closed" exceptions for DB, DBEnv and DBCursor objects. I doubt much previous code triggered this.
* use of a closed DBCursor now raises a DBCursorClosedError exception, a subclass of DBError, rather than a boring old DBError.

4.2.2:
* added a DBCursor.get_current_size() method to return the length in bytes of the value pointed to by the cursor, without reading the actual data.

4.2.1:
* Standalone pybsddb builds now use a _pybsddb dynamic/shared library rather than _bsddb. This allows pybsddb to be built, installed and used on python >= 2.3, which includes an older version of pybsddb as its bsddb library.

4.2.0:
* Can now compile and link with BerkeleyDB 4.2.x (when it's released).
* the legacy bsddb module supports the iterator interface on python 2.3.

4.1.x:
* Support the DBEnv.set_shm_key() method.
* Fixed setup.py include/{db4,db3} header file searching (SF bug #789740).

4.1.6:
* Extended the DB & DBEnv set_get_returns_none functionality to take a "level" instead of a boolean flag (see the sketch after this list). The boolean 0 and 1 values still have the same effect. A value of 2 extends the "return None instead of raising an exception" behaviour to the DBCursor set methods. This will become the default behaviour in pybsddb 4.2.
* Updated the documentation for set_get_returns_none. Regenerated the stale HTML docs from the text documentation.
* Fixed a typo in the DBCursor.join_item method that made it crash instead of returning a value. Obviously nobody uses it. Wrote a test case for join and join_item.
* Added the dbobj wrapper for the DBEnv set_timeout method.
* Updated README.txt
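A minimal sketch of the set_get_returns_none() levels from the 4.1.6 entry above; the database name is illustrative:

    from bsddb3 import db

    d = db.DB()
    d.set_get_returns_none(2)            # returns the previous level
    d.open("data.db", dbtype=db.DB_BTREE, flags=db.DB_CREATE)
    print(d.get(b"missing"))             # None instead of DBNotFoundError
    c = d.cursor()
    print(c.set(b"missing"))             # None too, thanks to level 2
    c.close(); d.close()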
4.1.5:
* Added the DBEnv.set_timeout method.

4.1.4:
* rebuilt the windows 4.1.3 package; the original package was corrupt due to bad RAM on my build host.

4.1.3 - 2003-02-02:
* code cleanup to use python 2.x features in .py files
* the standalone pybsddb distribution will install a module called bsddb3, while the module included with python >= 2.3 will be known as bsddb.

4.1.2 - 2003-01-17:
* Shared all .py and .c source with the Python project.
* Fixed DBTxn objects to raise an exception if they are used after the underlying DB_TXN handle becomes invalid (rather than potentially causing a segfault).
* Fixed the module to work when compiled against a python without thread support.
* Do not attempt to double-close DB cursors whose underlying DB has already been closed (fixes a segfault).
* Close DB objects when DB.open fails, to prevent an exception about databases still being open when calling DBEnv.close.

4.1.1 - 2002-12-20:
* Fixed a memory leak when raising exceptions from the database library. Debugged and fixed by Josh Hoyt. Thanks! (sourceforge patch 656517)

4.1.0 - 2002-12-13:
* Updated our version number to track the latest BerkeleyDB interface version that we support.
* Simplified the build and test process. Now you should just be able to say "python setup.py build" and "python setup.py install". Also added a nice test.py harness. Do "python test.py -h" for details.
* The windows binary is built against BerkeleyDB 4.1.24 with the current eight patches issued by Sleepycat applied.
* REMINDER: BerkeleyDB 4.1 requires code changes if you use database transactions. See the upgrade docs on http://www.sleepycat.com/.

3.4.3 - 2002-10-18:
* added support for BerkeleyDB 4.1: DB.open and DB.associate will now accept a txn keyword argument when using BerkeleyDB 4.1. The DBEnv.dbremove, DBEnv.dbrename, DBEnv.set_encrypt and DB.set_encrypt methods have been exposed for 4.1.

3.4.2 - 2002-08-14:
* dbtables.py: serious bug fix. The Select, Modify and Delete methods could all act upon rows that did not match all of the conditions. (bug #590449) A test case was added.
* dbutils.py: updated DeadlockWrap (see the sketch at the end of this ChangeLog)
* test_threads.py: fixed to use dbutils.DeadlockWrap to catch and avoid DBLockDeadlockError exceptions during simple threading tests.

3.4.1:
* fixed typo/cut-and-paste bugs in test_dbsimple.py and test_threads.py
* fixed a bug with cursors where calling DBCursor.close() would cause the object's destructor __del__() method to raise an exception when it was called by the gc.
* fixed a bug in associate callbacks that could cause a null pointer dereference when python threading had not yet been initialized.

3.4.0:
* many bugfixes; it's been a long while since a new package was created.
* ChangeLog started.
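A minimal sketch of the dbutils.DeadlockWrap helper mentioned in the 3.4.2 entry above: it retries a callable when DBLockDeadlockError is raised. The max_retries keyword is assumed from dbutils' signature, and "d" stands for any open DB handle:

    from bsddb3 import dbutils

    dbutils.DeadlockWrap(d.put, b"key", b"value", max_retries=12)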
bsddb3-6.1.0/Modules/_bsddb.c

/*----------------------------------------------------------------------
 Copyright (c) 1999-2001, Digital Creations, Fredericksburg, VA, USA
 and Andrew Kuchling. All rights reserved.

 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions are
 met:

   o Redistributions of source code must retain the above copyright
     notice, this list of conditions, and the disclaimer that follows.

   o Redistributions in binary form must reproduce the above copyright
     notice, this list of conditions, and the following disclaimer in
     the documentation and/or other materials provided with the
     distribution.

   o Neither the name of Digital Creations nor the names of its
     contributors may be used to endorse or promote products derived
     from this software without specific prior written permission.

 THIS SOFTWARE IS PROVIDED BY DIGITAL CREATIONS AND CONTRIBUTORS *AS
 IS* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
 TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
 PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL DIGITAL
 CREATIONS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
 INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
 BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
 OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
 AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY
 WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 POSSIBILITY OF SUCH DAMAGE.
------------------------------------------------------------------------*/

/*
 * Handwritten code to wrap version 3.x of the Berkeley DB library,
 * written to replace a SWIG-generated file. It has since been updated
 * to compile with Berkeley DB versions 3.2 through 4.2.
 *
 * This module was started by Andrew Kuchling to remove the dependency
 * on SWIG in a package by Gregory P. Smith, who based his work on a
 * similar package by Robin Dunn which wrapped Berkeley DB 2.7.x.
 *
 * Development of this module then returned full circle back to Robin Dunn,
 * who worked on behalf of Digital Creations to complete the wrapping of
 * the DB 3.x API and to build a solid unit test suite. Robin has
 * since gone on to other projects (wxPython).
 *
 * Gregory P. Smith was once again the maintainer.
 *
 * Since January 2008, the new maintainer is Jesus Cea.
 * Jesus Cea licenses this code to PSF under a Contributor Agreement.
 *
 * Use the pybsddb@jcea.es mailing list for all questions.
 * Things can change faster than the header of this file is updated.
 *
 * http://www.jcea.es/programacion/pybsddb.htm
 *
 * This module contains 8 types:
 *
 * DB            (Database)
 * DBCursor      (Database Cursor)
 * DBEnv         (database environment)
 * DBTxn         (An explicit database transaction)
 * DBLock        (A lock handle)
 * DBSequence    (Sequence)
 * DBSite        (Site)
 * DBLogCursor   (Log Cursor)
 *
 */

/* --------------------------------------------------------------------- */

/*
 * Portions of this module, associated unit tests and build scripts are the
 * result of a contract with The Written Word (http://thewrittenword.com/)
 * Many thanks go out to them for causing me to raise the bar on quality and
 * functionality, resulting in a better bsddb3 package for all of us to use.
 *
 * --Robin
 */

/* --------------------------------------------------------------------- */

#include <stddef.h>     /* for offsetof() */
#include <Python.h>

#define COMPILING_BSDDB_C
#include "bsddb.h"
#undef COMPILING_BSDDB_C

/* --------------------------------------------------------------------- */
/* Various macro definitions */

#if (PY_VERSION_HEX >= 0x03000000)
#define NUMBER_Check            PyLong_Check
#define NUMBER_AsLong           PyLong_AsLong
#define NUMBER_FromLong         PyLong_FromLong
#define NUMBER_FromUnsignedLong PyLong_FromUnsignedLong
#else
#define NUMBER_Check            PyInt_Check
#define NUMBER_AsLong           PyInt_AsLong
#define NUMBER_FromLong         PyInt_FromLong
#define NUMBER_FromUnsignedLong PyInt_FromSize_t
#endif

#ifdef WITH_THREAD

/* These are for when calling Python --> C */
#define MYDB_BEGIN_ALLOW_THREADS Py_BEGIN_ALLOW_THREADS;
#define MYDB_END_ALLOW_THREADS Py_END_ALLOW_THREADS;

/* and these are for calling C --> Python */
#define MYDB_BEGIN_BLOCK_THREADS \
    PyGILState_STATE __savestate = PyGILState_Ensure();
#define MYDB_END_BLOCK_THREADS \
    PyGILState_Release(__savestate);

#else
/* Compiled without threads - avoid all this cruft */
#define MYDB_BEGIN_ALLOW_THREADS
#define MYDB_END_ALLOW_THREADS
#define MYDB_BEGIN_BLOCK_THREADS
#define MYDB_END_BLOCK_THREADS

#endif

/* --------------------------------------------------------------------- */
/* Exceptions */

static PyObject* DBError;               /* Base class, all others derive from this */
static PyObject* DBCursorClosedError;   /* raised when trying to use a closed cursor object */
static PyObject* DBKeyEmptyError;       /* DB_KEYEMPTY: also derives from KeyError */
static PyObject* DBKeyExistError;       /* DB_KEYEXIST */
static PyObject* DBLockDeadlockError;   /* DB_LOCK_DEADLOCK */
static PyObject* DBLockNotGrantedError; /* DB_LOCK_NOTGRANTED */
static PyObject* DBNotFoundError;       /* DB_NOTFOUND: also derives from KeyError */
static PyObject* DBOldVersionError;     /* DB_OLD_VERSION */
static PyObject* DBRunRecoveryError;    /* DB_RUNRECOVERY */
static PyObject* DBVerifyBadError;      /* DB_VERIFY_BAD */
static PyObject* DBNoServerError;       /* DB_NOSERVER */
#if (DBVER < 52)
static PyObject* DBNoServerHomeError;   /* DB_NOSERVER_HOME */
static PyObject* DBNoServerIDError;     /* DB_NOSERVER_ID */
#endif
static PyObject* DBPageNotFoundError;   /* DB_PAGE_NOTFOUND */
static PyObject* DBSecondaryBadError;   /* DB_SECONDARY_BAD */

static PyObject* DBInvalidArgError;     /* EINVAL */
static PyObject* DBAccessError;         /* EACCES */
static PyObject* DBNoSpaceError;        /* ENOSPC */
static PyObject* DBNoMemoryError;       /* DB_BUFFER_SMALL */
static PyObject* DBAgainError;          /* EAGAIN */
static PyObject* DBBusyError;           /* EBUSY */
static PyObject* DBFileExistsError;     /* EEXIST */
static PyObject* DBNoSuchFileError;     /* ENOENT */
static PyObject* DBPermissionsError;    /* EPERM */

static PyObject* DBRepHandleDeadError;  /* DB_REP_HANDLE_DEAD */
static PyObject* DBRepLockoutError;     /* DB_REP_LOCKOUT */

static PyObject* DBRepLeaseExpiredError; /* DB_REP_LEASE_EXPIRED */

static PyObject* DBForeignConflictError; /* DB_FOREIGN_CONFLICT */

static PyObject* DBRepUnavailError;     /* DB_REP_UNAVAIL */

#if (DBVER < 48)
#define DB_GID_SIZE DB_XIDDATASIZE
#endif

/* --------------------------------------------------------------------- */
/* Structure definitions */

/* Defaults for moduleFlags in DBEnvObject and DBObject.
 */
#define DEFAULT_GET_RETURNS_NONE        1
#define DEFAULT_CURSOR_SET_RETURNS_NONE 1

/* See comment in Python 2.6 "object.h" */
#ifndef staticforward
#define staticforward static
#endif
#ifndef statichere
#define statichere static
#endif

staticforward PyTypeObject DB_Type, DBCursor_Type, DBEnv_Type, DBTxn_Type,
              DBLock_Type, DBLogCursor_Type;
staticforward PyTypeObject DBSequence_Type;
#if (DBVER >= 52)
staticforward PyTypeObject DBSite_Type;
#endif

#define DBObject_Check(v)           (Py_TYPE(v) == &DB_Type)
#define DBCursorObject_Check(v)     (Py_TYPE(v) == &DBCursor_Type)
#define DBLogCursorObject_Check(v)  (Py_TYPE(v) == &DBLogCursor_Type)
#define DBEnvObject_Check(v)        (Py_TYPE(v) == &DBEnv_Type)
#define DBTxnObject_Check(v)        (Py_TYPE(v) == &DBTxn_Type)
#define DBLockObject_Check(v)       (Py_TYPE(v) == &DBLock_Type)
#define DBSequenceObject_Check(v)   (Py_TYPE(v) == &DBSequence_Type)
#if (DBVER >= 52)
#define DBSiteObject_Check(v)       (Py_TYPE(v) == &DBSite_Type)
#endif

#define _DBC_close(dbc)             dbc->close(dbc)
#define _DBC_count(dbc,a,b)         dbc->count(dbc,a,b)
#define _DBC_del(dbc,a)             dbc->del(dbc,a)
#define _DBC_dup(dbc,a,b)           dbc->dup(dbc,a,b)
#define _DBC_get(dbc,a,b,c)         dbc->get(dbc,a,b,c)
#define _DBC_pget(dbc,a,b,c,d)      dbc->pget(dbc,a,b,c,d)
#define _DBC_put(dbc,a,b,c)         dbc->put(dbc,a,b,c)

/* --------------------------------------------------------------------- */
/* Utility macros and functions */

#define INSERT_IN_DOUBLE_LINKED_LIST(backlink,object) \
    { \
        object->sibling_next=backlink; \
        object->sibling_prev_p=&(backlink); \
        backlink=object; \
        if (object->sibling_next) { \
            object->sibling_next->sibling_prev_p=&(object->sibling_next); \
        } \
    }

#define EXTRACT_FROM_DOUBLE_LINKED_LIST(object) \
    { \
        if (object->sibling_next) { \
            object->sibling_next->sibling_prev_p=object->sibling_prev_p; \
        } \
        *(object->sibling_prev_p)=object->sibling_next; \
    }

#define EXTRACT_FROM_DOUBLE_LINKED_LIST_MAYBE_NULL(object) \
    { \
        if (object->sibling_next) { \
            object->sibling_next->sibling_prev_p=object->sibling_prev_p; \
        } \
        if (object->sibling_prev_p) { \
            *(object->sibling_prev_p)=object->sibling_next; \
        } \
    }

#define INSERT_IN_DOUBLE_LINKED_LIST_TXN(backlink,object) \
    { \
        object->sibling_next_txn=backlink; \
        object->sibling_prev_p_txn=&(backlink); \
        backlink=object; \
        if (object->sibling_next_txn) { \
            object->sibling_next_txn->sibling_prev_p_txn= \
                &(object->sibling_next_txn); \
        } \
    }

#define EXTRACT_FROM_DOUBLE_LINKED_LIST_TXN(object) \
    { \
        if (object->sibling_next_txn) { \
            object->sibling_next_txn->sibling_prev_p_txn= \
                object->sibling_prev_p_txn; \
        } \
        *(object->sibling_prev_p_txn)=object->sibling_next_txn; \
    }

#define RETURN_IF_ERR() \
    if (makeDBError(err)) { \
        return NULL; \
    }

#define RETURN_NONE() Py_INCREF(Py_None); return Py_None;

#define _CHECK_OBJECT_NOT_CLOSED(nonNull, pyErrObj, name) \
    if ((nonNull) == NULL) { \
        PyObject *errTuple = NULL; \
        errTuple = Py_BuildValue("(is)", 0, #name " object has been closed"); \
        if (errTuple) { \
            PyErr_SetObject((pyErrObj), errTuple); \
            Py_DECREF(errTuple); \
        } \
        return NULL; \
    }

#define CHECK_DB_NOT_CLOSED(dbobj) \
    _CHECK_OBJECT_NOT_CLOSED(dbobj->db, DBError, DB)
#define CHECK_ENV_NOT_CLOSED(env) \
    _CHECK_OBJECT_NOT_CLOSED(env->db_env, DBError, DBEnv)
#define CHECK_CURSOR_NOT_CLOSED(curs) \
    _CHECK_OBJECT_NOT_CLOSED(curs->dbc, DBCursorClosedError, DBCursor)
#define CHECK_LOGCURSOR_NOT_CLOSED(logcurs) \
    _CHECK_OBJECT_NOT_CLOSED(logcurs->logc, DBCursorClosedError, DBLogCursor)
#define CHECK_SEQUENCE_NOT_CLOSED(curs) \
    _CHECK_OBJECT_NOT_CLOSED(curs->sequence, DBError, DBSequence)
#if (DBVER >= 52)
#define CHECK_SITE_NOT_CLOSED(db_site) \
    _CHECK_OBJECT_NOT_CLOSED(db_site->site, DBError, DBSite)
#endif

#define CHECK_DBFLAG(mydb, flag)    (((mydb)->flags & (flag)) || \
                                     (((mydb)->myenvobj != NULL) && ((mydb)->myenvobj->flags & (flag))))

#define CLEAR_DBT(dbt)              (memset(&(dbt), 0, sizeof(dbt)))

#define FREE_DBT(dbt) if ((dbt.flags & (DB_DBT_MALLOC|DB_DBT_REALLOC)) && \
                          dbt.data != NULL) { free(dbt.data); dbt.data = NULL; }

static int makeDBError(int err);

/* Return the access method type of the DBObject */
static int _DB_get_type(DBObject* self)
{
    DBTYPE type;
    int err;

    err = self->db->get_type(self->db, &type);
    if (makeDBError(err)) {
        return -1;
    }
    return type;
}

/* Create a DBT structure (containing key and data values) from Python
   strings.  Returns 1 on success, 0 on an error. */
static int make_dbt(PyObject* obj, DBT* dbt)
{
    CLEAR_DBT(*dbt);
    if (obj == Py_None) {
        /* no need to do anything, the structure has already been zeroed */
    }
    else if (!PyArg_Parse(obj, "s#", &dbt->data, &dbt->size)) {
        PyErr_SetString(PyExc_TypeError,
#if (PY_VERSION_HEX < 0x03000000)
                        "Data values must be of type string or None.");
#else
                        "Data values must be of type bytes or None.");
#endif
        return 0;
    }
    return 1;
}

/* Recno and Queue DBs can have integer keys.  This function figures out
   what's been given, verifies that it's allowed, and then makes the DBT.

   Caller MUST call FREE_DBT(key) when done. */
static int
make_key_dbt(DBObject* self, PyObject* keyobj, DBT* key, int* pflags)
{
    db_recno_t recno;
    int type;

    CLEAR_DBT(*key);
    if (keyobj == Py_None) {
        type = _DB_get_type(self);
        if (type == -1)
            return 0;
        if (type == DB_RECNO || type == DB_QUEUE) {
            PyErr_SetString(
                PyExc_TypeError,
                "None keys not allowed for Recno and Queue DB's");
            return 0;
        }
        /* no need to do anything, the structure has already been zeroed */
    }
    else if (PyBytes_Check(keyobj)) {
        /* verify access method type */
        type = _DB_get_type(self);
        if (type == -1)
            return 0;
        if (type == DB_RECNO || type == DB_QUEUE) {
            PyErr_SetString(
                PyExc_TypeError,
#if (PY_VERSION_HEX < 0x03000000)
                "String keys not allowed for Recno and Queue DB's");
#else
                "Bytes keys not allowed for Recno and Queue DB's");
#endif
            return 0;
        }

        /*
         * NOTE(gps): I don't like doing a data copy here, it seems
         * wasteful.  But without a clean way to tell FREE_DBT if it
         * should free key->data or not we have to.  Other places in
         * the code check for DB_THREAD and forcibly set DBT_MALLOC
         * when we otherwise would leave flags 0 to indicate that.
         */
        key->data = malloc(PyBytes_GET_SIZE(keyobj));
        if (key->data == NULL) {
            PyErr_SetString(PyExc_MemoryError, "Key memory allocation failed");
            return 0;
        }
        memcpy(key->data, PyBytes_AS_STRING(keyobj),
               PyBytes_GET_SIZE(keyobj));
        key->flags = DB_DBT_REALLOC;
        key->size = PyBytes_GET_SIZE(keyobj);
    }
    else if (NUMBER_Check(keyobj)) {
        /* verify access method type */
        type = _DB_get_type(self);
        if (type == -1)
            return 0;
        if (type == DB_BTREE && pflags != NULL) {
            /* if BTREE then an Integer key is allowed with the
             * DB_SET_RECNO flag */
            *pflags |= DB_SET_RECNO;
        }
        else if (type != DB_RECNO && type != DB_QUEUE) {
            PyErr_SetString(
                PyExc_TypeError,
                "Integer keys only allowed for Recno and Queue DB's");
            return 0;
        }

        /* Make a key out of the requested recno, use allocated space so DB
         * will be able to realloc room for the real key if needed.
         */
        recno = NUMBER_AsLong(keyobj);
        key->data = malloc(sizeof(db_recno_t));
        if (key->data == NULL) {
            PyErr_SetString(PyExc_MemoryError, "Key memory allocation failed");
            return 0;
        }
        key->ulen = key->size = sizeof(db_recno_t);
        memcpy(key->data, &recno, sizeof(db_recno_t));
        key->flags = DB_DBT_REALLOC;
    }
    else {
        PyErr_Format(PyExc_TypeError,
#if (PY_VERSION_HEX < 0x03000000)
                     "String or Integer object expected for key, %s found",
#else
                     "Bytes or Integer object expected for key, %s found",
#endif
                     Py_TYPE(keyobj)->tp_name);
        return 0;
    }

    return 1;
}

/* Add partial record access to an existing DBT data struct.
   If dlen and doff are set, then the DB_DBT_PARTIAL flag will be set
   and the data storage/retrieval will be done using dlen and doff. */
static int add_partial_dbt(DBT* d, int dlen, int doff)
{
    /* if neither were set we do nothing (-1 is the default value) */
    if ((dlen == -1) && (doff == -1)) {
        return 1;
    }

    if ((dlen < 0) || (doff < 0)) {
        PyErr_SetString(PyExc_TypeError, "dlen and doff must both be >= 0");
        return 0;
    }

    d->flags = d->flags | DB_DBT_PARTIAL;
    d->dlen = (unsigned int) dlen;
    d->doff = (unsigned int) doff;
    return 1;
}

/* a safe strcpy() without the zeroing behaviour and semantics of strncpy. */
/* TODO: make this use the native libc strlcpy() when available (BSD)      */
unsigned int our_strlcpy(char* dest, const char* src, unsigned int n)
{
    unsigned int srclen, copylen;

    srclen = strlen(src);
    if (n <= 0)
        return srclen;
    copylen = (srclen > n-1) ? n-1 : srclen;
    /* populate dest[0] thru dest[copylen-1] */
    memcpy(dest, src, copylen);
    /* guarantee null termination */
    dest[copylen] = 0;

    return srclen;
}

/* Callback used to save away more information about errors from the DB
 * library. */
static char _db_errmsg[1024];
static void _db_errorCallback(const DB_ENV *db_env,
                              const char* prefix, const char* msg)
{
    our_strlcpy(_db_errmsg, msg, sizeof(_db_errmsg));
}

/*
** We need these functions because some results
** are undefined if the pointer is NULL. Some others
** give None instead of "".
**
** These functions are static and will be
** -I hope- inlined.
*/
static const char *DummyString = "This string is a simple placeholder";

static PyObject *Build_PyString(const char *p, int s)
{
    if (!p) {
        p = DummyString;
        assert(s == 0);
    }
    return PyBytes_FromStringAndSize(p, s);
}

static PyObject *BuildValue_S(const void *p, int s)
{
    if (!p) {
        p = DummyString;
        assert(s == 0);
    }
    return PyBytes_FromStringAndSize(p, s);
}

static PyObject *BuildValue_SS(const void *p1, int s1, const void *p2, int s2)
{
    PyObject *a, *b, *r;

    if (!p1) {
        p1 = DummyString;
        assert(s1 == 0);
    }
    if (!p2) {
        p2 = DummyString;
        assert(s2 == 0);
    }

    if (!(a = PyBytes_FromStringAndSize(p1, s1))) {
        return NULL;
    }
    if (!(b = PyBytes_FromStringAndSize(p2, s2))) {
        Py_DECREF(a);
        return NULL;
    }

    r = PyTuple_Pack(2, a, b);
    Py_DECREF(a);
    Py_DECREF(b);
    return r;
}

static PyObject *BuildValue_IS(int i, const void *p, int s)
{
    PyObject *a, *r;

    if (!p) {
        p = DummyString;
        assert(s == 0);
    }

    if (!(a = PyBytes_FromStringAndSize(p, s))) {
        return NULL;
    }

    r = Py_BuildValue("iO", i, a);
    Py_DECREF(a);
    return r;
}

static PyObject *BuildValue_LS(long l, const void *p, int s)
{
    PyObject *a, *r;

    if (!p) {
        p = DummyString;
        assert(s == 0);
    }

    if (!(a = PyBytes_FromStringAndSize(p, s))) {
        return NULL;
    }

    r = Py_BuildValue("lO", l, a);
    Py_DECREF(a);
    return r;
}

/* make a nice exception object to raise for errors. */
static int makeDBError(int err)
{
    char errTxt[2048];  /* really big, just in case...
                         */
    PyObject *errObj = NULL;
    PyObject *errTuple = NULL;
    int exceptionRaised = 0;
    unsigned int bytes_left;

    switch (err) {
        case 0:                     /* successful, no error */
            return 0;

        case DB_KEYEMPTY:           errObj = DBKeyEmptyError;       break;
        case DB_KEYEXIST:           errObj = DBKeyExistError;       break;
        case DB_LOCK_DEADLOCK:      errObj = DBLockDeadlockError;   break;
        case DB_LOCK_NOTGRANTED:    errObj = DBLockNotGrantedError; break;
        case DB_NOTFOUND:           errObj = DBNotFoundError;       break;
        case DB_OLD_VERSION:        errObj = DBOldVersionError;     break;
        case DB_RUNRECOVERY:        errObj = DBRunRecoveryError;    break;
        case DB_VERIFY_BAD:         errObj = DBVerifyBadError;      break;
        case DB_NOSERVER:           errObj = DBNoServerError;       break;
#if (DBVER < 52)
        case DB_NOSERVER_HOME:      errObj = DBNoServerHomeError;   break;
        case DB_NOSERVER_ID:        errObj = DBNoServerIDError;     break;
#endif
        case DB_PAGE_NOTFOUND:      errObj = DBPageNotFoundError;   break;
        case DB_SECONDARY_BAD:      errObj = DBSecondaryBadError;   break;
        case DB_BUFFER_SMALL:       errObj = DBNoMemoryError;       break;

        case ENOMEM:  errObj = PyExc_MemoryError;   break;
        case EINVAL:  errObj = DBInvalidArgError;   break;
        case EACCES:  errObj = DBAccessError;       break;
        case ENOSPC:  errObj = DBNoSpaceError;      break;
        case EAGAIN:  errObj = DBAgainError;        break;
        case EBUSY :  errObj = DBBusyError;         break;
        case EEXIST:  errObj = DBFileExistsError;   break;
        case ENOENT:  errObj = DBNoSuchFileError;   break;
        case EPERM :  errObj = DBPermissionsError;  break;

        case DB_REP_HANDLE_DEAD : errObj = DBRepHandleDeadError; break;
        case DB_REP_LOCKOUT : errObj = DBRepLockoutError; break;

        case DB_REP_LEASE_EXPIRED : errObj = DBRepLeaseExpiredError; break;

        case DB_FOREIGN_CONFLICT : errObj = DBForeignConflictError; break;

        case DB_REP_UNAVAIL : errObj = DBRepUnavailError; break;

        default:      errObj = DBError;             break;
    }

    if (errObj != NULL) {
        bytes_left = our_strlcpy(errTxt, db_strerror(err), sizeof(errTxt));
        /* Ensure that bytes_left never goes negative */
        if (_db_errmsg[0] && bytes_left < (sizeof(errTxt) - 4)) {
            bytes_left = sizeof(errTxt) - bytes_left - 4 - 1;
            assert(bytes_left >= 0);
            strcat(errTxt, " -- ");
            strncat(errTxt, _db_errmsg, bytes_left);
        }
        _db_errmsg[0] = 0;

        errTuple = Py_BuildValue("(is)", err, errTxt);
        if (errTuple == NULL) {
            Py_DECREF(errObj);
            return !0;
        }
        PyErr_SetObject(errObj, errTuple);
        Py_DECREF(errTuple);
    }

    return ((errObj != NULL) || exceptionRaised);
}

/* set a type exception */
static void makeTypeError(char* expected, PyObject* found)
{
    PyErr_Format(PyExc_TypeError, "Expected %s argument, %s found.",
                 expected, Py_TYPE(found)->tp_name);
}

/* verify that an obj is either None or a DBTxn, and set the txn pointer */
static int checkTxnObj(PyObject* txnobj, DB_TXN** txn)
{
    if (txnobj == Py_None || txnobj == NULL) {
        *txn = NULL;
        return 1;
    }
    if (DBTxnObject_Check(txnobj)) {
        *txn = ((DBTxnObject*)txnobj)->txn;
        return 1;
    }
    else
        makeTypeError("DBTxn", txnobj);
    return 0;
}

/* Delete a key from a database
   Returns 0 on success, -1 on an error. */
static int _DB_delete(DBObject* self, DB_TXN *txn, DBT *key, int flags)
{
    int err;

    MYDB_BEGIN_ALLOW_THREADS;
    err = self->db->del(self->db, txn, key, 0);
    MYDB_END_ALLOW_THREADS;
    if (makeDBError(err)) {
        return -1;
    }
    return 0;
}

/* Store a key into a database
   Returns 0 on success, -1 on an error.
*/ static int _DB_put(DBObject* self, DB_TXN *txn, DBT *key, DBT *data, int flags) { int err; MYDB_BEGIN_ALLOW_THREADS; err = self->db->put(self->db, txn, key, data, flags); MYDB_END_ALLOW_THREADS; if (makeDBError(err)) { return -1; } return 0; } /* Get a key/data pair from a cursor */ static PyObject* _DBCursor_get(DBCursorObject* self, int extra_flags, PyObject *args, PyObject *kwargs, char *format) { int err; PyObject* retval = NULL; DBT key, data; int dlen = -1; int doff = -1; int flags = 0; static char* kwnames[] = { "flags", "dlen", "doff", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, format, kwnames, &flags, &dlen, &doff)) return NULL; CHECK_CURSOR_NOT_CLOSED(self); flags |= extra_flags; CLEAR_DBT(key); CLEAR_DBT(data); if (!add_partial_dbt(&data, dlen, doff)) return NULL; MYDB_BEGIN_ALLOW_THREADS; err = _DBC_get(self->dbc, &key, &data, flags); MYDB_END_ALLOW_THREADS; if ((err == DB_NOTFOUND || err == DB_KEYEMPTY) && self->mydb->moduleFlags.getReturnsNone) { Py_INCREF(Py_None); retval = Py_None; } else if (makeDBError(err)) { retval = NULL; } else { /* otherwise, success! */ /* if Recno or Queue, return the key as an Int */ switch (_DB_get_type(self->mydb)) { case -1: retval = NULL; break; case DB_RECNO: case DB_QUEUE: retval = BuildValue_IS(*((db_recno_t*)key.data), data.data, data.size); break; case DB_HASH: case DB_BTREE: default: retval = BuildValue_SS(key.data, key.size, data.data, data.size); break; } } return retval; } /* add an integer to a dictionary using the given name as a key */ static void _addIntToDict(PyObject* dict, char *name, int value) { PyObject* v = NUMBER_FromLong((long) value); if (!v || PyDict_SetItemString(dict, name, v)) PyErr_Clear(); Py_XDECREF(v); } #if (DBVER >= 60) /* add an unsigned integer to a dictionary using the given name as a key */ static void _addUnsignedIntToDict(PyObject* dict, char *name, unsigned int value) { PyObject* v = NUMBER_FromUnsignedLong((unsigned long) value); if (!v || PyDict_SetItemString(dict, name, v)) PyErr_Clear(); Py_XDECREF(v); } #endif /* The same, when the value is a time_t */ static void _addTimeTToDict(PyObject* dict, char *name, time_t value) { PyObject* v; /* if the value fits in regular int, use that. 
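   Otherwise fall back to PyLong_FromLongLong(), since time_t may be wider
   than a long (e.g. 64-bit time_t with a 32-bit long on LLP64 platforms).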
*/ #ifdef PY_LONG_LONG if (sizeof(time_t) > sizeof(long)) v = PyLong_FromLongLong((PY_LONG_LONG) value); else #endif v = NUMBER_FromLong((long) value); if (!v || PyDict_SetItemString(dict, name, v)) PyErr_Clear(); Py_XDECREF(v); } /* add an db_seq_t to a dictionary using the given name as a key */ static void _addDb_seq_tToDict(PyObject* dict, char *name, db_seq_t value) { PyObject* v = PyLong_FromLongLong(value); if (!v || PyDict_SetItemString(dict, name, v)) PyErr_Clear(); Py_XDECREF(v); } static void _addDB_lsnToDict(PyObject* dict, char *name, DB_LSN value) { PyObject *v = Py_BuildValue("(ll)",value.file,value.offset); if (!v || PyDict_SetItemString(dict, name, v)) PyErr_Clear(); Py_XDECREF(v); } /* --------------------------------------------------------------------- */ /* Allocators and deallocators */ static DBObject* newDBObject(DBEnvObject* arg, int flags) { DBObject* self; DB_ENV* db_env = NULL; int err; self = PyObject_New(DBObject, &DB_Type); if (self == NULL) return NULL; self->flags = 0; self->setflags = 0; self->myenvobj = NULL; self->db = NULL; self->children_cursors = NULL; self->children_sequences = NULL; self->associateCallback = NULL; self->btCompareCallback = NULL; self->dupCompareCallback = NULL; self->primaryDBType = 0; Py_INCREF(Py_None); self->private_obj = Py_None; self->in_weakreflist = NULL; /* keep a reference to our python DBEnv object */ if (arg) { Py_INCREF(arg); self->myenvobj = arg; db_env = arg->db_env; INSERT_IN_DOUBLE_LINKED_LIST(self->myenvobj->children_dbs,self); } else { self->sibling_prev_p=NULL; self->sibling_next=NULL; } self->txn=NULL; self->sibling_prev_p_txn=NULL; self->sibling_next_txn=NULL; if (self->myenvobj) self->moduleFlags = self->myenvobj->moduleFlags; else self->moduleFlags.getReturnsNone = DEFAULT_GET_RETURNS_NONE; self->moduleFlags.cursorSetReturnsNone = DEFAULT_CURSOR_SET_RETURNS_NONE; MYDB_BEGIN_ALLOW_THREADS; err = db_create(&self->db, db_env, flags); if (self->db != NULL) { self->db->set_errcall(self->db, _db_errorCallback); self->db->app_private = (void*)self; } MYDB_END_ALLOW_THREADS; /* TODO add a weakref(self) to the self->myenvobj->open_child_weakrefs * list so that a DBEnv can refuse to close without aborting any open * DBTxns and closing any open DBs first. */ if (makeDBError(err)) { if (self->myenvobj) { Py_DECREF(self->myenvobj); self->myenvobj = NULL; } Py_DECREF(self); self = NULL; } return self; } /* Forward declaration */ static PyObject *DB_close_internal(DBObject* self, int flags, int do_not_close); static void DB_dealloc(DBObject* self) { PyObject *dummy; if (self->db != NULL) { dummy=DB_close_internal(self, 0, 0); /* ** Raising exceptions while doing ** garbage collection is a fatal error. 
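** The implicit close() can run during interpreter shutdown, so any error
** it reports is deliberately swallowed with PyErr_Clear() below.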
*/ if (dummy) Py_DECREF(dummy); else PyErr_Clear(); } if (self->in_weakreflist != NULL) { PyObject_ClearWeakRefs((PyObject *) self); } if (self->myenvobj) { Py_DECREF(self->myenvobj); self->myenvobj = NULL; } if (self->associateCallback != NULL) { Py_DECREF(self->associateCallback); self->associateCallback = NULL; } if (self->btCompareCallback != NULL) { Py_DECREF(self->btCompareCallback); self->btCompareCallback = NULL; } if (self->dupCompareCallback != NULL) { Py_DECREF(self->dupCompareCallback); self->dupCompareCallback = NULL; } Py_DECREF(self->private_obj); PyObject_Del(self); } static DBCursorObject* newDBCursorObject(DBC* dbc, DBTxnObject *txn, DBObject* db) { DBCursorObject* self = PyObject_New(DBCursorObject, &DBCursor_Type); if (self == NULL) return NULL; self->dbc = dbc; self->mydb = db; INSERT_IN_DOUBLE_LINKED_LIST(self->mydb->children_cursors,self); if (txn && ((PyObject *)txn!=Py_None)) { INSERT_IN_DOUBLE_LINKED_LIST_TXN(txn->children_cursors,self); self->txn=txn; } else { self->txn=NULL; } self->in_weakreflist = NULL; Py_INCREF(self->mydb); return self; } /* Forward declaration */ static PyObject *DBC_close_internal(DBCursorObject* self); static void DBCursor_dealloc(DBCursorObject* self) { PyObject *dummy; if (self->dbc != NULL) { dummy=DBC_close_internal(self); /* ** Raising exceptions while doing ** garbage collection is a fatal error. */ if (dummy) Py_DECREF(dummy); else PyErr_Clear(); } if (self->in_weakreflist != NULL) { PyObject_ClearWeakRefs((PyObject *) self); } Py_DECREF(self->mydb); PyObject_Del(self); } static DBLogCursorObject* newDBLogCursorObject(DB_LOGC* dblogc, DBEnvObject* env) { DBLogCursorObject* self; self = PyObject_New(DBLogCursorObject, &DBLogCursor_Type); if (self == NULL) return NULL; self->logc = dblogc; self->env = env; INSERT_IN_DOUBLE_LINKED_LIST(self->env->children_logcursors, self); self->in_weakreflist = NULL; Py_INCREF(self->env); return self; } /* Forward declaration */ static PyObject *DBLogCursor_close_internal(DBLogCursorObject* self); static void DBLogCursor_dealloc(DBLogCursorObject* self) { PyObject *dummy; if (self->logc != NULL) { dummy = DBLogCursor_close_internal(self); /* ** Raising exceptions while doing ** garbage collection is a fatal error. 
*/ if (dummy) Py_DECREF(dummy); else PyErr_Clear(); } if (self->in_weakreflist != NULL) { PyObject_ClearWeakRefs((PyObject *) self); } Py_DECREF(self->env); PyObject_Del(self); } static DBEnvObject* newDBEnvObject(int flags) { int err; DBEnvObject* self = PyObject_New(DBEnvObject, &DBEnv_Type); if (self == NULL) return NULL; self->db_env = NULL; self->closed = 1; self->flags = flags; self->moduleFlags.getReturnsNone = DEFAULT_GET_RETURNS_NONE; self->moduleFlags.cursorSetReturnsNone = DEFAULT_CURSOR_SET_RETURNS_NONE; self->children_dbs = NULL; self->children_txns = NULL; self->children_logcursors = NULL ; #if (DBVER >= 52) self->children_sites = NULL; #endif Py_INCREF(Py_None); self->private_obj = Py_None; Py_INCREF(Py_None); self->rep_transport = Py_None; self->in_weakreflist = NULL; self->event_notifyCallback = NULL; MYDB_BEGIN_ALLOW_THREADS; err = db_env_create(&self->db_env, flags); MYDB_END_ALLOW_THREADS; if (makeDBError(err)) { Py_DECREF(self); self = NULL; } else { self->db_env->set_errcall(self->db_env, _db_errorCallback); self->db_env->app_private = self; } return self; } /* Forward declaration */ static PyObject *DBEnv_close_internal(DBEnvObject* self, int flags); static void DBEnv_dealloc(DBEnvObject* self) { PyObject *dummy; if (self->db_env) { dummy=DBEnv_close_internal(self, 0); /* ** Raising exceptions while doing ** garbage collection is a fatal error. */ if (dummy) Py_DECREF(dummy); else PyErr_Clear(); } Py_XDECREF(self->event_notifyCallback); self->event_notifyCallback = NULL; if (self->in_weakreflist != NULL) { PyObject_ClearWeakRefs((PyObject *) self); } Py_DECREF(self->private_obj); Py_DECREF(self->rep_transport); PyObject_Del(self); } static DBTxnObject* newDBTxnObject(DBEnvObject* myenv, DBTxnObject *parent, DB_TXN *txn, int flags) { int err; DB_TXN *parent_txn = NULL; DBTxnObject* self = PyObject_New(DBTxnObject, &DBTxn_Type); if (self == NULL) return NULL; self->in_weakreflist = NULL; self->children_txns = NULL; self->children_dbs = NULL; self->children_cursors = NULL; self->children_sequences = NULL; self->flag_prepare = 0; self->parent_txn = NULL; self->env = NULL; /* We initialize just in case "txn_begin" fails */ self->txn = NULL; if (parent && ((PyObject *)parent!=Py_None)) { parent_txn = parent->txn; } if (txn) { self->txn = txn; } else { MYDB_BEGIN_ALLOW_THREADS; err = myenv->db_env->txn_begin(myenv->db_env, parent_txn, &(self->txn), flags); MYDB_END_ALLOW_THREADS; if (makeDBError(err)) { /* Free object half initialized */ Py_DECREF(self); return NULL; } } /* Can't use 'parent' because could be 'parent==Py_None' */ if (parent_txn) { self->parent_txn = parent; Py_INCREF(parent); self->env = NULL; INSERT_IN_DOUBLE_LINKED_LIST(parent->children_txns, self); } else { self->parent_txn = NULL; Py_INCREF(myenv); self->env = myenv; INSERT_IN_DOUBLE_LINKED_LIST(myenv->children_txns, self); } return self; } /* Forward declaration */ static PyObject * DBTxn_abort_discard_internal(DBTxnObject* self, int discard); static void DBTxn_dealloc(DBTxnObject* self) { PyObject *dummy; if (self->txn) { int flag_prepare = self->flag_prepare; dummy=DBTxn_abort_discard_internal(self, 0); /* ** Raising exceptions while doing ** garbage collection is a fatal error. */ if (dummy) Py_DECREF(dummy); else PyErr_Clear(); if (!flag_prepare) { PyErr_Warn(PyExc_RuntimeWarning, "DBTxn aborted in destructor. 
No prior commit() or abort()."); } } if (self->in_weakreflist != NULL) { PyObject_ClearWeakRefs((PyObject *) self); } if (self->env) { Py_DECREF(self->env); } else { /* ** We can have "self->env==NULL" and "self->parent_txn==NULL" ** if something happens when creating the transaction object ** and we abort the object while half done. */ Py_XDECREF(self->parent_txn); } PyObject_Del(self); } static DBLockObject* newDBLockObject(DBEnvObject* myenv, u_int32_t locker, DBT* obj, db_lockmode_t lock_mode, int flags) { int err; DBLockObject* self = PyObject_New(DBLockObject, &DBLock_Type); if (self == NULL) return NULL; self->in_weakreflist = NULL; self->lock_initialized = 0; /* Just in case the call fails */ MYDB_BEGIN_ALLOW_THREADS; err = myenv->db_env->lock_get(myenv->db_env, locker, flags, obj, lock_mode, &self->lock); MYDB_END_ALLOW_THREADS; if (makeDBError(err)) { Py_DECREF(self); self = NULL; } else { self->lock_initialized = 1; } return self; } static void DBLock_dealloc(DBLockObject* self) { if (self->in_weakreflist != NULL) { PyObject_ClearWeakRefs((PyObject *) self); } /* TODO: is this lock held? should we release it? */ /* CAUTION: The lock can be not initialized if the creation has failed */ PyObject_Del(self); } static DBSequenceObject* newDBSequenceObject(DBObject* mydb, int flags) { int err; DBSequenceObject* self = PyObject_New(DBSequenceObject, &DBSequence_Type); if (self == NULL) return NULL; Py_INCREF(mydb); self->mydb = mydb; INSERT_IN_DOUBLE_LINKED_LIST(self->mydb->children_sequences,self); self->txn = NULL; self->in_weakreflist = NULL; self->sequence = NULL; /* Just in case the call fails */ MYDB_BEGIN_ALLOW_THREADS; err = db_sequence_create(&self->sequence, self->mydb->db, flags); MYDB_END_ALLOW_THREADS; if (makeDBError(err)) { Py_DECREF(self); self = NULL; } return self; } /* Forward declaration */ static PyObject *DBSequence_close_internal(DBSequenceObject* self, int flags, int do_not_close); static void DBSequence_dealloc(DBSequenceObject* self) { PyObject *dummy; if (self->sequence != NULL) { dummy=DBSequence_close_internal(self,0,0); /* ** Raising exceptions while doing ** garbage collection is a fatal error. */ if (dummy) Py_DECREF(dummy); else PyErr_Clear(); } if (self->in_weakreflist != NULL) { PyObject_ClearWeakRefs((PyObject *) self); } Py_DECREF(self->mydb); PyObject_Del(self); } #if (DBVER >= 52) static DBSiteObject* newDBSiteObject(DB_SITE* sitep, DBEnvObject* env) { DBSiteObject* self; self = PyObject_New(DBSiteObject, &DBSite_Type); if (self == NULL) return NULL; self->site = sitep; self->env = env; INSERT_IN_DOUBLE_LINKED_LIST(self->env->children_sites, self); self->in_weakreflist = NULL; Py_INCREF(self->env); return self; } /* Forward declaration */ static PyObject *DBSite_close_internal(DBSiteObject* self); static void DBSite_dealloc(DBSiteObject* self) { PyObject *dummy; if (self->site != NULL) { dummy = DBSite_close_internal(self); /* ** Raising exceptions while doing ** garbage collection is a fatal error. 
*/ if (dummy) Py_DECREF(dummy); else PyErr_Clear(); } if (self->in_weakreflist != NULL) { PyObject_ClearWeakRefs((PyObject *) self); } Py_DECREF(self->env); PyObject_Del(self); } #endif /* --------------------------------------------------------------------- */ /* DB methods */ static PyObject* DB_append(DBObject* self, PyObject* args, PyObject* kwargs) { PyObject* txnobj = NULL; PyObject* dataobj; db_recno_t recno; DBT key, data; DB_TXN *txn = NULL; static char* kwnames[] = { "data", "txn", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|O:append", kwnames, &dataobj, &txnobj)) return NULL; CHECK_DB_NOT_CLOSED(self); /* make a dummy key out of a recno */ recno = 0; CLEAR_DBT(key); key.data = &recno; key.size = sizeof(recno); key.ulen = key.size; key.flags = DB_DBT_USERMEM; if (!make_dbt(dataobj, &data)) return NULL; if (!checkTxnObj(txnobj, &txn)) return NULL; if (-1 == _DB_put(self, txn, &key, &data, DB_APPEND)) return NULL; return NUMBER_FromLong(recno); } static int _db_associateCallback(DB* db, const DBT* priKey, const DBT* priData, DBT* secKey) { int retval = DB_DONOTINDEX; DBObject* secondaryDB = (DBObject*)db->app_private; PyObject* callback = secondaryDB->associateCallback; int type = secondaryDB->primaryDBType; PyObject* args; PyObject* result = NULL; if (callback != NULL) { MYDB_BEGIN_BLOCK_THREADS; if (type == DB_RECNO || type == DB_QUEUE) args = BuildValue_LS(*((db_recno_t*)priKey->data), priData->data, priData->size); else args = BuildValue_SS(priKey->data, priKey->size, priData->data, priData->size); if (args != NULL) { result = PyEval_CallObject(callback, args); } if (args == NULL || result == NULL) { PyErr_Print(); } else if (result == Py_None) { retval = DB_DONOTINDEX; } else if (NUMBER_Check(result)) { retval = NUMBER_AsLong(result); } else if (PyBytes_Check(result)) { char* data; Py_ssize_t size; CLEAR_DBT(*secKey); PyBytes_AsStringAndSize(result, &data, &size); secKey->flags = DB_DBT_APPMALLOC; /* DB will free */ secKey->data = malloc(size); /* TODO, check this */ if (secKey->data) { memcpy(secKey->data, data, size); secKey->size = size; retval = 0; } else { PyErr_SetString(PyExc_MemoryError, "malloc failed in _db_associateCallback"); PyErr_Print(); } } else if (PyList_Check(result)) { char* data; Py_ssize_t size; int i, listlen; DBT* dbts; listlen = PyList_Size(result); dbts = (DBT *)malloc(sizeof(DBT) * listlen); for (i=0; i<listlen; i++) { if (!PyBytes_Check(PyList_GetItem(result, i))) { PyErr_SetString(PyExc_TypeError, #if (PY_VERSION_HEX < 0x03000000) "The list returned by DB->associate callback should be a list of strings."); #else "The list returned by DB->associate callback should be a list of bytes."); #endif PyErr_Print(); } PyBytes_AsStringAndSize( PyList_GetItem(result, i), &data, &size); CLEAR_DBT(dbts[i]); dbts[i].data = malloc(size); /* TODO, check this */ if (dbts[i].data) { memcpy(dbts[i].data, data, size); dbts[i].size = size; dbts[i].ulen = dbts[i].size; dbts[i].flags = DB_DBT_APPMALLOC; /* DB will free */ } else { PyErr_SetString(PyExc_MemoryError, "malloc failed in _db_associateCallback (list)"); PyErr_Print(); } } CLEAR_DBT(*secKey); secKey->data = dbts; secKey->size = listlen; secKey->flags = DB_DBT_APPMALLOC | DB_DBT_MULTIPLE; retval = 0; } else { PyErr_SetString( PyExc_TypeError, #if (PY_VERSION_HEX < 0x03000000) "DB associate callback should return DB_DONOTINDEX/string/list of strings."); #else "DB associate callback should return DB_DONOTINDEX/bytes/list of bytes."); #endif PyErr_Print(); } Py_XDECREF(args); Py_XDECREF(result); MYDB_END_BLOCK_THREADS; } return retval; } static PyObject* DB_associate(DBObject* self, PyObject* args, PyObject* kwargs) { int err, flags=0; DBObject*
secondaryDB; PyObject* callback; PyObject *txnobj = NULL; DB_TXN *txn = NULL; static char* kwnames[] = {"secondaryDB", "callback", "flags", "txn", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OO|iO:associate", kwnames, &secondaryDB, &callback, &flags, &txnobj)) { return NULL; } if (!checkTxnObj(txnobj, &txn)) return NULL; CHECK_DB_NOT_CLOSED(self); if (!DBObject_Check(secondaryDB)) { makeTypeError("DB", (PyObject*)secondaryDB); return NULL; } CHECK_DB_NOT_CLOSED(secondaryDB); if (callback == Py_None) { callback = NULL; } else if (!PyCallable_Check(callback)) { makeTypeError("Callable", callback); return NULL; } /* Save a reference to the callback in the secondary DB. */ Py_XDECREF(secondaryDB->associateCallback); Py_XINCREF(callback); secondaryDB->associateCallback = callback; secondaryDB->primaryDBType = _DB_get_type(self); /* PyEval_InitThreads is called here due to a quirk in python 1.5 * - 2.2.1 (at least) according to Russell Williamson: * The global interpreter lock is not initialized until the first * thread is created using thread.start_new_thread() or fork() is * called. That would cause the ALLOW_THREADS here to segfault due * to a null pointer reference if no threads or child processes * have been created. This works around that and is a no-op if * threads have already been initialized. * (see pybsddb-users mailing list post on 2002-08-07) */ #ifdef WITH_THREAD PyEval_InitThreads(); #endif MYDB_BEGIN_ALLOW_THREADS; err = self->db->associate(self->db, txn, secondaryDB->db, _db_associateCallback, flags); MYDB_END_ALLOW_THREADS; if (err) { Py_XDECREF(secondaryDB->associateCallback); secondaryDB->associateCallback = NULL; secondaryDB->primaryDBType = 0; } RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_close_internal(DBObject* self, int flags, int do_not_close) { PyObject *dummy; int err = 0; if (self->db != NULL) { /* Can be NULL if db is not in an environment */ EXTRACT_FROM_DOUBLE_LINKED_LIST_MAYBE_NULL(self); if (self->txn) { EXTRACT_FROM_DOUBLE_LINKED_LIST_TXN(self); self->txn=NULL; } while(self->children_cursors) { dummy=DBC_close_internal(self->children_cursors); Py_XDECREF(dummy); } while(self->children_sequences) { dummy=DBSequence_close_internal(self->children_sequences,0,0); Py_XDECREF(dummy); } /* ** "do_not_close" is used to dispose of all related objects in the ** tree, without actually releasing the "root" object. ** This is done, for example, because function calls like ** "DB.verify()" implicitly close the underlying handle. So ** the handle doesn't need to be closed, but related objects ** must be cleaned up.
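** DB_verify() below relies on exactly this: it calls
** DB_close_internal(self, 0, 1) and then lets db->verify() destroy the
** handle.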
*/ if (!do_not_close) { MYDB_BEGIN_ALLOW_THREADS; err = self->db->close(self->db, flags); MYDB_END_ALLOW_THREADS; self->db = NULL; } RETURN_IF_ERR(); } RETURN_NONE(); } static PyObject* DB_close(DBObject* self, PyObject* args) { int flags=0; if (!PyArg_ParseTuple(args,"|i:close", &flags)) return NULL; return DB_close_internal(self, flags, 0); } static PyObject* _DB_consume(DBObject* self, PyObject* args, PyObject* kwargs, int consume_flag) { int err, flags=0, type; PyObject* txnobj = NULL; PyObject* retval = NULL; DBT key, data; DB_TXN *txn = NULL; static char* kwnames[] = { "txn", "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|Oi:consume", kwnames, &txnobj, &flags)) return NULL; CHECK_DB_NOT_CLOSED(self); type = _DB_get_type(self); if (type == -1) return NULL; if (type != DB_QUEUE) { PyErr_SetString(PyExc_TypeError, "Consume methods only allowed for Queue DB's"); return NULL; } if (!checkTxnObj(txnobj, &txn)) return NULL; CLEAR_DBT(key); CLEAR_DBT(data); if (CHECK_DBFLAG(self, DB_THREAD)) { /* Tell Berkeley DB to malloc the return value (thread safe) */ data.flags = DB_DBT_MALLOC; key.flags = DB_DBT_MALLOC; } MYDB_BEGIN_ALLOW_THREADS; err = self->db->get(self->db, txn, &key, &data, flags|consume_flag); MYDB_END_ALLOW_THREADS; if ((err == DB_NOTFOUND || err == DB_KEYEMPTY) && self->moduleFlags.getReturnsNone) { err = 0; Py_INCREF(Py_None); retval = Py_None; } else if (!err) { retval = BuildValue_SS(key.data, key.size, data.data, data.size); FREE_DBT(key); FREE_DBT(data); } RETURN_IF_ERR(); return retval; } static PyObject* DB_consume(DBObject* self, PyObject* args, PyObject* kwargs, int consume_flag) { return _DB_consume(self, args, kwargs, DB_CONSUME); } static PyObject* DB_consume_wait(DBObject* self, PyObject* args, PyObject* kwargs, int consume_flag) { return _DB_consume(self, args, kwargs, DB_CONSUME_WAIT); } static PyObject* DB_cursor(DBObject* self, PyObject* args, PyObject* kwargs) { int err, flags=0; DBC* dbc; PyObject* txnobj = NULL; DB_TXN *txn = NULL; static char* kwnames[] = { "txn", "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|Oi:cursor", kwnames, &txnobj, &flags)) return NULL; CHECK_DB_NOT_CLOSED(self); if (!checkTxnObj(txnobj, &txn)) return NULL; MYDB_BEGIN_ALLOW_THREADS; err = self->db->cursor(self->db, txn, &dbc, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return (PyObject*) newDBCursorObject(dbc, (DBTxnObject *)txnobj, self); } static PyObject* DB_delete(DBObject* self, PyObject* args, PyObject* kwargs) { PyObject* txnobj = NULL; int flags = 0; PyObject* keyobj; DBT key; DB_TXN *txn = NULL; static char* kwnames[] = { "key", "txn", "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|Oi:delete", kwnames, &keyobj, &txnobj, &flags)) return NULL; CHECK_DB_NOT_CLOSED(self); if (!make_key_dbt(self, keyobj, &key, NULL)) return NULL; if (!checkTxnObj(txnobj, &txn)) { FREE_DBT(key); return NULL; } if (-1 == _DB_delete(self, txn, &key, 0)) { FREE_DBT(key); return NULL; } FREE_DBT(key); RETURN_NONE(); } static PyObject* DB_compact(DBObject* self, PyObject* args, PyObject* kwargs) { PyObject* txnobj = NULL; PyObject *startobj = NULL, *stopobj = NULL; int flags = 0; DB_TXN *txn = NULL; DBT *start_p = NULL, *stop_p = NULL; DBT start, stop; int err; DB_COMPACT c_data = { 0 }; static char* kwnames[] = { "txn", "start", "stop", "flags", "compact_fillpercent", "compact_pages", "compact_timeout", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|OOOiiiI:compact", kwnames, &txnobj, &startobj, &stopobj, &flags, 
&c_data.compact_fillpercent, &c_data.compact_pages, &c_data.compact_timeout)) return NULL; CHECK_DB_NOT_CLOSED(self); if (!checkTxnObj(txnobj, &txn)) { return NULL; } if (startobj && make_key_dbt(self, startobj, &start, NULL)) { start_p = &start; } if (stopobj && make_key_dbt(self, stopobj, &stop, NULL)) { stop_p = &stop; } MYDB_BEGIN_ALLOW_THREADS; err = self->db->compact(self->db, txn, start_p, stop_p, &c_data, flags, NULL); MYDB_END_ALLOW_THREADS; if (startobj) FREE_DBT(start); if (stopobj) FREE_DBT(stop); RETURN_IF_ERR(); return PyLong_FromUnsignedLong(c_data.compact_pages_truncated); } static PyObject* DB_fd(DBObject* self) { int err, the_fd; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->fd(self->db, &the_fd); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(the_fd); } static PyObject* DB_exists(DBObject* self, PyObject* args, PyObject* kwargs) { int err, flags=0; PyObject* txnobj = NULL; PyObject* keyobj; DBT key; DB_TXN *txn; static char* kwnames[] = {"key", "txn", "flags", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|Oi:exists", kwnames, &keyobj, &txnobj, &flags)) return NULL; CHECK_DB_NOT_CLOSED(self); if (!make_key_dbt(self, keyobj, &key, NULL)) return NULL; if (!checkTxnObj(txnobj, &txn)) { FREE_DBT(key); return NULL; } MYDB_BEGIN_ALLOW_THREADS; err = self->db->exists(self->db, txn, &key, flags); MYDB_END_ALLOW_THREADS; FREE_DBT(key); if (!err) { Py_INCREF(Py_True); return Py_True; } if ((err == DB_NOTFOUND || err == DB_KEYEMPTY)) { Py_INCREF(Py_False); return Py_False; } /* ** If we reach there, there was an error. The ** "return" should be unreachable. */ RETURN_IF_ERR(); assert(0); /* This code SHOULD be unreachable */ return NULL; } static PyObject* DB_get(DBObject* self, PyObject* args, PyObject* kwargs) { int err, flags=0; PyObject* txnobj = NULL; PyObject* keyobj; PyObject* dfltobj = NULL; PyObject* retval = NULL; int dlen = -1; int doff = -1; DBT key, data; DB_TXN *txn = NULL; static char* kwnames[] = {"key", "default", "txn", "flags", "dlen", "doff", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|OOiii:get", kwnames, &keyobj, &dfltobj, &txnobj, &flags, &dlen, &doff)) return NULL; CHECK_DB_NOT_CLOSED(self); if (!make_key_dbt(self, keyobj, &key, &flags)) return NULL; if (!checkTxnObj(txnobj, &txn)) { FREE_DBT(key); return NULL; } CLEAR_DBT(data); if (CHECK_DBFLAG(self, DB_THREAD)) { /* Tell Berkeley DB to malloc the return value (thread safe) */ data.flags = DB_DBT_MALLOC; } if (!add_partial_dbt(&data, dlen, doff)) { FREE_DBT(key); return NULL; } MYDB_BEGIN_ALLOW_THREADS; err = self->db->get(self->db, txn, &key, &data, flags); MYDB_END_ALLOW_THREADS; if ((err == DB_NOTFOUND || err == DB_KEYEMPTY) && (dfltobj != NULL)) { err = 0; Py_INCREF(dfltobj); retval = dfltobj; } else if ((err == DB_NOTFOUND || err == DB_KEYEMPTY) && self->moduleFlags.getReturnsNone) { err = 0; Py_INCREF(Py_None); retval = Py_None; } else if (!err) { if (flags & DB_SET_RECNO) /* return both key and data */ retval = BuildValue_SS(key.data, key.size, data.data, data.size); else /* return just the data */ retval = Build_PyString(data.data, data.size); FREE_DBT(data); } FREE_DBT(key); RETURN_IF_ERR(); return retval; } static PyObject* DB_pget(DBObject* self, PyObject* args, PyObject* kwargs) { int err, flags=0; PyObject* txnobj = NULL; PyObject* keyobj; PyObject* dfltobj = NULL; PyObject* retval = NULL; int dlen = -1; int doff = -1; DBT key, pkey, data; DB_TXN *txn = NULL; static char* kwnames[] = {"key", "default", "txn", "flags", "dlen",
"doff", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|OOiii:pget", kwnames, &keyobj, &dfltobj, &txnobj, &flags, &dlen, &doff)) return NULL; CHECK_DB_NOT_CLOSED(self); if (!make_key_dbt(self, keyobj, &key, &flags)) return NULL; if (!checkTxnObj(txnobj, &txn)) { FREE_DBT(key); return NULL; } CLEAR_DBT(data); if (CHECK_DBFLAG(self, DB_THREAD)) { /* Tell Berkeley DB to malloc the return value (thread safe) */ data.flags = DB_DBT_MALLOC; } if (!add_partial_dbt(&data, dlen, doff)) { FREE_DBT(key); return NULL; } CLEAR_DBT(pkey); pkey.flags = DB_DBT_MALLOC; MYDB_BEGIN_ALLOW_THREADS; err = self->db->pget(self->db, txn, &key, &pkey, &data, flags); MYDB_END_ALLOW_THREADS; if ((err == DB_NOTFOUND || err == DB_KEYEMPTY) && (dfltobj != NULL)) { err = 0; Py_INCREF(dfltobj); retval = dfltobj; } else if ((err == DB_NOTFOUND || err == DB_KEYEMPTY) && self->moduleFlags.getReturnsNone) { err = 0; Py_INCREF(Py_None); retval = Py_None; } else if (!err) { PyObject *pkeyObj; PyObject *dataObj; dataObj = Build_PyString(data.data, data.size); if (self->primaryDBType == DB_RECNO || self->primaryDBType == DB_QUEUE) pkeyObj = NUMBER_FromLong(*(int *)pkey.data); else pkeyObj = Build_PyString(pkey.data, pkey.size); if (flags & DB_SET_RECNO) /* return key , pkey and data */ { PyObject *keyObj; int type = _DB_get_type(self); if (type == DB_RECNO || type == DB_QUEUE) keyObj = NUMBER_FromLong(*(int *)key.data); else keyObj = Build_PyString(key.data, key.size); retval = PyTuple_Pack(3, keyObj, pkeyObj, dataObj); Py_DECREF(keyObj); } else /* return just the pkey and data */ { retval = PyTuple_Pack(2, pkeyObj, dataObj); } Py_DECREF(dataObj); Py_DECREF(pkeyObj); FREE_DBT(pkey); FREE_DBT(data); } FREE_DBT(key); RETURN_IF_ERR(); return retval; } /* Return size of entry */ static PyObject* DB_get_size(DBObject* self, PyObject* args, PyObject* kwargs) { int err, flags=0; PyObject* txnobj = NULL; PyObject* keyobj; PyObject* retval = NULL; DBT key, data; DB_TXN *txn = NULL; static char* kwnames[] = { "key", "txn", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|O:get_size", kwnames, &keyobj, &txnobj)) return NULL; CHECK_DB_NOT_CLOSED(self); if (!make_key_dbt(self, keyobj, &key, &flags)) return NULL; if (!checkTxnObj(txnobj, &txn)) { FREE_DBT(key); return NULL; } CLEAR_DBT(data); /* We don't allocate any memory, forcing a DB_BUFFER_SMALL error and thus getting the record size. */ data.flags = DB_DBT_USERMEM; data.ulen = 0; MYDB_BEGIN_ALLOW_THREADS; err = self->db->get(self->db, txn, &key, &data, flags); MYDB_END_ALLOW_THREADS; if ((err == DB_BUFFER_SMALL) || (err == 0)) { retval = NUMBER_FromLong((long)data.size); err = 0; } FREE_DBT(key); FREE_DBT(data); RETURN_IF_ERR(); return retval; } static PyObject* DB_get_both(DBObject* self, PyObject* args, PyObject* kwargs) { int err, flags=0; PyObject* txnobj = NULL; PyObject* keyobj; PyObject* dataobj; PyObject* retval = NULL; DBT key, data; void *orig_data; DB_TXN *txn = NULL; static char* kwnames[] = { "key", "data", "txn", "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OO|Oi:get_both", kwnames, &keyobj, &dataobj, &txnobj, &flags)) return NULL; CHECK_DB_NOT_CLOSED(self); if (!make_key_dbt(self, keyobj, &key, NULL)) return NULL; if ( !make_dbt(dataobj, &data) || !checkTxnObj(txnobj, &txn) ) { FREE_DBT(key); return NULL; } flags |= DB_GET_BOTH; orig_data = data.data; if (CHECK_DBFLAG(self, DB_THREAD)) { /* Tell Berkeley DB to malloc the return value (thread safe) */ /* XXX(nnorwitz): At least 4.4.20 and 4.5.20 require this flag. 
*/ data.flags = DB_DBT_MALLOC; } MYDB_BEGIN_ALLOW_THREADS; err = self->db->get(self->db, txn, &key, &data, flags); MYDB_END_ALLOW_THREADS; if ((err == DB_NOTFOUND || err == DB_KEYEMPTY) && self->moduleFlags.getReturnsNone) { err = 0; Py_INCREF(Py_None); retval = Py_None; } else if (!err) { /* XXX(nnorwitz): can we do: retval = dataobj; Py_INCREF(retval); */ retval = Build_PyString(data.data, data.size); /* Even though the flags require DB_DBT_MALLOC, data is not always allocated. 4.4: allocated, 4.5: *not* allocated. :-( */ if (data.data != orig_data) FREE_DBT(data); } FREE_DBT(key); RETURN_IF_ERR(); return retval; } static PyObject* DB_get_byteswapped(DBObject* self) { int err = 0; int retval = -1; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->get_byteswapped(self->db, &retval); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(retval); } static PyObject* DB_get_type(DBObject* self) { int type; CHECK_DB_NOT_CLOSED(self); type = _DB_get_type(self); if (type == -1) return NULL; return NUMBER_FromLong(type); } static PyObject* DB_join(DBObject* self, PyObject* args) { int err, flags=0; int length, x; PyObject* cursorsObj; DBC** cursors; DBC* dbc; if (!PyArg_ParseTuple(args,"O|i:join", &cursorsObj, &flags)) return NULL; CHECK_DB_NOT_CLOSED(self); if (!PySequence_Check(cursorsObj)) { PyErr_SetString(PyExc_TypeError, "Sequence of DBCursor objects expected"); return NULL; } length = PyObject_Length(cursorsObj); cursors = malloc((length+1) * sizeof(DBC*)); if (!cursors) { PyErr_NoMemory(); return NULL; } cursors[length] = NULL; for (x=0; x<length; x++) { PyObject* item = PySequence_GetItem(cursorsObj, x); if (item == NULL) { free(cursors); return NULL; } if (!DBCursorObject_Check(item)) { PyErr_SetString(PyExc_TypeError, "Sequence of DBCursor objects expected"); free(cursors); return NULL; } cursors[x] = ((DBCursorObject*)item)->dbc; Py_DECREF(item); } MYDB_BEGIN_ALLOW_THREADS; err = self->db->join(self->db, cursors, &dbc, flags); MYDB_END_ALLOW_THREADS; free(cursors); RETURN_IF_ERR(); /* FIXME: this is a buggy interface. The returned cursor contains internal references to the passed in cursors but does not hold python references to them or prevent them from being closed prematurely. This can cause python to crash when things are done in the wrong order.
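   Until that is fixed, callers should keep their own references to the
   source cursors alive and close the join cursor before closing any of
   them.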
*/ return (PyObject*) newDBCursorObject(dbc, NULL, self); } static PyObject* DB_key_range(DBObject* self, PyObject* args, PyObject* kwargs) { int err, flags=0; PyObject* txnobj = NULL; PyObject* keyobj; DBT key; DB_TXN *txn = NULL; DB_KEY_RANGE range; static char* kwnames[] = { "key", "txn", "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|Oi:key_range", kwnames, &keyobj, &txnobj, &flags)) return NULL; CHECK_DB_NOT_CLOSED(self); if (!make_dbt(keyobj, &key)) /* BTree only, don't need to allow for an int key */ return NULL; if (!checkTxnObj(txnobj, &txn)) return NULL; MYDB_BEGIN_ALLOW_THREADS; err = self->db->key_range(self->db, txn, &key, &range, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return Py_BuildValue("ddd", range.less, range.equal, range.greater); } static PyObject* DB_open(DBObject* self, PyObject* args, PyObject* kwargs) { int err, type = DB_UNKNOWN, flags=0, mode=0660; char* filename = NULL; char* dbname = NULL; PyObject *txnobj = NULL; DB_TXN *txn = NULL; /* with dbname */ static char* kwnames[] = { "filename", "dbname", "dbtype", "flags", "mode", "txn", NULL}; /* without dbname */ static char* kwnames_basic[] = { "filename", "dbtype", "flags", "mode", "txn", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "z|ziiiO:open", kwnames, &filename, &dbname, &type, &flags, &mode, &txnobj)) { PyErr_Clear(); type = DB_UNKNOWN; flags = 0; mode = 0660; filename = NULL; dbname = NULL; if (!PyArg_ParseTupleAndKeywords(args, kwargs,"z|iiiO:open", kwnames_basic, &filename, &type, &flags, &mode, &txnobj)) return NULL; } if (!checkTxnObj(txnobj, &txn)) return NULL; if (NULL == self->db) { PyObject *t = Py_BuildValue("(is)", 0, "Cannot call open() twice for DB object"); if (t) { PyErr_SetObject(DBError, t); Py_DECREF(t); } return NULL; } if (txn) { /* Can't use 'txnobj' because could be 'txnobj==Py_None' */ INSERT_IN_DOUBLE_LINKED_LIST_TXN(((DBTxnObject *)txnobj)->children_dbs,self); self->txn=(DBTxnObject *)txnobj; } else { self->txn=NULL; } MYDB_BEGIN_ALLOW_THREADS; err = self->db->open(self->db, txn, filename, dbname, type, flags, mode); MYDB_END_ALLOW_THREADS; if (makeDBError(err)) { PyObject *dummy; dummy=DB_close_internal(self, 0, 0); Py_XDECREF(dummy); return NULL; } self->db->get_flags(self->db, &self->setflags); self->flags = flags; RETURN_NONE(); } static PyObject* DB_put(DBObject* self, PyObject* args, PyObject* kwargs) { int flags=0; PyObject* txnobj = NULL; int dlen = -1; int doff = -1; PyObject* keyobj, *dataobj, *retval; DBT key, data; DB_TXN *txn = NULL; static char* kwnames[] = { "key", "data", "txn", "flags", "dlen", "doff", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OO|Oiii:put", kwnames, &keyobj, &dataobj, &txnobj, &flags, &dlen, &doff)) return NULL; CHECK_DB_NOT_CLOSED(self); if (!make_key_dbt(self, keyobj, &key, NULL)) return NULL; if ( !make_dbt(dataobj, &data) || !add_partial_dbt(&data, dlen, doff) || !checkTxnObj(txnobj, &txn) ) { FREE_DBT(key); return NULL; } if (-1 == _DB_put(self, txn, &key, &data, flags)) { FREE_DBT(key); return NULL; } if (flags & DB_APPEND) retval = NUMBER_FromLong(*((db_recno_t*)key.data)); else { retval = Py_None; Py_INCREF(retval); } FREE_DBT(key); return retval; } static PyObject* DB_remove(DBObject* self, PyObject* args, PyObject* kwargs) { char* filename; char* database = NULL; int err, flags=0; static char* kwnames[] = { "filename", "dbname", "flags", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|zi:remove", kwnames, &filename, &database, &flags)) return NULL; CHECK_DB_NOT_CLOSED(self); 
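/* Berkeley DB specifies that the DB handle may not be used again after
** remove(), whether it succeeds or fails, so unlink this object from its
** environment now and drop the handle afterwards. */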
EXTRACT_FROM_DOUBLE_LINKED_LIST_MAYBE_NULL(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->remove(self->db, filename, database, flags); MYDB_END_ALLOW_THREADS; self->db = NULL; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_rename(DBObject* self, PyObject* args) { char* filename; char* database; char* newname; int err, flags=0; if (!PyArg_ParseTuple(args, "sss|i:rename", &filename, &database, &newname, &flags)) return NULL; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->rename(self->db, filename, database, newname, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_get_private(DBObject* self) { /* We can give out the private field even if db is closed */ Py_INCREF(self->private_obj); return self->private_obj; } static PyObject* DB_set_private(DBObject* self, PyObject* private_obj) { /* We can set the private field even if db is closed */ Py_DECREF(self->private_obj); Py_INCREF(private_obj); self->private_obj = private_obj; RETURN_NONE(); } static PyObject* DB_set_priority(DBObject* self, PyObject* args) { int err, priority; if (!PyArg_ParseTuple(args,"i:set_priority", &priority)) return NULL; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->set_priority(self->db, priority); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_get_priority(DBObject* self) { int err = 0; DB_CACHE_PRIORITY priority; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->get_priority(self->db, &priority); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(priority); } static PyObject* DB_get_dbname(DBObject* self) { int err; const char *filename, *dbname; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->get_dbname(self->db, &filename, &dbname); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); /* If "dbname==NULL", it is correctly converted to "None" */ return Py_BuildValue("(ss)", filename, dbname); } static PyObject* DB_get_open_flags(DBObject* self) { int err; unsigned int flags; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->get_open_flags(self->db, &flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(flags); } static PyObject* DB_set_q_extentsize(DBObject* self, PyObject* args) { int err; u_int32_t extentsize; if (!PyArg_ParseTuple(args,"i:set_q_extentsize", &extentsize)) return NULL; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->set_q_extentsize(self->db, extentsize); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_get_q_extentsize(DBObject* self) { int err = 0; u_int32_t extentsize; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->get_q_extentsize(self->db, &extentsize); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(extentsize); } static PyObject* DB_set_bt_minkey(DBObject* self, PyObject* args) { int err, minkey; if (!PyArg_ParseTuple(args,"i:set_bt_minkey", &minkey)) return NULL; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->set_bt_minkey(self->db, minkey); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_get_bt_minkey(DBObject* self) { int err; u_int32_t bt_minkey; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->get_bt_minkey(self->db, &bt_minkey); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(bt_minkey); } static int _default_cmp(const DBT *leftKey, const DBT *rightKey) { int res; int lsize = leftKey->size, rsize = rightKey->size; res 
= memcmp(leftKey->data, rightKey->data, lsize < rsize ? lsize : rsize); if (res == 0) { if (lsize < rsize) { res = -1; } else if (lsize > rsize) { res = 1; } } return res; } static int _db_compareCallback(DB* db, const DBT *leftKey, const DBT *rightKey #if (DBVER >= 60) , size_t *locp #endif ) { int res = 0; PyObject *args; PyObject *result = NULL; DBObject *self = (DBObject *)db->app_private; # if (DBVER >= 60) locp = NULL; /* As required by documentation */ #endif if (self == NULL || self->btCompareCallback == NULL) { MYDB_BEGIN_BLOCK_THREADS; PyErr_SetString(PyExc_TypeError, (self == 0 ? "DB_bt_compare db is NULL." : "DB_bt_compare callback is NULL.")); /* we're in a callback within the DB code, we can't raise */ PyErr_Print(); res = _default_cmp(leftKey, rightKey); MYDB_END_BLOCK_THREADS; } else { MYDB_BEGIN_BLOCK_THREADS; args = BuildValue_SS(leftKey->data, leftKey->size, rightKey->data, rightKey->size); if (args != NULL) { result = PyEval_CallObject(self->btCompareCallback, args); } if (args == NULL || result == NULL) { /* we're in a callback within the DB code, we can't raise */ PyErr_Print(); res = _default_cmp(leftKey, rightKey); } else if (NUMBER_Check(result)) { res = NUMBER_AsLong(result); } else { PyErr_SetString(PyExc_TypeError, "DB_bt_compare callback MUST return an int."); /* we're in a callback within the DB code, we can't raise */ PyErr_Print(); res = _default_cmp(leftKey, rightKey); } Py_XDECREF(args); Py_XDECREF(result); MYDB_END_BLOCK_THREADS; } return res; } static PyObject* DB_set_bt_compare(DBObject* self, PyObject* comparator) { int err; PyObject *tuple, *result; CHECK_DB_NOT_CLOSED(self); if (!PyCallable_Check(comparator)) { makeTypeError("Callable", comparator); return NULL; } /* * Perform a test call of the comparator function with two empty * string objects here. verify that it returns an int (0). * err if not. */ tuple = Py_BuildValue("(ss)", "", ""); result = PyEval_CallObject(comparator, tuple); Py_DECREF(tuple); if (result == NULL) return NULL; if (!NUMBER_Check(result)) { Py_DECREF(result); PyErr_SetString(PyExc_TypeError, "callback MUST return an int"); return NULL; } else if (NUMBER_AsLong(result) != 0) { Py_DECREF(result); PyErr_SetString(PyExc_TypeError, "callback failed to return 0 on two empty strings"); return NULL; } Py_DECREF(result); /* We don't accept multiple set_bt_compare operations, in order to * simplify the code. This would have no real use, as one cannot * change the function once the db is opened anyway */ if (self->btCompareCallback != NULL) { PyErr_SetString(PyExc_RuntimeError, "set_bt_compare() cannot be called more than once"); return NULL; } Py_INCREF(comparator); self->btCompareCallback = comparator; /* This is to workaround a problem with un-initialized threads (see comment in DB_associate) */ #ifdef WITH_THREAD PyEval_InitThreads(); #endif err = self->db->set_bt_compare(self->db, _db_compareCallback); if (err) { /* restore the old state in case of error */ Py_DECREF(comparator); self->btCompareCallback = NULL; } RETURN_IF_ERR(); RETURN_NONE(); } static int _db_dupCompareCallback(DB* db, const DBT *leftKey, const DBT *rightKey #if (DBVER >= 60) , size_t *locp #endif ) { int res = 0; PyObject *args; PyObject *result = NULL; DBObject *self = (DBObject *)db->app_private; #if (DBVER >= 60) locp = NULL; /* As required by documentation */ #endif if (self == NULL || self->dupCompareCallback == NULL) { MYDB_BEGIN_BLOCK_THREADS; PyErr_SetString(PyExc_TypeError, (self == 0 ? "DB_dup_compare db is NULL." 
: "DB_dup_compare callback is NULL.")); /* we're in a callback within the DB code, we can't raise */ PyErr_Print(); res = _default_cmp(leftKey, rightKey); MYDB_END_BLOCK_THREADS; } else { MYDB_BEGIN_BLOCK_THREADS; args = BuildValue_SS(leftKey->data, leftKey->size, rightKey->data, rightKey->size); if (args != NULL) { result = PyEval_CallObject(self->dupCompareCallback, args); } if (args == NULL || result == NULL) { /* we're in a callback within the DB code, we can't raise */ PyErr_Print(); res = _default_cmp(leftKey, rightKey); } else if (NUMBER_Check(result)) { res = NUMBER_AsLong(result); } else { PyErr_SetString(PyExc_TypeError, "DB_dup_compare callback MUST return an int."); /* we're in a callback within the DB code, we can't raise */ PyErr_Print(); res = _default_cmp(leftKey, rightKey); } Py_XDECREF(args); Py_XDECREF(result); MYDB_END_BLOCK_THREADS; } return res; } static PyObject* DB_set_dup_compare(DBObject* self, PyObject* comparator) { int err; PyObject *tuple, *result; CHECK_DB_NOT_CLOSED(self); if (!PyCallable_Check(comparator)) { makeTypeError("Callable", comparator); return NULL; } /* * Perform a test call of the comparator function with two empty * string objects here. verify that it returns an int (0). * err if not. */ tuple = Py_BuildValue("(ss)", "", ""); result = PyEval_CallObject(comparator, tuple); Py_DECREF(tuple); if (result == NULL) return NULL; if (!NUMBER_Check(result)) { Py_DECREF(result); PyErr_SetString(PyExc_TypeError, "callback MUST return an int"); return NULL; } else if (NUMBER_AsLong(result) != 0) { Py_DECREF(result); PyErr_SetString(PyExc_TypeError, "callback failed to return 0 on two empty strings"); return NULL; } Py_DECREF(result); /* We don't accept multiple set_dup_compare operations, in order to * simplify the code. 
This would have no real use, as one cannot * change the function once the db is opened anyway */ if (self->dupCompareCallback != NULL) { PyErr_SetString(PyExc_RuntimeError, "set_dup_compare() cannot be called more than once"); return NULL; } Py_INCREF(comparator); self->dupCompareCallback = comparator; /* This is to workaround a problem with un-initialized threads (see comment in DB_associate) */ #ifdef WITH_THREAD PyEval_InitThreads(); #endif err = self->db->set_dup_compare(self->db, _db_dupCompareCallback); if (err) { /* restore the old state in case of error */ Py_DECREF(comparator); self->dupCompareCallback = NULL; } RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_set_cachesize(DBObject* self, PyObject* args) { int err; int gbytes = 0, bytes = 0, ncache = 0; if (!PyArg_ParseTuple(args,"ii|i:set_cachesize", &gbytes,&bytes,&ncache)) return NULL; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->set_cachesize(self->db, gbytes, bytes, ncache); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_get_cachesize(DBObject* self) { int err; u_int32_t gbytes, bytes; int ncache; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->get_cachesize(self->db, &gbytes, &bytes, &ncache); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return Py_BuildValue("(iii)", gbytes, bytes, ncache); } static PyObject* DB_set_flags(DBObject* self, PyObject* args) { int err, flags; if (!PyArg_ParseTuple(args,"i:set_flags", &flags)) return NULL; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->set_flags(self->db, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); self->setflags |= flags; RETURN_NONE(); } static PyObject* DB_get_flags(DBObject* self) { int err; u_int32_t flags; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->get_flags(self->db, &flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(flags); } static PyObject* DB_get_transactional(DBObject* self) { int err; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->get_transactional(self->db); MYDB_END_ALLOW_THREADS; if(err == 0) { Py_INCREF(Py_False); return Py_False; } else if(err == 1) { Py_INCREF(Py_True); return Py_True; } /* ** If we reach there, there was an error. The ** "return" should be unreachable. 
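** (DB->get_transactional() is documented to return only 0 or 1, so this
** check is purely defensive.)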
*/ RETURN_IF_ERR(); assert(0); /* This code SHOULD be unreachable */ return NULL; } static PyObject* DB_set_h_ffactor(DBObject* self, PyObject* args) { int err, ffactor; if (!PyArg_ParseTuple(args,"i:set_h_ffactor", &ffactor)) return NULL; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->set_h_ffactor(self->db, ffactor); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_get_h_ffactor(DBObject* self) { int err; u_int32_t ffactor; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->get_h_ffactor(self->db, &ffactor); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(ffactor); } static PyObject* DB_set_h_nelem(DBObject* self, PyObject* args) { int err, nelem; if (!PyArg_ParseTuple(args,"i:set_h_nelem", &nelem)) return NULL; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->set_h_nelem(self->db, nelem); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_get_h_nelem(DBObject* self) { int err; u_int32_t nelem; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->get_h_nelem(self->db, &nelem); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(nelem); } static PyObject* DB_set_lorder(DBObject* self, PyObject* args) { int err, lorder; if (!PyArg_ParseTuple(args,"i:set_lorder", &lorder)) return NULL; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->set_lorder(self->db, lorder); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_get_lorder(DBObject* self) { int err; int lorder; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->get_lorder(self->db, &lorder); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(lorder); } static PyObject* DB_set_pagesize(DBObject* self, PyObject* args) { int err, pagesize; if (!PyArg_ParseTuple(args,"i:set_pagesize", &pagesize)) return NULL; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->set_pagesize(self->db, pagesize); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_get_pagesize(DBObject* self) { int err; u_int32_t pagesize; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->get_pagesize(self->db, &pagesize); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(pagesize); } static PyObject* DB_set_re_delim(DBObject* self, PyObject* args) { int err; char delim; if (!PyArg_ParseTuple(args,"b:set_re_delim", &delim)) { PyErr_Clear(); if (!PyArg_ParseTuple(args,"c:set_re_delim", &delim)) return NULL; } CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->set_re_delim(self->db, delim); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_get_re_delim(DBObject* self) { int err, re_delim; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->get_re_delim(self->db, &re_delim); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(re_delim); } static PyObject* DB_set_re_len(DBObject* self, PyObject* args) { int err, len; if (!PyArg_ParseTuple(args,"i:set_re_len", &len)) return NULL; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->set_re_len(self->db, len); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_get_re_len(DBObject* self) { int err; u_int32_t re_len; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->get_re_len(self->db, &re_len); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(re_len); } static PyObject* 
DB_set_re_pad(DBObject* self, PyObject* args) { int err; char pad; if (!PyArg_ParseTuple(args,"b:set_re_pad", &pad)) { PyErr_Clear(); if (!PyArg_ParseTuple(args,"c:set_re_pad", &pad)) return NULL; } CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->set_re_pad(self->db, pad); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_get_re_pad(DBObject* self) { int err, re_pad; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->get_re_pad(self->db, &re_pad); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(re_pad); } static PyObject* DB_set_re_source(DBObject* self, PyObject* args) { int err; char *source; if (!PyArg_ParseTuple(args,"s:set_re_source", &source)) return NULL; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->set_re_source(self->db, source); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_get_re_source(DBObject* self) { int err; const char *source; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->get_re_source(self->db, &source); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return PyBytes_FromString(source); } static PyObject* DB_stat(DBObject* self, PyObject* args, PyObject* kwargs) { int err, flags = 0, type; void* sp; PyObject* d; PyObject* txnobj = NULL; DB_TXN *txn = NULL; static char* kwnames[] = { "flags", "txn", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|iO:stat", kwnames, &flags, &txnobj)) return NULL; if (!checkTxnObj(txnobj, &txn)) return NULL; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->stat(self->db, txn, &sp, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); /* Turn the stat structure into a dictionary */ type = _DB_get_type(self); if ((type == -1) || ((d = PyDict_New()) == NULL)) { free(sp); return NULL; } #define MAKE_HASH_ENTRY(name) _addIntToDict(d, #name, ((DB_HASH_STAT*)sp)->hash_##name) #define MAKE_BT_ENTRY(name) _addIntToDict(d, #name, ((DB_BTREE_STAT*)sp)->bt_##name) #define MAKE_QUEUE_ENTRY(name) _addIntToDict(d, #name, ((DB_QUEUE_STAT*)sp)->qs_##name) switch (type) { case DB_HASH: MAKE_HASH_ENTRY(magic); MAKE_HASH_ENTRY(version); MAKE_HASH_ENTRY(nkeys); MAKE_HASH_ENTRY(ndata); MAKE_HASH_ENTRY(pagecnt); MAKE_HASH_ENTRY(pagesize); MAKE_HASH_ENTRY(ffactor); MAKE_HASH_ENTRY(buckets); MAKE_HASH_ENTRY(free); MAKE_HASH_ENTRY(bfree); MAKE_HASH_ENTRY(bigpages); MAKE_HASH_ENTRY(big_bfree); MAKE_HASH_ENTRY(overflows); MAKE_HASH_ENTRY(ovfl_free); MAKE_HASH_ENTRY(dup); MAKE_HASH_ENTRY(dup_free); break; case DB_BTREE: case DB_RECNO: MAKE_BT_ENTRY(magic); MAKE_BT_ENTRY(version); MAKE_BT_ENTRY(nkeys); MAKE_BT_ENTRY(ndata); MAKE_BT_ENTRY(pagecnt); MAKE_BT_ENTRY(pagesize); MAKE_BT_ENTRY(minkey); MAKE_BT_ENTRY(re_len); MAKE_BT_ENTRY(re_pad); MAKE_BT_ENTRY(levels); MAKE_BT_ENTRY(int_pg); MAKE_BT_ENTRY(leaf_pg); MAKE_BT_ENTRY(dup_pg); MAKE_BT_ENTRY(over_pg); MAKE_BT_ENTRY(empty_pg); MAKE_BT_ENTRY(free); MAKE_BT_ENTRY(int_pgfree); MAKE_BT_ENTRY(leaf_pgfree); MAKE_BT_ENTRY(dup_pgfree); MAKE_BT_ENTRY(over_pgfree); break; case DB_QUEUE: MAKE_QUEUE_ENTRY(magic); MAKE_QUEUE_ENTRY(version); MAKE_QUEUE_ENTRY(nkeys); MAKE_QUEUE_ENTRY(ndata); MAKE_QUEUE_ENTRY(pagesize); MAKE_QUEUE_ENTRY(extentsize); MAKE_QUEUE_ENTRY(pages); MAKE_QUEUE_ENTRY(re_len); MAKE_QUEUE_ENTRY(re_pad); MAKE_QUEUE_ENTRY(pgfree); MAKE_QUEUE_ENTRY(first_recno); MAKE_QUEUE_ENTRY(cur_recno); break; default: PyErr_SetString(PyExc_TypeError, "Unknown DB type, unable to stat"); Py_DECREF(d); d = NULL; } #undef MAKE_HASH_ENTRY #undef 
MAKE_BT_ENTRY #undef MAKE_QUEUE_ENTRY free(sp); return d; } static PyObject* DB_stat_print(DBObject* self, PyObject* args, PyObject *kwargs) { int err; int flags=0; static char* kwnames[] = { "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:stat_print", kwnames, &flags)) { return NULL; } CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->stat_print(self->db, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_sync(DBObject* self, PyObject* args) { int err; int flags = 0; if (!PyArg_ParseTuple(args,"|i:sync", &flags )) return NULL; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->sync(self->db, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_truncate(DBObject* self, PyObject* args, PyObject* kwargs) { int err, flags=0; u_int32_t count=0; PyObject* txnobj = NULL; DB_TXN *txn = NULL; static char* kwnames[] = { "txn", "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|Oi:truncate", kwnames, &txnobj, &flags)) return NULL; CHECK_DB_NOT_CLOSED(self); if (!checkTxnObj(txnobj, &txn)) return NULL; MYDB_BEGIN_ALLOW_THREADS; err = self->db->truncate(self->db, txn, &count, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(count); } static PyObject* DB_upgrade(DBObject* self, PyObject* args) { int err, flags=0; char *filename; if (!PyArg_ParseTuple(args,"s|i:upgrade", &filename, &flags)) return NULL; CHECK_DB_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db->upgrade(self->db, filename, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_verify(DBObject* self, PyObject* args, PyObject* kwargs) { int err, flags=0; char* fileName; char* dbName=NULL; char* outFileName=NULL; FILE* outFile=NULL; static char* kwnames[] = { "filename", "dbname", "outfile", "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|zzi:verify", kwnames, &fileName, &dbName, &outFileName, &flags)) return NULL; CHECK_DB_NOT_CLOSED(self); if (outFileName) outFile = fopen(outFileName, "w"); /* XXX(nnorwitz): it should probably be an exception if outFile can't be opened.
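   As written, a fopen() failure is silently ignored and db->verify()
   simply receives a NULL stream.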
*/ { /* DB.verify acts as a DB handle destructor (like close) */ PyObject *error; error=DB_close_internal(self, 0, 1); if (error) { return error; } } MYDB_BEGIN_ALLOW_THREADS; err = self->db->verify(self->db, fileName, dbName, outFile, flags); MYDB_END_ALLOW_THREADS; self->db = NULL; /* Implicit close; related objects already released */ if (outFile) fclose(outFile); RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_set_get_returns_none(DBObject* self, PyObject* args) { int flags=0; int oldValue=0; if (!PyArg_ParseTuple(args,"i:set_get_returns_none", &flags)) return NULL; CHECK_DB_NOT_CLOSED(self); if (self->moduleFlags.getReturnsNone) ++oldValue; if (self->moduleFlags.cursorSetReturnsNone) ++oldValue; self->moduleFlags.getReturnsNone = (flags >= 1); self->moduleFlags.cursorSetReturnsNone = (flags >= 2); return NUMBER_FromLong(oldValue); } static PyObject* DB_set_encrypt(DBObject* self, PyObject* args, PyObject* kwargs) { int err; u_int32_t flags=0; char *passwd = NULL; static char* kwnames[] = { "passwd", "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|i:set_encrypt", kwnames, &passwd, &flags)) { return NULL; } MYDB_BEGIN_ALLOW_THREADS; err = self->db->set_encrypt(self->db, passwd, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DB_get_encrypt_flags(DBObject* self) { int err; u_int32_t flags; MYDB_BEGIN_ALLOW_THREADS; err = self->db->get_encrypt_flags(self->db, &flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(flags); } /*-------------------------------------------------------------- */ /* Mapping and Dictionary-like access routines */ Py_ssize_t DB_length(PyObject* _self) { int err; Py_ssize_t size = 0; void* sp; DBObject* self = (DBObject*)_self; if (self->db == NULL) { PyObject *t = Py_BuildValue("(is)", 0, "DB object has been closed"); if (t) { PyErr_SetObject(DBError, t); Py_DECREF(t); } return -1; } MYDB_BEGIN_ALLOW_THREADS; err = self->db->stat(self->db, /*txnid*/ NULL, &sp, 0); MYDB_END_ALLOW_THREADS; if (makeDBError(err)) { return -1; } /* All the stat structures have matching fields up to the ndata field, so we can use any of them for the type cast */ size = ((DB_BTREE_STAT*)sp)->bt_ndata; free(sp); return size; } PyObject* DB_subscript(DBObject* self, PyObject* keyobj) { int err; PyObject* retval; DBT key; DBT data; CHECK_DB_NOT_CLOSED(self); if (!make_key_dbt(self, keyobj, &key, NULL)) return NULL; CLEAR_DBT(data); if (CHECK_DBFLAG(self, DB_THREAD)) { /* Tell Berkeley DB to malloc the return value (thread safe) */ data.flags = DB_DBT_MALLOC; } MYDB_BEGIN_ALLOW_THREADS; err = self->db->get(self->db, NULL, &key, &data, 0); MYDB_END_ALLOW_THREADS; if (err == DB_NOTFOUND || err == DB_KEYEMPTY) { PyErr_SetObject(PyExc_KeyError, keyobj); retval = NULL; } else if (makeDBError(err)) { retval = NULL; } else { retval = Build_PyString(data.data, data.size); FREE_DBT(data); } FREE_DBT(key); return retval; } static int DB_ass_sub(DBObject* self, PyObject* keyobj, PyObject* dataobj) { DBT key, data; int retval; int flags = 0; if (self->db == NULL) { PyObject *t = Py_BuildValue("(is)", 0, "DB object has been closed"); if (t) { PyErr_SetObject(DBError, t); Py_DECREF(t); } return -1; } if (!make_key_dbt(self, keyobj, &key, NULL)) return -1; if (dataobj != NULL) { if (!make_dbt(dataobj, &data)) retval = -1; else { if (self->setflags & (DB_DUP|DB_DUPSORT)) /* dictionaries shouldn't have duplicate keys */ flags = DB_NOOVERWRITE; retval = _DB_put(self, NULL, &key, &data, flags); if ((retval == -1) && (self->setflags &
(DB_DUP|DB_DUPSORT))) { /* try deleting any old record that matches and then PUT it * again... */ _DB_delete(self, NULL, &key, 0); PyErr_Clear(); retval = _DB_put(self, NULL, &key, &data, flags); } } } else { /* dataobj == NULL, so delete the key */ retval = _DB_delete(self, NULL, &key, 0); } FREE_DBT(key); return retval; } static PyObject* _DB_has_key(DBObject* self, PyObject* keyobj, PyObject* txnobj) { int err; DBT key; DB_TXN *txn = NULL; CHECK_DB_NOT_CLOSED(self); if (!make_key_dbt(self, keyobj, &key, NULL)) return NULL; if (!checkTxnObj(txnobj, &txn)) { FREE_DBT(key); return NULL; } MYDB_BEGIN_ALLOW_THREADS; err = self->db->exists(self->db, txn, &key, 0); MYDB_END_ALLOW_THREADS; FREE_DBT(key); /* ** DB_BUFFER_SMALL is only used if we use "get". ** We can drop it when we only use "exists", ** when we drop support for Berkeley DB < 4.6. */ if (err == DB_BUFFER_SMALL || err == 0) { Py_INCREF(Py_True); return Py_True; } else if (err == DB_NOTFOUND || err == DB_KEYEMPTY) { Py_INCREF(Py_False); return Py_False; } makeDBError(err); return NULL; } static PyObject* DB_has_key(DBObject* self, PyObject* args, PyObject* kwargs) { PyObject* keyobj; PyObject* txnobj = NULL; static char* kwnames[] = {"key","txn", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|O:has_key", kwnames, &keyobj, &txnobj)) return NULL; return _DB_has_key(self, keyobj, txnobj); } static int DB_contains(DBObject* self, PyObject* keyobj) { PyObject* result; int result2 = 0; result = _DB_has_key(self, keyobj, NULL); if (result == NULL) { return -1; /* Propagate exception */ } if (result != Py_False) { result2 = 1; } Py_DECREF(result); return result2; } #define _KEYS_LIST 1 #define _VALUES_LIST 2 #define _ITEMS_LIST 3 static PyObject* _DB_make_list(DBObject* self, DB_TXN* txn, int type) { int err, dbtype; DBT key; DBT data; DBC *cursor; PyObject* list; PyObject* item = NULL; CHECK_DB_NOT_CLOSED(self); CLEAR_DBT(key); CLEAR_DBT(data); dbtype = _DB_get_type(self); if (dbtype == -1) return NULL; list = PyList_New(0); if (list == NULL) return NULL; /* get a cursor */ MYDB_BEGIN_ALLOW_THREADS; err = self->db->cursor(self->db, txn, &cursor, 0); MYDB_END_ALLOW_THREADS; if (makeDBError(err)) { Py_DECREF(list); return NULL; } while (1) { /* use the cursor to traverse the DB, collecting items */ MYDB_BEGIN_ALLOW_THREADS; err = _DBC_get(cursor, &key, &data, DB_NEXT); MYDB_END_ALLOW_THREADS; if (err) { /* for any error, break out of the loop */ break; } switch (type) { case _KEYS_LIST: switch(dbtype) { case DB_BTREE: case DB_HASH: default: item = Build_PyString(key.data, key.size); break; case DB_RECNO: case DB_QUEUE: item = NUMBER_FromLong(*((db_recno_t*)key.data)); break; } break; case _VALUES_LIST: item = Build_PyString(data.data, data.size); break; case _ITEMS_LIST: switch(dbtype) { case DB_BTREE: case DB_HASH: default: item = BuildValue_SS(key.data, key.size, data.data, data.size); break; case DB_RECNO: case DB_QUEUE: item = BuildValue_IS(*((db_recno_t*)key.data), data.data, data.size); break; } break; default: PyErr_Format(PyExc_ValueError, "Unknown key type 0x%x", type); item = NULL; break; } if (item == NULL) { Py_DECREF(list); list = NULL; goto done; } if (PyList_Append(list, item)) { Py_DECREF(list); Py_DECREF(item); list = NULL; goto done; } Py_DECREF(item); } /* DB_NOTFOUND || DB_KEYEMPTY is okay, it means we got to the end */ if (err != DB_NOTFOUND && err != DB_KEYEMPTY && makeDBError(err)) { Py_DECREF(list); list = NULL; } done: MYDB_BEGIN_ALLOW_THREADS; _DBC_close(cursor); MYDB_END_ALLOW_THREADS; return list; }
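/*
** The three thin wrappers below expose _DB_make_list() as the
** dictionary-style keys()/items()/values() methods. Each accepts an
** optional transaction object and walks the whole database with a
** DB_NEXT cursor, materializing every record in a Python list.
** Illustrative Python-level sketch (documentation only, not code
** shipped in this module; the file name is assumed):
**
**     d = db.DB(dbenv)
**     d.open("example.db", dbtype=db.DB_HASH, flags=db.DB_CREATE)
**     ks = d.keys()     # full scan; d.keys(txn) scans within a txn
**     kvs = d.items()   # list of (key, value) pairs
*/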
static PyObject* DB_keys(DBObject* self, PyObject* args) { PyObject* txnobj = NULL; DB_TXN *txn = NULL; if (!PyArg_UnpackTuple(args, "keys", 0, 1, &txnobj)) return NULL; if (!checkTxnObj(txnobj, &txn)) return NULL; return _DB_make_list(self, txn, _KEYS_LIST); } static PyObject* DB_items(DBObject* self, PyObject* args) { PyObject* txnobj = NULL; DB_TXN *txn = NULL; if (!PyArg_UnpackTuple(args, "items", 0, 1, &txnobj)) return NULL; if (!checkTxnObj(txnobj, &txn)) return NULL; return _DB_make_list(self, txn, _ITEMS_LIST); } static PyObject* DB_values(DBObject* self, PyObject* args) { PyObject* txnobj = NULL; DB_TXN *txn = NULL; if (!PyArg_UnpackTuple(args, "values", 0, 1, &txnobj)) return NULL; if (!checkTxnObj(txnobj, &txn)) return NULL; return _DB_make_list(self, txn, _VALUES_LIST); } /* --------------------------------------------------------------------- */ /* DBLogCursor methods */ static PyObject* DBLogCursor_close_internal(DBLogCursorObject* self) { int err = 0; if (self->logc != NULL) { EXTRACT_FROM_DOUBLE_LINKED_LIST(self); MYDB_BEGIN_ALLOW_THREADS; err = self->logc->close(self->logc, 0); MYDB_END_ALLOW_THREADS; self->logc = NULL; } RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBLogCursor_close(DBLogCursorObject* self) { return DBLogCursor_close_internal(self); } static PyObject* _DBLogCursor_get(DBLogCursorObject* self, int flag, DB_LSN *lsn2) { int err; DBT data; DB_LSN lsn = {0, 0}; PyObject *dummy, *retval; CLEAR_DBT(data); data.flags = DB_DBT_MALLOC; /* Berkeley DB must do the malloc */ CHECK_LOGCURSOR_NOT_CLOSED(self); if (lsn2) lsn = *lsn2; MYDB_BEGIN_ALLOW_THREADS; err = self->logc->get(self->logc, &lsn, &data, flag); MYDB_END_ALLOW_THREADS; if (err == DB_NOTFOUND) { Py_INCREF(Py_None); retval = Py_None; } else if (makeDBError(err)) { retval = NULL; } else { retval = dummy = BuildValue_S(data.data, data.size); if (dummy) { retval = Py_BuildValue("(ii)O", lsn.file, lsn.offset, dummy); Py_DECREF(dummy); } } FREE_DBT(data); return retval; } static PyObject* DBLogCursor_current(DBLogCursorObject* self) { return _DBLogCursor_get(self, DB_CURRENT, NULL); } static PyObject* DBLogCursor_first(DBLogCursorObject* self) { return _DBLogCursor_get(self, DB_FIRST, NULL); } static PyObject* DBLogCursor_last(DBLogCursorObject* self) { return _DBLogCursor_get(self, DB_LAST, NULL); } static PyObject* DBLogCursor_next(DBLogCursorObject* self) { return _DBLogCursor_get(self, DB_NEXT, NULL); } static PyObject* DBLogCursor_prev(DBLogCursorObject* self) { return _DBLogCursor_get(self, DB_PREV, NULL); } static PyObject* DBLogCursor_set(DBLogCursorObject* self, PyObject* args) { DB_LSN lsn; if (!PyArg_ParseTuple(args, "(ii):set", &lsn.file, &lsn.offset)) return NULL; return _DBLogCursor_get(self, DB_SET, &lsn); } /* --------------------------------------------------------------------- */ /* DBSite methods */ #if (DBVER >= 52) static PyObject* DBSite_close_internal(DBSiteObject* self) { int err = 0; if (self->site != NULL) { EXTRACT_FROM_DOUBLE_LINKED_LIST(self); MYDB_BEGIN_ALLOW_THREADS; err = self->site->close(self->site); MYDB_END_ALLOW_THREADS; self->site = NULL; } RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBSite_close(DBSiteObject* self) { return DBSite_close_internal(self); } static PyObject* DBSite_remove(DBSiteObject* self) { int err = 0; CHECK_SITE_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->site->remove(self->site); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBSite_get_eid(DBSiteObject* self) { int err = 0; int eid; 
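/* 'eid' receives the locally unique environment ID that the replication manager assigned to this site. */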
CHECK_SITE_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->site->get_eid(self->site, &eid); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(eid); } static PyObject* DBSite_get_address(DBSiteObject* self) { int err = 0; const char *host; u_int port; CHECK_SITE_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->site->get_address(self->site, &host, &port); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return Py_BuildValue("(sI)", host, port); } static PyObject* DBSite_get_config(DBSiteObject* self, PyObject* args, PyObject* kwargs) { int err = 0; u_int32_t which, value; static char* kwnames[] = { "which", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "i:get_config", kwnames, &which)) return NULL; CHECK_SITE_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->site->get_config(self->site, which, &value); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); if (value) { Py_INCREF(Py_True); return Py_True; } else { Py_INCREF(Py_False); return Py_False; } } static PyObject* DBSite_set_config(DBSiteObject* self, PyObject* args, PyObject* kwargs) { int err = 0; u_int32_t which, value; PyObject *valueO; static char* kwnames[] = { "which", "value", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "iO:set_config", kwnames, &which, &valueO)) return NULL; CHECK_SITE_NOT_CLOSED(self); value = PyObject_IsTrue(valueO); MYDB_BEGIN_ALLOW_THREADS; err = self->site->set_config(self->site, which, value); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } #endif /* --------------------------------------------------------------------- */ /* DBCursor methods */ static PyObject* DBC_close_internal(DBCursorObject* self) { int err = 0; if (self->dbc != NULL) { EXTRACT_FROM_DOUBLE_LINKED_LIST(self); if (self->txn) { EXTRACT_FROM_DOUBLE_LINKED_LIST_TXN(self); self->txn=NULL; } MYDB_BEGIN_ALLOW_THREADS; err = _DBC_close(self->dbc); MYDB_END_ALLOW_THREADS; self->dbc = NULL; } RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBC_close(DBCursorObject* self) { return DBC_close_internal(self); } static PyObject* DBC_count(DBCursorObject* self, PyObject* args) { int err = 0; db_recno_t count; int flags = 0; if (!PyArg_ParseTuple(args, "|i:count", &flags)) return NULL; CHECK_CURSOR_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = _DBC_count(self->dbc, &count, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(count); } static PyObject* DBC_current(DBCursorObject* self, PyObject* args, PyObject *kwargs) { return _DBCursor_get(self,DB_CURRENT,args,kwargs,"|iii:current"); } static PyObject* DBC_delete(DBCursorObject* self, PyObject* args) { int err, flags=0; if (!PyArg_ParseTuple(args, "|i:delete", &flags)) return NULL; CHECK_CURSOR_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = _DBC_del(self->dbc, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBC_dup(DBCursorObject* self, PyObject* args) { int err, flags =0; DBC* dbc = NULL; if (!PyArg_ParseTuple(args, "|i:dup", &flags)) return NULL; CHECK_CURSOR_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = _DBC_dup(self->dbc, &dbc, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return (PyObject*) newDBCursorObject(dbc, self->txn, self->mydb); } static PyObject* DBC_first(DBCursorObject* self, PyObject* args, PyObject* kwargs) { return _DBCursor_get(self,DB_FIRST,args,kwargs,"|iii:first"); } static PyObject* DBC_get(DBCursorObject* self, PyObject* args, PyObject *kwargs) { int err, flags=0; PyObject* keyobj = NULL; PyObject* dataobj = NULL; PyObject* retval = NULL; int dlen = -1; int doff 
= -1; DBT key, data; static char* kwnames[] = { "key","data", "flags", "dlen", "doff", NULL }; CLEAR_DBT(key); CLEAR_DBT(data); if (!PyArg_ParseTupleAndKeywords(args, kwargs, "i|ii:get", &kwnames[2], &flags, &dlen, &doff)) { PyErr_Clear(); if (!PyArg_ParseTupleAndKeywords(args, kwargs, "Oi|ii:get", &kwnames[1], &keyobj, &flags, &dlen, &doff)) { PyErr_Clear(); if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OOi|ii:get", kwnames, &keyobj, &dataobj, &flags, &dlen, &doff)) { return NULL; } } } CHECK_CURSOR_NOT_CLOSED(self); if (keyobj && !make_key_dbt(self->mydb, keyobj, &key, NULL)) return NULL; if ( (dataobj && !make_dbt(dataobj, &data)) || (!add_partial_dbt(&data, dlen, doff)) ) { FREE_DBT(key); /* 'make_key_dbt' could do a 'malloc' */ return NULL; } MYDB_BEGIN_ALLOW_THREADS; err = _DBC_get(self->dbc, &key, &data, flags); MYDB_END_ALLOW_THREADS; if ((err == DB_NOTFOUND || err == DB_KEYEMPTY) && self->mydb->moduleFlags.getReturnsNone) { Py_INCREF(Py_None); retval = Py_None; } else if (makeDBError(err)) { retval = NULL; } else { switch (_DB_get_type(self->mydb)) { case -1: retval = NULL; break; case DB_BTREE: case DB_HASH: default: retval = BuildValue_SS(key.data, key.size, data.data, data.size); break; case DB_RECNO: case DB_QUEUE: retval = BuildValue_IS(*((db_recno_t*)key.data), data.data, data.size); break; } } FREE_DBT(key); /* 'make_key_dbt' could do a 'malloc' */ return retval; } static PyObject* DBC_pget(DBCursorObject* self, PyObject* args, PyObject *kwargs) { int err, flags=0; PyObject* keyobj = NULL; PyObject* dataobj = NULL; PyObject* retval = NULL; int dlen = -1; int doff = -1; DBT key, pkey, data; static char* kwnames_keyOnly[] = { "key", "flags", "dlen", "doff", NULL }; static char* kwnames[] = { "key", "data", "flags", "dlen", "doff", NULL }; CLEAR_DBT(key); CLEAR_DBT(data); if (!PyArg_ParseTupleAndKeywords(args, kwargs, "i|ii:pget", &kwnames[2], &flags, &dlen, &doff)) { PyErr_Clear(); if (!PyArg_ParseTupleAndKeywords(args, kwargs, "Oi|ii:pget", kwnames_keyOnly, &keyobj, &flags, &dlen, &doff)) { PyErr_Clear(); if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OOi|ii:pget", kwnames, &keyobj, &dataobj, &flags, &dlen, &doff)) { return NULL; } } } CHECK_CURSOR_NOT_CLOSED(self); if (keyobj && !make_key_dbt(self->mydb, keyobj, &key, NULL)) return NULL; if ( (dataobj && !make_dbt(dataobj, &data)) || (!add_partial_dbt(&data, dlen, doff)) ) { FREE_DBT(key); /* 'make_key_dbt' could do a 'malloc' */ return NULL; } CLEAR_DBT(pkey); pkey.flags = DB_DBT_MALLOC; MYDB_BEGIN_ALLOW_THREADS; err = _DBC_pget(self->dbc, &key, &pkey, &data, flags); MYDB_END_ALLOW_THREADS; if ((err == DB_NOTFOUND || err == DB_KEYEMPTY) && self->mydb->moduleFlags.getReturnsNone) { Py_INCREF(Py_None); retval = Py_None; } else if (makeDBError(err)) { retval = NULL; } else { PyObject *pkeyObj; PyObject *dataObj; dataObj = Build_PyString(data.data, data.size); if (self->mydb->primaryDBType == DB_RECNO || self->mydb->primaryDBType == DB_QUEUE) pkeyObj = NUMBER_FromLong(*(int *)pkey.data); else pkeyObj = Build_PyString(pkey.data, pkey.size); if (key.data && key.size) /* return key, pkey and data */ { PyObject *keyObj; int type = _DB_get_type(self->mydb); if (type == DB_RECNO || type == DB_QUEUE) keyObj = NUMBER_FromLong(*(int *)key.data); else keyObj = Build_PyString(key.data, key.size); retval = PyTuple_Pack(3, keyObj, pkeyObj, dataObj); Py_DECREF(keyObj); FREE_DBT(key); /* 'make_key_dbt' could do a 'malloc' */ } else /* return just the pkey and data */ { retval = PyTuple_Pack(2, pkeyObj, dataObj); } Py_DECREF(dataObj); 
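/* PyTuple_Pack() took its own references to the packed items, so the remaining local references are released here. */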
Py_DECREF(pkeyObj); FREE_DBT(pkey); } /* the only time REALLOC should be set is if we used an integer * key that make_key_dbt malloc'd for us. always free these. */ if (key.flags & DB_DBT_REALLOC) { /* 'make_key_dbt' could do a 'malloc' */ FREE_DBT(key); } return retval; } static PyObject* DBC_get_recno(DBCursorObject* self) { int err; db_recno_t recno; DBT key; DBT data; CHECK_CURSOR_NOT_CLOSED(self); CLEAR_DBT(key); CLEAR_DBT(data); MYDB_BEGIN_ALLOW_THREADS; err = _DBC_get(self->dbc, &key, &data, DB_GET_RECNO); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); recno = *((db_recno_t*)data.data); return NUMBER_FromLong(recno); } static PyObject* DBC_last(DBCursorObject* self, PyObject* args, PyObject *kwargs) { return _DBCursor_get(self,DB_LAST,args,kwargs,"|iii:last"); } static PyObject* DBC_next(DBCursorObject* self, PyObject* args, PyObject *kwargs) { return _DBCursor_get(self,DB_NEXT,args,kwargs,"|iii:next"); } static PyObject* DBC_prev(DBCursorObject* self, PyObject* args, PyObject *kwargs) { return _DBCursor_get(self,DB_PREV,args,kwargs,"|iii:prev"); } static PyObject* DBC_put(DBCursorObject* self, PyObject* args, PyObject* kwargs) { int err, flags = 0; PyObject* keyobj, *dataobj; DBT key, data; static char* kwnames[] = { "key", "data", "flags", "dlen", "doff", NULL }; int dlen = -1; int doff = -1; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OO|iii:put", kwnames, &keyobj, &dataobj, &flags, &dlen, &doff)) return NULL; CHECK_CURSOR_NOT_CLOSED(self); if (!make_key_dbt(self->mydb, keyobj, &key, NULL)) return NULL; if (!make_dbt(dataobj, &data) || !add_partial_dbt(&data, dlen, doff) ) { FREE_DBT(key); /* 'make_key_dbt' could do a 'malloc' */ return NULL; } MYDB_BEGIN_ALLOW_THREADS; err = _DBC_put(self->dbc, &key, &data, flags); MYDB_END_ALLOW_THREADS; FREE_DBT(key); /* 'make_key_dbt' could do a 'malloc' */ RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBC_set(DBCursorObject* self, PyObject* args, PyObject *kwargs) { int err, flags = 0; DBT key, data; PyObject* retval, *keyobj; static char* kwnames[] = { "key", "flags", "dlen", "doff", NULL }; int dlen = -1; int doff = -1; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|iii:set", kwnames, &keyobj, &flags, &dlen, &doff)) return NULL; CHECK_CURSOR_NOT_CLOSED(self); if (!make_key_dbt(self->mydb, keyobj, &key, NULL)) return NULL; CLEAR_DBT(data); if (!add_partial_dbt(&data, dlen, doff)) { FREE_DBT(key); /* 'make_key_dbt' could do a 'malloc' */ return NULL; } MYDB_BEGIN_ALLOW_THREADS; err = _DBC_get(self->dbc, &key, &data, flags|DB_SET); MYDB_END_ALLOW_THREADS; if ((err == DB_NOTFOUND || err == DB_KEYEMPTY) && self->mydb->moduleFlags.cursorSetReturnsNone) { Py_INCREF(Py_None); retval = Py_None; } else if (makeDBError(err)) { retval = NULL; } else { switch (_DB_get_type(self->mydb)) { case -1: retval = NULL; break; case DB_BTREE: case DB_HASH: default: retval = BuildValue_SS(key.data, key.size, data.data, data.size); break; case DB_RECNO: case DB_QUEUE: retval = BuildValue_IS(*((db_recno_t*)key.data), data.data, data.size); break; } FREE_DBT(key); /* 'make_key_dbt' could do a 'malloc' */ } /* the only time REALLOC should be set is if we used an integer * key that make_key_dbt malloc'd for us. always free these. 
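* (For string keys make_key_dbt points the DBT at the Python object's * own buffer, so REALLOC is never set and nothing may be freed here.)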
*/ if (key.flags & DB_DBT_REALLOC) { FREE_DBT(key); /* 'make_key_dbt' could do a 'malloc' */ } return retval; } static PyObject* DBC_set_range(DBCursorObject* self, PyObject* args, PyObject* kwargs) { int err, flags = 0; DBT key, data; PyObject* retval, *keyobj; static char* kwnames[] = { "key", "flags", "dlen", "doff", NULL }; int dlen = -1; int doff = -1; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|iii:set_range", kwnames, &keyobj, &flags, &dlen, &doff)) return NULL; CHECK_CURSOR_NOT_CLOSED(self); if (!make_key_dbt(self->mydb, keyobj, &key, NULL)) return NULL; CLEAR_DBT(data); if (!add_partial_dbt(&data, dlen, doff)) { FREE_DBT(key); /* 'make_key_dbt' could do a 'malloc' */ return NULL; } MYDB_BEGIN_ALLOW_THREADS; err = _DBC_get(self->dbc, &key, &data, flags|DB_SET_RANGE); MYDB_END_ALLOW_THREADS; if ((err == DB_NOTFOUND || err == DB_KEYEMPTY) && self->mydb->moduleFlags.cursorSetReturnsNone) { Py_INCREF(Py_None); retval = Py_None; } else if (makeDBError(err)) { retval = NULL; } else { switch (_DB_get_type(self->mydb)) { case -1: retval = NULL; break; case DB_BTREE: case DB_HASH: default: retval = BuildValue_SS(key.data, key.size, data.data, data.size); break; case DB_RECNO: case DB_QUEUE: retval = BuildValue_IS(*((db_recno_t*)key.data), data.data, data.size); break; } FREE_DBT(key); /* 'make_key_dbt' could do a 'malloc' */ } /* the only time REALLOC should be set is if we used an integer * key that make_key_dbt malloc'd for us. always free these. */ if (key.flags & DB_DBT_REALLOC) { FREE_DBT(key); /* 'make_key_dbt' could do a 'malloc' */ } return retval; } static PyObject* _DBC_get_set_both(DBCursorObject* self, PyObject* keyobj, PyObject* dataobj, int flags, unsigned int returnsNone) { int err; DBT key, data; PyObject* retval; /* the caller did this: CHECK_CURSOR_NOT_CLOSED(self); */ if (!make_key_dbt(self->mydb, keyobj, &key, NULL)) return NULL; if (!make_dbt(dataobj, &data)) { FREE_DBT(key); /* 'make_key_dbt' could do a 'malloc' */ return NULL; } MYDB_BEGIN_ALLOW_THREADS; err = _DBC_get(self->dbc, &key, &data, flags|DB_GET_BOTH); MYDB_END_ALLOW_THREADS; if ((err == DB_NOTFOUND || err == DB_KEYEMPTY) && returnsNone) { Py_INCREF(Py_None); retval = Py_None; } else if (makeDBError(err)) { retval = NULL; } else { switch (_DB_get_type(self->mydb)) { case -1: retval = NULL; break; case DB_BTREE: case DB_HASH: default: retval = BuildValue_SS(key.data, key.size, data.data, data.size); break; case DB_RECNO: case DB_QUEUE: retval = BuildValue_IS(*((db_recno_t*)key.data), data.data, data.size); break; } } FREE_DBT(key); /* 'make_key_dbt' could do a 'malloc' */ return retval; } static PyObject* DBC_get_both(DBCursorObject* self, PyObject* args) { int flags=0; PyObject *keyobj, *dataobj; if (!PyArg_ParseTuple(args, "OO|i:get_both", &keyobj, &dataobj, &flags)) return NULL; /* if the cursor is closed, self->mydb may be invalid */ CHECK_CURSOR_NOT_CLOSED(self); return _DBC_get_set_both(self, keyobj, dataobj, flags, self->mydb->moduleFlags.getReturnsNone); } /* Return size of entry */ static PyObject* DBC_get_current_size(DBCursorObject* self) { int err, flags=DB_CURRENT; PyObject* retval = NULL; DBT key, data; CHECK_CURSOR_NOT_CLOSED(self); CLEAR_DBT(key); CLEAR_DBT(data); /* We don't allocate any memory, forcing a DB_BUFFER_SMALL error and thus getting the record size. 
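With DB_DBT_USERMEM and ulen == 0, any non-empty record overflows the zero-length buffer, so the get fails with DB_BUFFER_SMALL while Berkeley DB still reports the full record length in data.size.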
*/ data.flags = DB_DBT_USERMEM; data.ulen = 0; MYDB_BEGIN_ALLOW_THREADS; err = _DBC_get(self->dbc, &key, &data, flags); MYDB_END_ALLOW_THREADS; if (err == DB_BUFFER_SMALL || !err) { /* DB_BUFFER_SMALL means positive size, !err means zero length value */ retval = NUMBER_FromLong((long)data.size); err = 0; } RETURN_IF_ERR(); return retval; } static PyObject* DBC_set_both(DBCursorObject* self, PyObject* args) { int flags=0; PyObject *keyobj, *dataobj; if (!PyArg_ParseTuple(args, "OO|i:set_both", &keyobj, &dataobj, &flags)) return NULL; /* if the cursor is closed, self->mydb may be invalid */ CHECK_CURSOR_NOT_CLOSED(self); return _DBC_get_set_both(self, keyobj, dataobj, flags, self->mydb->moduleFlags.cursorSetReturnsNone); } static PyObject* DBC_set_recno(DBCursorObject* self, PyObject* args, PyObject *kwargs) { int err, irecno, flags=0; db_recno_t recno; DBT key, data; PyObject* retval; int dlen = -1; int doff = -1; static char* kwnames[] = { "recno","flags", "dlen", "doff", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "i|iii:set_recno", kwnames, &irecno, &flags, &dlen, &doff)) return NULL; CHECK_CURSOR_NOT_CLOSED(self); CLEAR_DBT(key); recno = (db_recno_t) irecno; /* use allocated space so DB will be able to realloc room for the real * key */ key.data = malloc(sizeof(db_recno_t)); if (key.data == NULL) { PyErr_SetString(PyExc_MemoryError, "Key memory allocation failed"); return NULL; } key.size = sizeof(db_recno_t); key.ulen = key.size; memcpy(key.data, &recno, sizeof(db_recno_t)); key.flags = DB_DBT_REALLOC; CLEAR_DBT(data); if (!add_partial_dbt(&data, dlen, doff)) { FREE_DBT(key); return NULL; } MYDB_BEGIN_ALLOW_THREADS; err = _DBC_get(self->dbc, &key, &data, flags|DB_SET_RECNO); MYDB_END_ALLOW_THREADS; if ((err == DB_NOTFOUND || err == DB_KEYEMPTY) && self->mydb->moduleFlags.cursorSetReturnsNone) { Py_INCREF(Py_None); retval = Py_None; } else if (makeDBError(err)) { retval = NULL; } else { /* Can only be used for BTrees, so no need to return int key */ retval = BuildValue_SS(key.data, key.size, data.data, data.size); } FREE_DBT(key); return retval; } static PyObject* DBC_consume(DBCursorObject* self, PyObject* args, PyObject *kwargs) { return _DBCursor_get(self,DB_CONSUME,args,kwargs,"|iii:consume"); } static PyObject* DBC_next_dup(DBCursorObject* self, PyObject* args, PyObject *kwargs) { return _DBCursor_get(self,DB_NEXT_DUP,args,kwargs,"|iii:next_dup"); } static PyObject* DBC_next_nodup(DBCursorObject* self, PyObject* args, PyObject *kwargs) { return _DBCursor_get(self,DB_NEXT_NODUP,args,kwargs,"|iii:next_nodup"); } static PyObject* DBC_prev_dup(DBCursorObject* self, PyObject* args, PyObject *kwargs) { return _DBCursor_get(self,DB_PREV_DUP,args,kwargs,"|iii:prev_dup"); } static PyObject* DBC_prev_nodup(DBCursorObject* self, PyObject* args, PyObject *kwargs) { return _DBCursor_get(self,DB_PREV_NODUP,args,kwargs,"|iii:prev_nodup"); } static PyObject* DBC_join_item(DBCursorObject* self, PyObject* args) { int err, flags=0; DBT key, data; PyObject* retval; if (!PyArg_ParseTuple(args, "|i:join_item", &flags)) return NULL; CHECK_CURSOR_NOT_CLOSED(self); CLEAR_DBT(key); CLEAR_DBT(data); MYDB_BEGIN_ALLOW_THREADS; err = _DBC_get(self->dbc, &key, &data, flags | DB_JOIN_ITEM); MYDB_END_ALLOW_THREADS; if ((err == DB_NOTFOUND || err == DB_KEYEMPTY) && self->mydb->moduleFlags.getReturnsNone) { Py_INCREF(Py_None); retval = Py_None; } else if (makeDBError(err)) { retval = NULL; } else { retval = BuildValue_S(key.data, key.size); } return retval; } static PyObject* 
DBC_set_priority(DBCursorObject* self, PyObject* args, PyObject* kwargs) { int err, priority; static char* kwnames[] = { "priority", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "i:set_priority", kwnames, &priority)) return NULL; CHECK_CURSOR_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->dbc->set_priority(self->dbc, priority); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBC_get_priority(DBCursorObject* self) { int err; DB_CACHE_PRIORITY priority; CHECK_CURSOR_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->dbc->get_priority(self->dbc, &priority); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(priority); } /* --------------------------------------------------------------------- */ /* DBEnv methods */ static PyObject* DBEnv_close_internal(DBEnvObject* self, int flags) { PyObject *dummy; int err; if (!self->closed) { /* Don't close more than once */ while(self->children_txns) { dummy = DBTxn_abort_discard_internal(self->children_txns, 0); Py_XDECREF(dummy); } while(self->children_dbs) { dummy = DB_close_internal(self->children_dbs, 0, 0); Py_XDECREF(dummy); } while(self->children_logcursors) { dummy = DBLogCursor_close_internal(self->children_logcursors); Py_XDECREF(dummy); } #if (DBVER >= 52) while(self->children_sites) { dummy = DBSite_close_internal(self->children_sites); Py_XDECREF(dummy); } #endif } self->closed = 1; if (self->db_env) { MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->close(self->db_env, flags); MYDB_END_ALLOW_THREADS; /* after calling DBEnv->close, regardless of error, this DBEnv * may not be accessed again (Berkeley DB docs). */ self->db_env = NULL; RETURN_IF_ERR(); } RETURN_NONE(); } static PyObject* DBEnv_close(DBEnvObject* self, PyObject* args) { int flags = 0; if (!PyArg_ParseTuple(args, "|i:close", &flags)) return NULL; return DBEnv_close_internal(self, flags); } static PyObject* DBEnv_open(DBEnvObject* self, PyObject* args) { int err, flags=0, mode=0660; char *db_home; if (!PyArg_ParseTuple(args, "z|ii:open", &db_home, &flags, &mode)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->open(self->db_env, db_home, flags, mode); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); self->closed = 0; self->flags = flags; RETURN_NONE(); } static PyObject* DBEnv_memp_stat(DBEnvObject* self, PyObject* args, PyObject *kwargs) { int err; DB_MPOOL_STAT *gsp; DB_MPOOL_FSTAT **fsp, **fsp2; PyObject* d = NULL, *d2, *d3, *r; u_int32_t flags = 0; static char* kwnames[] = { "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:memp_stat", kwnames, &flags)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->memp_stat(self->db_env, &gsp, &fsp, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); /* Turn the stat structure into a dictionary */ d = PyDict_New(); if (d == NULL) { if (gsp) free(gsp); return NULL; } #define MAKE_ENTRY(name) _addIntToDict(d, #name, gsp->st_##name) MAKE_ENTRY(gbytes); MAKE_ENTRY(bytes); MAKE_ENTRY(ncache); MAKE_ENTRY(max_ncache); MAKE_ENTRY(regsize); MAKE_ENTRY(mmapsize); MAKE_ENTRY(maxopenfd); MAKE_ENTRY(maxwrite); MAKE_ENTRY(maxwrite_sleep); MAKE_ENTRY(map); MAKE_ENTRY(cache_hit); MAKE_ENTRY(cache_miss); MAKE_ENTRY(page_create); MAKE_ENTRY(page_in); MAKE_ENTRY(page_out); MAKE_ENTRY(ro_evict); MAKE_ENTRY(rw_evict); MAKE_ENTRY(page_trickle); MAKE_ENTRY(pages); MAKE_ENTRY(page_clean); MAKE_ENTRY(page_dirty); MAKE_ENTRY(hash_buckets); MAKE_ENTRY(hash_searches); MAKE_ENTRY(hash_longest); MAKE_ENTRY(hash_examined); 
MAKE_ENTRY(hash_nowait); MAKE_ENTRY(hash_wait); MAKE_ENTRY(hash_max_nowait); MAKE_ENTRY(hash_max_wait); MAKE_ENTRY(region_wait); MAKE_ENTRY(region_nowait); MAKE_ENTRY(mvcc_frozen); MAKE_ENTRY(mvcc_thawed); MAKE_ENTRY(mvcc_freed); MAKE_ENTRY(alloc); MAKE_ENTRY(alloc_buckets); MAKE_ENTRY(alloc_max_buckets); MAKE_ENTRY(alloc_pages); MAKE_ENTRY(alloc_max_pages); MAKE_ENTRY(io_wait); #if (DBVER >= 48) MAKE_ENTRY(sync_interrupted); #endif #undef MAKE_ENTRY free(gsp); d2 = PyDict_New(); if (d2 == NULL) { Py_DECREF(d); if (fsp) free(fsp); return NULL; } #define MAKE_ENTRY(name) _addIntToDict(d3, #name, (*fsp2)->st_##name) for(fsp2=fsp;*fsp2; fsp2++) { d3 = PyDict_New(); if (d3 == NULL) { Py_DECREF(d); Py_DECREF(d2); if (fsp) free(fsp); return NULL; } MAKE_ENTRY(pagesize); MAKE_ENTRY(cache_hit); MAKE_ENTRY(cache_miss); MAKE_ENTRY(map); MAKE_ENTRY(page_create); MAKE_ENTRY(page_in); MAKE_ENTRY(page_out); if(PyDict_SetItemString(d2, (*fsp2)->file_name, d3)) { Py_DECREF(d); Py_DECREF(d2); Py_DECREF(d3); if (fsp) free(fsp); return NULL; } Py_DECREF(d3); } #undef MAKE_ENTRY free(fsp); r = PyTuple_Pack(2, d, d2); Py_DECREF(d); Py_DECREF(d2); return r; } static PyObject* DBEnv_memp_stat_print(DBEnvObject* self, PyObject* args, PyObject *kwargs) { int err; int flags=0; static char* kwnames[] = { "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:memp_stat_print", kwnames, &flags)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->memp_stat_print(self->db_env, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_memp_trickle(DBEnvObject* self, PyObject* args) { int err, percent, nwrotep; if (!PyArg_ParseTuple(args, "i:memp_trickle", &percent)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->memp_trickle(self->db_env, percent, &nwrotep); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(nwrotep); } static PyObject* DBEnv_memp_sync(DBEnvObject* self, PyObject* args) { int err; DB_LSN lsn = {0, 0}; DB_LSN *lsn_p = NULL; if (!PyArg_ParseTuple(args, "|(ii):memp_sync", &lsn.file, &lsn.offset)) return NULL; if ((lsn.file!=0) || (lsn.offset!=0)) { lsn_p = &lsn; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->memp_sync(self->db_env, lsn_p); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_remove(DBEnvObject* self, PyObject* args) { int err, flags=0; char *db_home; if (!PyArg_ParseTuple(args, "s|i:remove", &db_home, &flags)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->remove(self->db_env, db_home, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_dbremove(DBEnvObject* self, PyObject* args, PyObject* kwargs) { int err; u_int32_t flags=0; char *file = NULL; char *database = NULL; PyObject *txnobj = NULL; DB_TXN *txn = NULL; static char* kwnames[] = { "file", "database", "txn", "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|zOi:dbremove", kwnames, &file, &database, &txnobj, &flags)) { return NULL; } if (!checkTxnObj(txnobj, &txn)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->dbremove(self->db_env, txn, file, database, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_dbrename(DBEnvObject* self, PyObject* args, PyObject* kwargs) { int err; u_int32_t flags=0; char *file = NULL; char *database = NULL; char *newname = NULL; PyObject 
*txnobj = NULL; DB_TXN *txn = NULL; static char* kwnames[] = { "file", "database", "newname", "txn", "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "szs|Oi:dbrename", kwnames, &file, &database, &newname, &txnobj, &flags)) { return NULL; } if (!checkTxnObj(txnobj, &txn)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->dbrename(self->db_env, txn, file, database, newname, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_set_encrypt(DBEnvObject* self, PyObject* args, PyObject* kwargs) { int err; u_int32_t flags=0; char *passwd = NULL; static char* kwnames[] = { "passwd", "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|i:set_encrypt", kwnames, &passwd, &flags)) { return NULL; } MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_encrypt(self->db_env, passwd, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_encrypt_flags(DBEnvObject* self) { int err; u_int32_t flags; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_encrypt_flags(self->db_env, &flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(flags); } static PyObject* DBEnv_get_timeout(DBEnvObject* self, PyObject* args, PyObject* kwargs) { int err; int flag; u_int32_t timeout; static char* kwnames[] = {"flag", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "i:get_timeout", kwnames, &flag)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_timeout(self->db_env, &timeout, flag); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(timeout); } static PyObject* DBEnv_set_timeout(DBEnvObject* self, PyObject* args, PyObject* kwargs) { int err; u_int32_t flags=0; u_int32_t timeout = 0; static char* kwnames[] = { "timeout", "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "ii:set_timeout", kwnames, &timeout, &flags)) { return NULL; } MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_timeout(self->db_env, (db_timeout_t)timeout, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_set_shm_key(DBEnvObject* self, PyObject* args) { int err; long shm_key = 0; if (!PyArg_ParseTuple(args, "l:set_shm_key", &shm_key)) return NULL; CHECK_ENV_NOT_CLOSED(self); err = self->db_env->set_shm_key(self->db_env, shm_key); RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_shm_key(DBEnvObject* self) { int err; long shm_key; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_shm_key(self->db_env, &shm_key); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(shm_key); } static PyObject* DBEnv_set_cache_max(DBEnvObject* self, PyObject* args) { int err, gbytes, bytes; if (!PyArg_ParseTuple(args, "ii:set_cache_max", &gbytes, &bytes)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_cache_max(self->db_env, gbytes, bytes); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_cache_max(DBEnvObject* self) { int err; u_int32_t gbytes, bytes; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_cache_max(self->db_env, &gbytes, &bytes); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return Py_BuildValue("(ii)", gbytes, bytes); } static PyObject* DBEnv_set_thread_count(DBEnvObject* self, PyObject* args) { int err; u_int32_t count; if (!PyArg_ParseTuple(args, "i:set_thread_count", &count)) return NULL; 
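/* Only meaningful before DBEnv.open(): Berkeley DB uses the count to size the thread-tracking table consulted by DB_ENV->failchk(). */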
CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_thread_count(self->db_env, count); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_thread_count(DBEnvObject* self) { int err; u_int32_t count; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_thread_count(self->db_env, &count); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(count); } static PyObject* DBEnv_set_cachesize(DBEnvObject* self, PyObject* args) { int err, gbytes=0, bytes=0, ncache=0; if (!PyArg_ParseTuple(args, "ii|i:set_cachesize", &gbytes, &bytes, &ncache)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_cachesize(self->db_env, gbytes, bytes, ncache); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_cachesize(DBEnvObject* self) { int err; u_int32_t gbytes, bytes; int ncache; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_cachesize(self->db_env, &gbytes, &bytes, &ncache); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return Py_BuildValue("(iii)", gbytes, bytes, ncache); } static PyObject* DBEnv_set_flags(DBEnvObject* self, PyObject* args) { int err, flags=0, onoff=0; if (!PyArg_ParseTuple(args, "ii:set_flags", &flags, &onoff)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_flags(self->db_env, flags, onoff); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_flags(DBEnvObject* self) { int err; u_int32_t flags; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_flags(self->db_env, &flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(flags); } static PyObject* DBEnv_log_set_config(DBEnvObject* self, PyObject* args) { int err, flags, onoff; if (!PyArg_ParseTuple(args, "ii:log_set_config", &flags, &onoff)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->log_set_config(self->db_env, flags, onoff); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_log_get_config(DBEnvObject* self, PyObject* args) { int err, flag, onoff; if (!PyArg_ParseTuple(args, "i:log_get_config", &flag)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->log_get_config(self->db_env, flag, &onoff); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return PyBool_FromLong(onoff); } static PyObject* DBEnv_mutex_set_max(DBEnvObject* self, PyObject* args) { int err; int value; if (!PyArg_ParseTuple(args, "i:mutex_set_max", &value)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->mutex_set_max(self->db_env, value); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_mutex_get_max(DBEnvObject* self) { int err; u_int32_t value; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->mutex_get_max(self->db_env, &value); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(value); } static PyObject* DBEnv_mutex_set_align(DBEnvObject* self, PyObject* args) { int err; int align; if (!PyArg_ParseTuple(args, "i:mutex_set_align", &align)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->mutex_set_align(self->db_env, align); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_mutex_get_align(DBEnvObject* self) { int err; u_int32_t align; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; 
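/* Query the configured mutex alignment, in bytes. */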
err = self->db_env->mutex_get_align(self->db_env, &align); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(align); } static PyObject* DBEnv_mutex_set_increment(DBEnvObject* self, PyObject* args) { int err; int increment; if (!PyArg_ParseTuple(args, "i:mutex_set_increment", &increment)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->mutex_set_increment(self->db_env, increment); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_mutex_get_increment(DBEnvObject* self) { int err; u_int32_t increment; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->mutex_get_increment(self->db_env, &increment); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(increment); } static PyObject* DBEnv_mutex_set_tas_spins(DBEnvObject* self, PyObject* args) { int err; int tas_spins; if (!PyArg_ParseTuple(args, "i:mutex_set_tas_spins", &tas_spins)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->mutex_set_tas_spins(self->db_env, tas_spins); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_mutex_get_tas_spins(DBEnvObject* self) { int err; u_int32_t tas_spins; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->mutex_get_tas_spins(self->db_env, &tas_spins); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(tas_spins); } static PyObject* DBEnv_set_data_dir(DBEnvObject* self, PyObject* args) { int err; char *dir; if (!PyArg_ParseTuple(args, "s:set_data_dir", &dir)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_data_dir(self->db_env, dir); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_data_dirs(DBEnvObject* self) { int err; PyObject *tuple; PyObject *item; const char **dirpp; int size, i; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_data_dirs(self->db_env, &dirpp); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); /* ** Calculate size. Python C API ** actually allows for tuple resizing, ** but this is simple enough. 
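** The array returned by get_data_dirs() is NULL-terminated, so the
** loop below just counts entries up to that sentinel; the Python
** caller gets them back as a tuple of bytes objects.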
*/ for (size=0; *(dirpp+size) ; size++); tuple = PyTuple_New(size); if (!tuple) return NULL; for (i=0; i<size; i++) { item = PyBytes_FromString (*(dirpp+i)); if (item == NULL) { Py_DECREF(tuple); tuple = NULL; break; } PyTuple_SET_ITEM(tuple, i, item); } return tuple; } static PyObject* DBEnv_set_lg_filemode(DBEnvObject* self, PyObject* args) { int err, filemode; if (!PyArg_ParseTuple(args, "i:set_lg_filemode", &filemode)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_lg_filemode(self->db_env, filemode); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_lg_filemode(DBEnvObject* self) { int err, filemode; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_lg_filemode(self->db_env, &filemode); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(filemode); } static PyObject* DBEnv_set_lg_bsize(DBEnvObject* self, PyObject* args) { int err, lg_bsize; if (!PyArg_ParseTuple(args, "i:set_lg_bsize", &lg_bsize)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_lg_bsize(self->db_env, lg_bsize); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_lg_bsize(DBEnvObject* self) { int err; u_int32_t lg_bsize; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_lg_bsize(self->db_env, &lg_bsize); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(lg_bsize); } static PyObject* DBEnv_set_lg_dir(DBEnvObject* self, PyObject* args) { int err; char *dir; if (!PyArg_ParseTuple(args, "s:set_lg_dir", &dir)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_lg_dir(self->db_env, dir); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_lg_dir(DBEnvObject* self) { int err; const char *dirp; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_lg_dir(self->db_env, &dirp); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return PyBytes_FromString(dirp); } static PyObject* DBEnv_set_lg_max(DBEnvObject* self, PyObject* args) { int err, lg_max; if (!PyArg_ParseTuple(args, "i:set_lg_max", &lg_max)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_lg_max(self->db_env, lg_max); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_lg_max(DBEnvObject* self) { int err; u_int32_t lg_max; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_lg_max(self->db_env, &lg_max); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(lg_max); } static PyObject* DBEnv_set_lg_regionmax(DBEnvObject* self, PyObject* args) { int err, lg_max; if (!PyArg_ParseTuple(args, "i:set_lg_regionmax", &lg_max)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_lg_regionmax(self->db_env, lg_max); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_lg_regionmax(DBEnvObject* self) { int err; u_int32_t lg_regionmax; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_lg_regionmax(self->db_env, &lg_regionmax); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(lg_regionmax); } static PyObject* DBEnv_set_lk_partitions(DBEnvObject* self, PyObject* args) { int err, lk_partitions; if (!PyArg_ParseTuple(args, "i:set_lk_partitions", &lk_partitions)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_lk_partitions(self->db_env, lk_partitions); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_lk_partitions(DBEnvObject* self) { int err; u_int32_t lk_partitions; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_lk_partitions(self->db_env, &lk_partitions); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return
NUMBER_FromLong(lk_partitions); } static PyObject* DBEnv_set_lk_detect(DBEnvObject* self, PyObject* args) { int err, lk_detect; if (!PyArg_ParseTuple(args, "i:set_lk_detect", &lk_detect)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_lk_detect(self->db_env, lk_detect); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_lk_detect(DBEnvObject* self) { int err; u_int32_t lk_detect; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_lk_detect(self->db_env, &lk_detect); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(lk_detect); } static PyObject* DBEnv_set_lk_max_locks(DBEnvObject* self, PyObject* args) { int err, max; if (!PyArg_ParseTuple(args, "i:set_lk_max_locks", &max)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_lk_max_locks(self->db_env, max); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_lk_max_locks(DBEnvObject* self) { int err; u_int32_t lk_max; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_lk_max_locks(self->db_env, &lk_max); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(lk_max); } static PyObject* DBEnv_set_lk_max_lockers(DBEnvObject* self, PyObject* args) { int err, max; if (!PyArg_ParseTuple(args, "i:set_lk_max_lockers", &max)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_lk_max_lockers(self->db_env, max); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_lk_max_lockers(DBEnvObject* self) { int err; u_int32_t lk_max; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_lk_max_lockers(self->db_env, &lk_max); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(lk_max); } static PyObject* DBEnv_set_lk_max_objects(DBEnvObject* self, PyObject* args) { int err, max; if (!PyArg_ParseTuple(args, "i:set_lk_max_objects", &max)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_lk_max_objects(self->db_env, max); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_lk_max_objects(DBEnvObject* self) { int err; u_int32_t lk_max; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_lk_max_objects(self->db_env, &lk_max); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(lk_max); } static PyObject* DBEnv_get_mp_mmapsize(DBEnvObject* self) { int err; size_t mmapsize; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_mp_mmapsize(self->db_env, &mmapsize); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(mmapsize); } static PyObject* DBEnv_set_mp_mmapsize(DBEnvObject* self, PyObject* args) { int err, mp_mmapsize; if (!PyArg_ParseTuple(args, "i:set_mp_mmapsize", &mp_mmapsize)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_mp_mmapsize(self->db_env, mp_mmapsize); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_set_tmp_dir(DBEnvObject* self, PyObject* args) { int err; char *dir; if (!PyArg_ParseTuple(args, "s:set_tmp_dir", &dir)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_tmp_dir(self->db_env, dir); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_tmp_dir(DBEnvObject* self) { int err; const char *dirpp; 
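/* The string returned by get_tmp_dir() is owned by Berkeley DB; PyBytes_FromString() copies it, so it is not freed here. */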
CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_tmp_dir(self->db_env, &dirpp); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return PyBytes_FromString(dirpp); } static PyObject* DBEnv_txn_recover(DBEnvObject* self) { int flags = DB_FIRST; int err, i; PyObject *list, *tuple, *gid; DBTxnObject *txn; #define PREPLIST_LEN 16 DB_PREPLIST preplist[PREPLIST_LEN]; #if (DBVER < 48) || (DBVER >= 52) long retp; #else u_int32_t retp; #endif CHECK_ENV_NOT_CLOSED(self); list=PyList_New(0); if (!list) return NULL; while (!0) { MYDB_BEGIN_ALLOW_THREADS err=self->db_env->txn_recover(self->db_env, preplist, PREPLIST_LEN, &retp, flags); #undef PREPLIST_LEN MYDB_END_ALLOW_THREADS if (err) { Py_DECREF(list); RETURN_IF_ERR(); } if (!retp) break; flags=DB_NEXT; /* Prepare for next loop pass */ for (i=0; i<retp; i++) { gid=PyBytes_FromStringAndSize((char *)(preplist[i].gid), DB_GID_SIZE); if (!gid) { Py_DECREF(list); return NULL; } txn=newDBTxnObject(self, NULL, preplist[i].txn, 0); if (!txn) { Py_DECREF(list); Py_DECREF(gid); return NULL; } txn->flag_prepare=1; /* Recover state */ tuple=PyTuple_New(2); if (!tuple) { Py_DECREF(list); Py_DECREF(gid); Py_DECREF(txn); return NULL; } if (PyTuple_SetItem(tuple, 0, gid)) { Py_DECREF(list); Py_DECREF(gid); Py_DECREF(txn); Py_DECREF(tuple); return NULL; } if (PyTuple_SetItem(tuple, 1, (PyObject *)txn)) { Py_DECREF(list); Py_DECREF(txn); Py_DECREF(tuple); /* This deletes the "gid" also */ return NULL; } if (PyList_Append(list, tuple)) { Py_DECREF(list); Py_DECREF(tuple);/* This deletes the "gid" and the "txn" also */ return NULL; } Py_DECREF(tuple); } } return list; } static PyObject* DBEnv_txn_begin(DBEnvObject* self, PyObject* args, PyObject* kwargs) { int flags = 0; PyObject* txnobj = NULL; DB_TXN *txn = NULL; static char* kwnames[] = { "parent", "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|Oi:txn_begin", kwnames, &txnobj, &flags)) return NULL; if (!checkTxnObj(txnobj, &txn)) return NULL; CHECK_ENV_NOT_CLOSED(self); return (PyObject*)newDBTxnObject(self, (DBTxnObject *)txnobj, NULL, flags); } static PyObject* DBEnv_txn_checkpoint(DBEnvObject* self, PyObject* args) { int err, kbyte=0, min=0, flags=0; if (!PyArg_ParseTuple(args, "|iii:txn_checkpoint", &kbyte, &min, &flags)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->txn_checkpoint(self->db_env, kbyte, min, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_tx_max(DBEnvObject* self) { int err; u_int32_t max; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_tx_max(self->db_env, &max); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return PyLong_FromUnsignedLong(max); } static PyObject* DBEnv_set_tx_max(DBEnvObject* self, PyObject* args) { int err, max; if (!PyArg_ParseTuple(args, "i:set_tx_max", &max)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_tx_max(self->db_env, max); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_tx_timestamp(DBEnvObject* self) { int err; time_t timestamp; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_tx_timestamp(self->db_env, &timestamp); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(timestamp); } static PyObject* DBEnv_set_tx_timestamp(DBEnvObject* self, PyObject* args) { int err; long stamp; time_t timestamp; if (!PyArg_ParseTuple(args, "l:set_tx_timestamp", &stamp)) return NULL; CHECK_ENV_NOT_CLOSED(self); timestamp = (time_t)stamp; MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_tx_timestamp(self->db_env, &timestamp); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_lock_detect(DBEnvObject* self, PyObject* args) { int err, atype,
flags=0; int aborted = 0; if (!PyArg_ParseTuple(args, "i|i:lock_detect", &atype, &flags)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->lock_detect(self->db_env, flags, atype, &aborted); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(aborted); } static PyObject* DBEnv_lock_get(DBEnvObject* self, PyObject* args) { int flags=0; int locker, lock_mode; DBT obj; PyObject* objobj; if (!PyArg_ParseTuple(args, "iOi|i:lock_get", &locker, &objobj, &lock_mode, &flags)) return NULL; if (!make_dbt(objobj, &obj)) return NULL; return (PyObject*)newDBLockObject(self, locker, &obj, lock_mode, flags); } static PyObject* DBEnv_lock_id(DBEnvObject* self) { int err; u_int32_t theID; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->lock_id(self->db_env, &theID); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong((long)theID); } static PyObject* DBEnv_lock_id_free(DBEnvObject* self, PyObject* args) { int err; u_int32_t theID; if (!PyArg_ParseTuple(args, "I:lock_id_free", &theID)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->lock_id_free(self->db_env, theID); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_lock_put(DBEnvObject* self, PyObject* args) { int err; DBLockObject* dblockobj; if (!PyArg_ParseTuple(args, "O!:lock_put", &DBLock_Type, &dblockobj)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->lock_put(self->db_env, &dblockobj->lock); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_fileid_reset(DBEnvObject* self, PyObject* args, PyObject* kwargs) { int err; char *file; u_int32_t flags = 0; static char* kwnames[] = { "file", "flags", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "z|i:fileid_reset", kwnames, &file, &flags)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->fileid_reset(self->db_env, file, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_lsn_reset(DBEnvObject* self, PyObject* args, PyObject* kwargs) { int err; char *file; u_int32_t flags = 0; static char* kwnames[] = { "file", "flags", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "z|i:lsn_reset", kwnames, &file, &flags)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->lsn_reset(self->db_env, file, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_stat_print(DBEnvObject* self, PyObject* args, PyObject *kwargs) { int err; int flags=0; static char* kwnames[] = { "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:stat_print", kwnames, &flags)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->stat_print(self->db_env, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_log_stat(DBEnvObject* self, PyObject* args) { int err; DB_LOG_STAT* statp = NULL; PyObject* d = NULL; u_int32_t flags = 0; if (!PyArg_ParseTuple(args, "|i:log_stat", &flags)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->log_stat(self->db_env, &statp, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); /* Turn the stat structure into a dictionary */ d = PyDict_New(); if (d == NULL) { if (statp) free(statp); return NULL; } #define MAKE_ENTRY(name) _addIntToDict(d, #name, statp->st_##name) MAKE_ENTRY(magic); MAKE_ENTRY(version); MAKE_ENTRY(mode); 
MAKE_ENTRY(lg_bsize); MAKE_ENTRY(lg_size); MAKE_ENTRY(record); MAKE_ENTRY(w_mbytes); MAKE_ENTRY(w_bytes); MAKE_ENTRY(wc_mbytes); MAKE_ENTRY(wc_bytes); MAKE_ENTRY(wcount); MAKE_ENTRY(wcount_fill); MAKE_ENTRY(rcount); MAKE_ENTRY(scount); MAKE_ENTRY(cur_file); MAKE_ENTRY(cur_offset); MAKE_ENTRY(disk_file); MAKE_ENTRY(disk_offset); MAKE_ENTRY(maxcommitperflush); MAKE_ENTRY(mincommitperflush); MAKE_ENTRY(regsize); MAKE_ENTRY(region_wait); MAKE_ENTRY(region_nowait); #undef MAKE_ENTRY free(statp); return d; } /* DBEnv_log_stat */ static PyObject* DBEnv_log_stat_print(DBEnvObject* self, PyObject* args, PyObject *kwargs) { int err; int flags=0; static char* kwnames[] = { "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:log_stat_print", kwnames, &flags)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->log_stat_print(self->db_env, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_lock_stat(DBEnvObject* self, PyObject* args) { int err; DB_LOCK_STAT* sp; PyObject* d = NULL; u_int32_t flags = 0; if (!PyArg_ParseTuple(args, "|i:lock_stat", &flags)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->lock_stat(self->db_env, &sp, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); /* Turn the stat structure into a dictionary */ d = PyDict_New(); if (d == NULL) { free(sp); return NULL; } #define MAKE_ENTRY(name) _addIntToDict(d, #name, sp->st_##name) MAKE_ENTRY(id); MAKE_ENTRY(cur_maxid); MAKE_ENTRY(nmodes); MAKE_ENTRY(maxlocks); MAKE_ENTRY(maxlockers); MAKE_ENTRY(maxobjects); MAKE_ENTRY(nlocks); MAKE_ENTRY(maxnlocks); MAKE_ENTRY(nlockers); MAKE_ENTRY(maxnlockers); MAKE_ENTRY(nobjects); MAKE_ENTRY(maxnobjects); MAKE_ENTRY(nrequests); MAKE_ENTRY(nreleases); MAKE_ENTRY(nupgrade); MAKE_ENTRY(ndowngrade); MAKE_ENTRY(lock_nowait); MAKE_ENTRY(lock_wait); MAKE_ENTRY(ndeadlocks); MAKE_ENTRY(locktimeout); MAKE_ENTRY(txntimeout); MAKE_ENTRY(nlocktimeouts); MAKE_ENTRY(ntxntimeouts); MAKE_ENTRY(objs_wait); MAKE_ENTRY(objs_nowait); MAKE_ENTRY(lockers_wait); MAKE_ENTRY(lockers_nowait); MAKE_ENTRY(lock_wait); MAKE_ENTRY(lock_nowait); MAKE_ENTRY(hash_len); MAKE_ENTRY(regsize); MAKE_ENTRY(region_wait); MAKE_ENTRY(region_nowait); #undef MAKE_ENTRY free(sp); return d; } static PyObject* DBEnv_lock_stat_print(DBEnvObject* self, PyObject* args, PyObject *kwargs) { int err; int flags=0; static char* kwnames[] = { "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:lock_stat_print", kwnames, &flags)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->lock_stat_print(self->db_env, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_log_cursor(DBEnvObject* self) { int err; DB_LOGC* dblogc; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->log_cursor(self->db_env, &dblogc, 0); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return (PyObject*) newDBLogCursorObject(dblogc, self); } static PyObject* DBEnv_log_flush(DBEnvObject* self) { int err; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS err = self->db_env->log_flush(self->db_env, NULL); MYDB_END_ALLOW_THREADS RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_log_file(DBEnvObject* self, PyObject* args) { int err; DB_LSN lsn = {0, 0}; int size = 20; char *name = NULL; PyObject *retval; if (!PyArg_ParseTuple(args, "(ii):log_file", &lsn.file, &lsn.offset)) return NULL; CHECK_ENV_NOT_CLOSED(self); do { name = malloc(size); if (!name) { 
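/* malloc failure: raise MemoryError and give up */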
PyErr_NoMemory(); return NULL; } MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->log_file(self->db_env, &lsn, name, size); MYDB_END_ALLOW_THREADS; if (err == EINVAL) { free(name); size *= 2; } else if (err) { free(name); RETURN_IF_ERR(); assert(0); /* Unreachable... supposedly */ return NULL; } /* ** If the final buffer we try is too small, we will ** get this exception: ** DBInvalidArgError: ** (22, 'Invalid argument -- DB_ENV->log_file: name buffer is too short') */ } while ((err == EINVAL) && (size<(1<<17))); RETURN_IF_ERR(); /* Maybe the size is not the problem */ retval = Py_BuildValue("s", name); free(name); return retval; } static PyObject* DBEnv_log_printf(DBEnvObject* self, PyObject* args, PyObject *kwargs) { int err; char *string; PyObject *txnobj = NULL; DB_TXN *txn = NULL; static char* kwnames[] = {"string", "txn", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|O:log_printf", kwnames, &string, &txnobj)) return NULL; CHECK_ENV_NOT_CLOSED(self); if (!checkTxnObj(txnobj, &txn)) return NULL; /* ** Do not use the format string directly, to avoid attacks. */ MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->log_printf(self->db_env, txn, "%s", string); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_log_archive(DBEnvObject* self, PyObject* args) { int flags=0; int err; char **log_list = NULL; PyObject* list; PyObject* item = NULL; if (!PyArg_ParseTuple(args, "|i:log_archive", &flags)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->log_archive(self->db_env, &log_list, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); list = PyList_New(0); if (list == NULL) { if (log_list) free(log_list); return NULL; } if (log_list) { char **log_list_start; for (log_list_start = log_list; *log_list != NULL; ++log_list) { item = PyBytes_FromString (*log_list); if (item == NULL) { Py_DECREF(list); list = NULL; break; } if (PyList_Append(list, item)) { Py_DECREF(list); list = NULL; Py_DECREF(item); break; } Py_DECREF(item); } free(log_list_start); } return list; } #if (DBVER >= 52) static PyObject* DBEnv_repmgr_site(DBEnvObject* self, PyObject* args, PyObject *kwargs) { int err; DB_SITE* site; char *host; u_int port; static char* kwnames[] = {"host", "port", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "si:repmgr_site", kwnames, &host, &port)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->repmgr_site(self->db_env, host, port, &site, 0); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return (PyObject*) newDBSiteObject(site, self); } static PyObject* DBEnv_repmgr_site_by_eid(DBEnvObject* self, PyObject* args, PyObject *kwargs) { int err; DB_SITE* site; int eid; static char* kwnames[] = {"eid", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "i:repmgr_site_by_eid", kwnames, &eid)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->repmgr_site_by_eid(self->db_env, eid, &site); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return (PyObject*) newDBSiteObject(site, self); } #endif static PyObject* DBEnv_mutex_stat(DBEnvObject* self, PyObject* args) { int err; DB_MUTEX_STAT* statp = NULL; PyObject* d = NULL; u_int32_t flags = 0; if (!PyArg_ParseTuple(args, "|i:mutex_stat", &flags)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->mutex_stat(self->db_env, &statp, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); /* Turn the stat structure into a dictionary */ d = PyDict_New(); if (d == NULL) { if (statp) free(statp); return
NULL; } #define MAKE_ENTRY(name) _addIntToDict(d, #name, statp->st_##name) MAKE_ENTRY(mutex_align); MAKE_ENTRY(mutex_tas_spins); MAKE_ENTRY(mutex_cnt); MAKE_ENTRY(mutex_free); MAKE_ENTRY(mutex_inuse); MAKE_ENTRY(mutex_inuse_max); MAKE_ENTRY(regsize); MAKE_ENTRY(region_wait); MAKE_ENTRY(region_nowait); #undef MAKE_ENTRY free(statp); return d; } static PyObject* DBEnv_mutex_stat_print(DBEnvObject* self, PyObject* args, PyObject *kwargs) { int err; int flags=0; static char* kwnames[] = { "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:mutex_stat_print", kwnames, &flags)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->mutex_stat_print(self->db_env, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_txn_stat_print(DBEnvObject* self, PyObject* args, PyObject *kwargs) { int err; int flags=0; static char* kwnames[] = { "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:stat_print", kwnames, &flags)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->txn_stat_print(self->db_env, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_txn_stat(DBEnvObject* self, PyObject* args) { int err; DB_TXN_STAT* sp; PyObject* d = NULL; u_int32_t flags=0; if (!PyArg_ParseTuple(args, "|i:txn_stat", &flags)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->txn_stat(self->db_env, &sp, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); /* Turn the stat structure into a dictionary */ d = PyDict_New(); if (d == NULL) { free(sp); return NULL; } #define MAKE_ENTRY(name) _addIntToDict(d, #name, sp->st_##name) #define MAKE_TIME_T_ENTRY(name) _addTimeTToDict(d, #name, sp->st_##name) #define MAKE_DB_LSN_ENTRY(name) _addDB_lsnToDict(d, #name, sp->st_##name) MAKE_DB_LSN_ENTRY(last_ckp); MAKE_TIME_T_ENTRY(time_ckp); MAKE_ENTRY(last_txnid); MAKE_ENTRY(maxtxns); MAKE_ENTRY(nactive); MAKE_ENTRY(maxnactive); MAKE_ENTRY(nsnapshot); MAKE_ENTRY(maxnsnapshot); MAKE_ENTRY(nbegins); MAKE_ENTRY(naborts); MAKE_ENTRY(ncommits); MAKE_ENTRY(nrestores); MAKE_ENTRY(regsize); MAKE_ENTRY(region_wait); MAKE_ENTRY(region_nowait); #undef MAKE_DB_LSN_ENTRY #undef MAKE_ENTRY #undef MAKE_TIME_T_ENTRY free(sp); return d; } static PyObject* DBEnv_set_get_returns_none(DBEnvObject* self, PyObject* args) { int flags=0; int oldValue=0; if (!PyArg_ParseTuple(args,"i:set_get_returns_none", &flags)) return NULL; CHECK_ENV_NOT_CLOSED(self); if (self->moduleFlags.getReturnsNone) ++oldValue; if (self->moduleFlags.cursorSetReturnsNone) ++oldValue; self->moduleFlags.getReturnsNone = (flags >= 1); self->moduleFlags.cursorSetReturnsNone = (flags >= 2); return NUMBER_FromLong(oldValue); } static PyObject* DBEnv_get_private(DBEnvObject* self) { /* We can give out the private field even if dbenv is closed */ Py_INCREF(self->private_obj); return self->private_obj; } static PyObject* DBEnv_set_private(DBEnvObject* self, PyObject* private_obj) { /* We can set the private field even if dbenv is closed */ Py_DECREF(self->private_obj); Py_INCREF(private_obj); self->private_obj = private_obj; RETURN_NONE(); } static PyObject* DBEnv_set_intermediate_dir_mode(DBEnvObject* self, PyObject* args) { int err; const char *mode; if (!PyArg_ParseTuple(args,"s:set_intermediate_dir_mode", &mode)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_intermediate_dir_mode(self->db_env, mode); MYDB_END_ALLOW_THREADS; 
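/* "mode" is a nine-character permission string such as "rwxrwx---";
** from Python this is an illustrative (untested) sketch:
**     dbenv.set_intermediate_dir_mode("rwxrwx---")
*/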
RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_intermediate_dir_mode(DBEnvObject* self) { int err; const char *mode; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_intermediate_dir_mode(self->db_env, &mode); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return Py_BuildValue("s", mode); } static PyObject* DBEnv_get_open_flags(DBEnvObject* self) { int err; unsigned int flags; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_open_flags(self->db_env, &flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(flags); } #if (DBVER < 48) static PyObject* DBEnv_set_rpc_server(DBEnvObject* self, PyObject* args, PyObject* kwargs) { int err; char *host; long cl_timeout=0, sv_timeout=0; static char* kwnames[] = { "host", "cl_timeout", "sv_timeout", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|ll:set_rpc_server", kwnames, &host, &cl_timeout, &sv_timeout)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_rpc_server(self->db_env, NULL, host, cl_timeout, sv_timeout, 0); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } #endif static PyObject* DBEnv_set_mp_max_openfd(DBEnvObject* self, PyObject* args) { int err; int maxopenfd; if (!PyArg_ParseTuple(args, "i:set_mp_max_openfd", &maxopenfd)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_mp_max_openfd(self->db_env, maxopenfd); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_mp_max_openfd(DBEnvObject* self) { int err; int maxopenfd; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_mp_max_openfd(self->db_env, &maxopenfd); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(maxopenfd); } static PyObject* DBEnv_set_mp_max_write(DBEnvObject* self, PyObject* args) { int err; int maxwrite, maxwrite_sleep; if (!PyArg_ParseTuple(args, "ii:set_mp_max_write", &maxwrite, &maxwrite_sleep)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_mp_max_write(self->db_env, maxwrite, maxwrite_sleep); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_mp_max_write(DBEnvObject* self) { int err; int maxwrite; db_timeout_t maxwrite_sleep; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_mp_max_write(self->db_env, &maxwrite, &maxwrite_sleep); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return Py_BuildValue("(ii)", maxwrite, (int)maxwrite_sleep); } static PyObject* DBEnv_set_verbose(DBEnvObject* self, PyObject* args) { int err; int which, onoff; if (!PyArg_ParseTuple(args, "ii:set_verbose", &which, &onoff)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_verbose(self->db_env, which, onoff); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_get_verbose(DBEnvObject* self, PyObject* args) { int err; int which; int verbose; if (!PyArg_ParseTuple(args, "i:get_verbose", &which)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->get_verbose(self->db_env, which, &verbose); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return PyBool_FromLong(verbose); } static void _dbenv_event_notifyCallback(DB_ENV* db_env, u_int32_t event, void *event_info) { DBEnvObject *dbenv; PyObject* callback; PyObject* args; PyObject* result = NULL; MYDB_BEGIN_BLOCK_THREADS; dbenv = (DBEnvObject *)db_env->app_private; callback = 
dbenv->event_notifyCallback; if (callback) { if (event == DB_EVENT_REP_NEWMASTER) { args = Py_BuildValue("(Oii)", dbenv, event, *((int *)event_info)); } else { args = Py_BuildValue("(OiO)", dbenv, event, Py_None); } if (args) { result = PyEval_CallObject(callback, args); } if ((!args) || (!result)) { PyErr_Print(); } Py_XDECREF(args); Py_XDECREF(result); } MYDB_END_BLOCK_THREADS; } static PyObject* DBEnv_set_event_notify(DBEnvObject* self, PyObject* notifyFunc) { int err; CHECK_ENV_NOT_CLOSED(self); if (!PyCallable_Check(notifyFunc)) { makeTypeError("Callable", notifyFunc); return NULL; } Py_XDECREF(self->event_notifyCallback); Py_INCREF(notifyFunc); self->event_notifyCallback = notifyFunc; /* This is to workaround a problem with un-initialized threads (see comment in DB_associate) */ #ifdef WITH_THREAD PyEval_InitThreads(); #endif MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->set_event_notify(self->db_env, _dbenv_event_notifyCallback); MYDB_END_ALLOW_THREADS; if (err) { Py_DECREF(notifyFunc); self->event_notifyCallback = NULL; } RETURN_IF_ERR(); RETURN_NONE(); } /* --------------------------------------------------------------------- */ /* REPLICATION METHODS: Base Replication */ static PyObject* DBEnv_rep_process_message(DBEnvObject* self, PyObject* args) { int err; PyObject *control_py, *rec_py; DBT control, rec; int envid; DB_LSN lsn; if (!PyArg_ParseTuple(args, "OOi:rep_process_message", &control_py, &rec_py, &envid)) return NULL; CHECK_ENV_NOT_CLOSED(self); if (!make_dbt(control_py, &control)) return NULL; if (!make_dbt(rec_py, &rec)) return NULL; MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_process_message(self->db_env, &control, &rec, envid, &lsn); MYDB_END_ALLOW_THREADS; switch (err) { case DB_REP_NEWMASTER : return Py_BuildValue("(iO)", envid, Py_None); break; case DB_REP_DUPMASTER : case DB_REP_HOLDELECTION : case DB_REP_IGNORE : case DB_REP_JOIN_FAILURE : return Py_BuildValue("(iO)", err, Py_None); break; case DB_REP_NEWSITE : { PyObject *tmp, *r; if (!(tmp = PyBytes_FromStringAndSize(rec.data, rec.size))) { return NULL; } r = Py_BuildValue("(iO)", err, tmp); Py_DECREF(tmp); return r; break; } case DB_REP_NOTPERM : case DB_REP_ISPERM : return Py_BuildValue("(i(ll))", err, lsn.file, lsn.offset); break; } RETURN_IF_ERR(); return PyTuple_Pack(2, Py_None, Py_None); } static int _DBEnv_rep_transportCallback(DB_ENV* db_env, const DBT* control, const DBT* rec, const DB_LSN *lsn, int envid, u_int32_t flags) { DBEnvObject *dbenv; PyObject* rep_transport; PyObject* args; PyObject *a, *b; PyObject* result = NULL; int ret=0; MYDB_BEGIN_BLOCK_THREADS; dbenv = (DBEnvObject *)db_env->app_private; rep_transport = dbenv->rep_transport; /* ** The errors in 'a' or 'b' are detected in "Py_BuildValue". 
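** The Python transport callback is invoked as shown in this
** illustrative sketch ("transport" is a hypothetical user function):
**
**     def transport(dbenv, control, rec, lsn, envid, flags):
**         # ship "control" and "rec" to the site identified by "envid";
**         # "lsn" arrives as a (file, offset) tuple
**         return 0
**
**     dbenv.rep_set_transport(envid, transport)
**
** If the callback raises, the exception is printed and the failure is
** reported to Berkeley DB by returning -1 from this C wrapper.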
*/ a = PyBytes_FromStringAndSize(control->data, control->size); b = PyBytes_FromStringAndSize(rec->data, rec->size); args = Py_BuildValue( "(OOO(ll)iI)", dbenv, a, b, lsn->file, lsn->offset, envid, flags); if (args) { result = PyEval_CallObject(rep_transport, args); } if ((!args) || (!result)) { PyErr_Print(); ret = -1; } Py_XDECREF(a); Py_XDECREF(b); Py_XDECREF(args); Py_XDECREF(result); MYDB_END_BLOCK_THREADS; return ret; } static PyObject* DBEnv_rep_set_transport(DBEnvObject* self, PyObject* args) { int err; int envid; PyObject *rep_transport; if (!PyArg_ParseTuple(args, "iO:rep_set_transport", &envid, &rep_transport)) return NULL; CHECK_ENV_NOT_CLOSED(self); if (!PyCallable_Check(rep_transport)) { makeTypeError("Callable", rep_transport); return NULL; } MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_set_transport(self->db_env, envid, &_DBEnv_rep_transportCallback); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); Py_DECREF(self->rep_transport); Py_INCREF(rep_transport); self->rep_transport = rep_transport; RETURN_NONE(); } static PyObject* DBEnv_rep_set_request(DBEnvObject* self, PyObject* args) { int err; unsigned int minimum, maximum; if (!PyArg_ParseTuple(args,"II:rep_set_request", &minimum, &maximum)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_set_request(self->db_env, minimum, maximum); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_rep_get_request(DBEnvObject* self) { int err; u_int32_t minimum, maximum; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_get_request(self->db_env, &minimum, &maximum); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return Py_BuildValue("II", minimum, maximum); } static PyObject* DBEnv_rep_set_limit(DBEnvObject* self, PyObject* args) { int err; int limit; if (!PyArg_ParseTuple(args,"i:rep_set_limit", &limit)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_set_limit(self->db_env, 0, limit); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_rep_get_limit(DBEnvObject* self) { int err; u_int32_t gbytes, bytes; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_get_limit(self->db_env, &gbytes, &bytes); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(bytes); } static PyObject* DBEnv_rep_set_config(DBEnvObject* self, PyObject* args) { int err; int which; int onoff; if (!PyArg_ParseTuple(args,"ii:rep_set_config", &which, &onoff)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_set_config(self->db_env, which, onoff); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_rep_get_config(DBEnvObject* self, PyObject* args) { int err; int which; int onoff; if (!PyArg_ParseTuple(args, "i:rep_get_config", &which)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_get_config(self->db_env, which, &onoff); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return PyBool_FromLong(onoff); } static PyObject* DBEnv_rep_elect(DBEnvObject* self, PyObject* args) { int err; u_int32_t nsites, nvotes; if (!PyArg_ParseTuple(args, "II:rep_elect", &nsites, &nvotes)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_elect(self->db_env, nsites, nvotes, 0); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_rep_start(DBEnvObject* self, PyObject* args, PyObject* kwargs) { int err; PyObject 
*cdata_py = Py_None; DBT cdata; int flags; static char* kwnames[] = {"flags","cdata", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "i|O:rep_start", kwnames, &flags, &cdata_py)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); if (!make_dbt(cdata_py, &cdata)) return NULL; MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_start(self->db_env, cdata.size ? &cdata : NULL, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_rep_sync(DBEnvObject* self) { int err; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_sync(self->db_env, 0); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_rep_set_nsites(DBEnvObject* self, PyObject* args) { int err; int nsites; if (!PyArg_ParseTuple(args, "i:rep_set_nsites", &nsites)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_set_nsites(self->db_env, nsites); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_rep_get_nsites(DBEnvObject* self) { int err; u_int32_t nsites; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_get_nsites(self->db_env, &nsites); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(nsites); } static PyObject* DBEnv_rep_set_priority(DBEnvObject* self, PyObject* args) { int err; int priority; if (!PyArg_ParseTuple(args, "i:rep_set_priority", &priority)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_set_priority(self->db_env, priority); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_rep_get_priority(DBEnvObject* self) { int err; u_int32_t priority; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_get_priority(self->db_env, &priority); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(priority); } static PyObject* DBEnv_rep_set_timeout(DBEnvObject* self, PyObject* args) { int err; int which, timeout; if (!PyArg_ParseTuple(args, "ii:rep_set_timeout", &which, &timeout)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_set_timeout(self->db_env, which, timeout); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_rep_get_timeout(DBEnvObject* self, PyObject* args) { int err; int which; u_int32_t timeout; if (!PyArg_ParseTuple(args, "i:rep_get_timeout", &which)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_get_timeout(self->db_env, which, &timeout); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(timeout); } static PyObject* DBEnv_rep_set_clockskew(DBEnvObject* self, PyObject* args) { int err; unsigned int fast, slow; if (!PyArg_ParseTuple(args,"II:rep_set_clockskew", &fast, &slow)) return NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_set_clockskew(self->db_env, fast, slow); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_rep_get_clockskew(DBEnvObject* self) { int err; unsigned int fast, slow; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_get_clockskew(self->db_env, &fast, &slow); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return Py_BuildValue("(II)", fast, slow); } static PyObject* DBEnv_rep_stat_print(DBEnvObject* self, PyObject* args, PyObject *kwargs) { int err; int flags=0; static char* kwnames[] = { "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, 
"|i:rep_stat_print", kwnames, &flags)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_stat_print(self->db_env, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_rep_stat(DBEnvObject* self, PyObject* args, PyObject *kwargs) { int err; int flags=0; DB_REP_STAT *statp; PyObject *stats; static char* kwnames[] = { "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:rep_stat", kwnames, &flags)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->rep_stat(self->db_env, &statp, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); stats=PyDict_New(); if (stats == NULL) { free(statp); return NULL; } #define MAKE_ENTRY(name) _addIntToDict(stats, #name, statp->st_##name) #define MAKE_DB_LSN_ENTRY(name) _addDB_lsnToDict(stats , #name, statp->st_##name) MAKE_ENTRY(bulk_fills); MAKE_ENTRY(bulk_overflows); MAKE_ENTRY(bulk_records); MAKE_ENTRY(bulk_transfers); MAKE_ENTRY(client_rerequests); MAKE_ENTRY(client_svc_miss); MAKE_ENTRY(client_svc_req); MAKE_ENTRY(dupmasters); MAKE_ENTRY(egen); MAKE_ENTRY(election_nvotes); MAKE_ENTRY(startup_complete); MAKE_ENTRY(pg_duplicated); MAKE_ENTRY(pg_records); MAKE_ENTRY(pg_requested); MAKE_ENTRY(next_pg); MAKE_ENTRY(waiting_pg); MAKE_ENTRY(election_cur_winner); MAKE_ENTRY(election_gen); MAKE_DB_LSN_ENTRY(election_lsn); MAKE_ENTRY(election_nsites); MAKE_ENTRY(election_priority); MAKE_ENTRY(election_sec); MAKE_ENTRY(election_usec); MAKE_ENTRY(election_status); MAKE_ENTRY(election_tiebreaker); MAKE_ENTRY(election_votes); MAKE_ENTRY(elections); MAKE_ENTRY(elections_won); MAKE_ENTRY(env_id); MAKE_ENTRY(env_priority); MAKE_ENTRY(gen); MAKE_ENTRY(log_duplicated); MAKE_ENTRY(log_queued); MAKE_ENTRY(log_queued_max); MAKE_ENTRY(log_queued_total); MAKE_ENTRY(log_records); MAKE_ENTRY(log_requested); MAKE_ENTRY(master); MAKE_ENTRY(master_changes); MAKE_ENTRY(max_lease_sec); MAKE_ENTRY(max_lease_usec); MAKE_DB_LSN_ENTRY(max_perm_lsn); MAKE_ENTRY(msgs_badgen); MAKE_ENTRY(msgs_processed); MAKE_ENTRY(msgs_recover); MAKE_ENTRY(msgs_send_failures); MAKE_ENTRY(msgs_sent); MAKE_ENTRY(newsites); MAKE_DB_LSN_ENTRY(next_lsn); MAKE_ENTRY(nsites); MAKE_ENTRY(nthrottles); MAKE_ENTRY(outdated); MAKE_ENTRY(startsync_delayed); MAKE_ENTRY(status); MAKE_ENTRY(txns_applied); MAKE_DB_LSN_ENTRY(waiting_lsn); #undef MAKE_DB_LSN_ENTRY #undef MAKE_ENTRY free(statp); return stats; } /* --------------------------------------------------------------------- */ /* REPLICATION METHODS: Replication Manager */ static PyObject* DBEnv_repmgr_start(DBEnvObject* self, PyObject* args, PyObject* kwargs) { int err; int nthreads, flags; static char* kwnames[] = {"nthreads","flags", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "ii:repmgr_start", kwnames, &nthreads, &flags)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->repmgr_start(self->db_env, nthreads, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } #if (DBVER < 52) static PyObject* DBEnv_repmgr_set_local_site(DBEnvObject* self, PyObject* args, PyObject* kwargs) { int err; char *host; int port; int flags = 0; static char* kwnames[] = {"host", "port", "flags", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "si|i:repmgr_set_local_site", kwnames, &host, &port, &flags)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->repmgr_set_local_site(self->db_env, host, port, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); 
RETURN_NONE(); } static PyObject* DBEnv_repmgr_add_remote_site(DBEnvObject* self, PyObject* args, PyObject* kwargs) { int err; char *host; int port; int flags = 0; int eidp; static char* kwnames[] = {"host", "port", "flags", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "si|i:repmgr_add_remote_site", kwnames, &host, &port, &flags)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->repmgr_add_remote_site(self->db_env, host, port, &eidp, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(eidp); } #endif static PyObject* DBEnv_repmgr_set_ack_policy(DBEnvObject* self, PyObject* args) { int err; int ack_policy; if (!PyArg_ParseTuple(args, "i:repmgr_set_ack_policy", &ack_policy)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->repmgr_set_ack_policy(self->db_env, ack_policy); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_repmgr_get_ack_policy(DBEnvObject* self) { int err; int ack_policy; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->repmgr_get_ack_policy(self->db_env, &ack_policy); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); return NUMBER_FromLong(ack_policy); } static PyObject* DBEnv_repmgr_site_list(DBEnvObject* self) { int err; unsigned int countp; DB_REPMGR_SITE *listp; PyObject *stats, *key, *tuple; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->repmgr_site_list(self->db_env, &countp, &listp); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); stats=PyDict_New(); if (stats == NULL) { free(listp); return NULL; } for(;countp--;) { key=NUMBER_FromLong(listp[countp].eid); if(!key) { Py_DECREF(stats); free(listp); return NULL; } tuple=Py_BuildValue("(sII)", listp[countp].host, listp[countp].port, listp[countp].status); if(!tuple) { Py_DECREF(key); Py_DECREF(stats); free(listp); return NULL; } if(PyDict_SetItem(stats, key, tuple)) { Py_DECREF(key); Py_DECREF(tuple); Py_DECREF(stats); free(listp); return NULL; } Py_DECREF(key); Py_DECREF(tuple); } free(listp); return stats; } static PyObject* DBEnv_repmgr_stat_print(DBEnvObject* self, PyObject* args, PyObject *kwargs) { int err; int flags=0; static char* kwnames[] = { "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:repmgr_stat_print", kwnames, &flags)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->repmgr_stat_print(self->db_env, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBEnv_repmgr_stat(DBEnvObject* self, PyObject* args, PyObject *kwargs) { int err; int flags=0; DB_REPMGR_STAT *statp; PyObject *stats; static char* kwnames[] = { "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:repmgr_stat", kwnames, &flags)) { return NULL; } CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->db_env->repmgr_stat(self->db_env, &statp, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); stats=PyDict_New(); if (stats == NULL) { free(statp); return NULL; } #define MAKE_ENTRY(name) _addIntToDict(stats, #name, statp->st_##name) MAKE_ENTRY(perm_failed); MAKE_ENTRY(msgs_queued); MAKE_ENTRY(msgs_dropped); MAKE_ENTRY(connection_drop); MAKE_ENTRY(connect_fail); #undef MAKE_ENTRY free(statp); return stats; } /* --------------------------------------------------------------------- */ /* DBTxn methods */ static void _close_transaction_cursors(DBTxnObject* txn) { PyObject *dummy; while(txn->children_cursors) { PyErr_Warn(PyExc_RuntimeWarning, "Must close 
cursors before resolving a transaction."); dummy=DBC_close_internal(txn->children_cursors); Py_XDECREF(dummy); } } static void _promote_transaction_dbs_and_sequences(DBTxnObject *txn) { DBObject *db; DBSequenceObject *dbs; while (txn->children_dbs) { db=txn->children_dbs; EXTRACT_FROM_DOUBLE_LINKED_LIST_TXN(db); if (txn->parent_txn) { INSERT_IN_DOUBLE_LINKED_LIST_TXN(txn->parent_txn->children_dbs,db); db->txn=txn->parent_txn; } else { /* The db is already linked to its environment, ** so nothing to do. */ db->txn=NULL; } } while (txn->children_sequences) { dbs=txn->children_sequences; EXTRACT_FROM_DOUBLE_LINKED_LIST_TXN(dbs); if (txn->parent_txn) { INSERT_IN_DOUBLE_LINKED_LIST_TXN(txn->parent_txn->children_sequences,dbs); dbs->txn=txn->parent_txn; } else { /* The sequence is already linked to its ** parent db. Nothing to do. */ dbs->txn=NULL; } } } static PyObject* DBTxn_commit(DBTxnObject* self, PyObject* args) { int flags=0, err; DB_TXN *txn; if (!PyArg_ParseTuple(args, "|i:commit", &flags)) return NULL; _close_transaction_cursors(self); if (!self->txn) { PyObject *t = Py_BuildValue("(is)", 0, "DBTxn must not be used " "after txn_commit, txn_abort " "or txn_discard"); if (t) { PyErr_SetObject(DBError, t); Py_DECREF(t); } return NULL; } self->flag_prepare=0; txn = self->txn; self->txn = NULL; /* this DB_TXN is no longer valid after this call */ EXTRACT_FROM_DOUBLE_LINKED_LIST(self); MYDB_BEGIN_ALLOW_THREADS; err = txn->commit(txn, flags); MYDB_END_ALLOW_THREADS; _promote_transaction_dbs_and_sequences(self); RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBTxn_prepare(DBTxnObject* self, PyObject* args) { int err; char* gid=NULL; int gid_size=0; if (!PyArg_ParseTuple(args, "s#:prepare", &gid, &gid_size)) return NULL; if (gid_size != DB_GID_SIZE) { PyErr_SetString(PyExc_TypeError, "gid must be DB_GID_SIZE bytes long"); return NULL; } if (!self->txn) { PyObject *t = Py_BuildValue("(is)", 0,"DBTxn must not be used " "after txn_commit, txn_abort " "or txn_discard"); if (t) { PyErr_SetObject(DBError, t); Py_DECREF(t); } return NULL; } self->flag_prepare=1; /* Prepare state */ MYDB_BEGIN_ALLOW_THREADS; err = self->txn->prepare(self->txn, (u_int8_t*)gid); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBTxn_abort_discard_internal(DBTxnObject* self, int discard) { PyObject *dummy; int err=0; DB_TXN *txn; if (!self->txn) { PyObject *t = Py_BuildValue("(is)", 0, "DBTxn must not be used " "after txn_commit, txn_abort " "or txn_discard"); if (t) { PyErr_SetObject(DBError, t); Py_DECREF(t); } return NULL; } txn = self->txn; self->txn = NULL; /* this DB_TXN is no longer valid after this call */ _close_transaction_cursors(self); while (self->children_sequences) { dummy=DBSequence_close_internal(self->children_sequences,0,0); Py_XDECREF(dummy); } while (self->children_dbs) { dummy=DB_close_internal(self->children_dbs, 0, 0); Py_XDECREF(dummy); } EXTRACT_FROM_DOUBLE_LINKED_LIST(self); MYDB_BEGIN_ALLOW_THREADS; if (discard) { assert(!self->flag_prepare); err = txn->discard(txn,0); } else { /* ** If the transaction is in the "prepare" or "recover" state, ** we had better not abort it implicitly.
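** A prepared transaction is merely released here; it must be resolved
** (committed or aborted) later through the handles returned by
** DB_ENV->txn_recover().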
*/ if (!self->flag_prepare) { err = txn->abort(txn); } } MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBTxn_abort(DBTxnObject* self) { self->flag_prepare=0; _close_transaction_cursors(self); return DBTxn_abort_discard_internal(self,0); } static PyObject* DBTxn_discard(DBTxnObject* self) { self->flag_prepare=0; _close_transaction_cursors(self); return DBTxn_abort_discard_internal(self,1); } static PyObject* DBTxn_id(DBTxnObject* self) { int id; if (!self->txn) { PyObject *t = Py_BuildValue("(is)", 0, "DBTxn must not be used " "after txn_commit, txn_abort " "or txn_discard"); if (t) { PyErr_SetObject(DBError, t); Py_DECREF(t); } return NULL; } MYDB_BEGIN_ALLOW_THREADS; id = self->txn->id(self->txn); MYDB_END_ALLOW_THREADS; return NUMBER_FromLong(id); } static PyObject* DBTxn_set_timeout(DBTxnObject* self, PyObject* args, PyObject* kwargs) { int err; u_int32_t flags=0; u_int32_t timeout = 0; static char* kwnames[] = { "timeout", "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "ii:set_timeout", kwnames, &timeout, &flags)) { return NULL; } MYDB_BEGIN_ALLOW_THREADS; err = self->txn->set_timeout(self->txn, (db_timeout_t)timeout, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBTxn_set_name(DBTxnObject* self, PyObject* args) { int err; const char *name; if (!PyArg_ParseTuple(args, "s:set_name", &name)) return NULL; MYDB_BEGIN_ALLOW_THREADS; err = self->txn->set_name(self->txn, name); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBTxn_get_name(DBTxnObject* self) { int err; const char *name; MYDB_BEGIN_ALLOW_THREADS; err = self->txn->get_name(self->txn, &name); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); #if (PY_VERSION_HEX < 0x03000000) if (!name) { return PyString_FromString(""); } return PyString_FromString(name); #else if (!name) { return PyUnicode_FromString(""); } return PyUnicode_FromString(name); #endif } /* --------------------------------------------------------------------- */ /* DBSequence methods */ static PyObject* DBSequence_close_internal(DBSequenceObject* self, int flags, int do_not_close) { int err=0; if (self->sequence!=NULL) { EXTRACT_FROM_DOUBLE_LINKED_LIST(self); if (self->txn) { EXTRACT_FROM_DOUBLE_LINKED_LIST_TXN(self); self->txn=NULL; } /* ** "do_not_close" is used to dispose all related objects in the ** tree, without actually releasing the "root" object. ** This is done, for example, because function calls like ** "DBSequence.remove()" implicitly close the underlying handle. So ** the handle doesn't need to be closed, but related objects ** must be cleaned up. 
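** From Python this means, for example, that after "seq.remove()" the
** "seq" object is already closed and any further method call on it is
** rejected (illustrative note, not an additional code path).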
*/ if (!do_not_close) { MYDB_BEGIN_ALLOW_THREADS err = self->sequence->close(self->sequence, flags); MYDB_END_ALLOW_THREADS } self->sequence = NULL; RETURN_IF_ERR(); } RETURN_NONE(); } static PyObject* DBSequence_close(DBSequenceObject* self, PyObject* args) { int flags=0; if (!PyArg_ParseTuple(args,"|i:close", &flags)) return NULL; return DBSequence_close_internal(self,flags,0); } static PyObject* DBSequence_get(DBSequenceObject* self, PyObject* args, PyObject* kwargs) { int err, flags = 0; #if (DBVER >= 60) unsigned #endif int delta = 1; db_seq_t value; PyObject *txnobj = NULL; DB_TXN *txn = NULL; static char* kwnames[] = {"delta", "txn", "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, #if (DBVER >=60) "|IOi:get", #else "|iOi:get", #endif kwnames, &delta, &txnobj, &flags)) return NULL; CHECK_SEQUENCE_NOT_CLOSED(self) if (!checkTxnObj(txnobj, &txn)) return NULL; MYDB_BEGIN_ALLOW_THREADS err = self->sequence->get(self->sequence, txn, delta, &value, flags); MYDB_END_ALLOW_THREADS RETURN_IF_ERR(); return PyLong_FromLongLong(value); } static PyObject* DBSequence_get_dbp(DBSequenceObject* self) { CHECK_SEQUENCE_NOT_CLOSED(self) Py_INCREF(self->mydb); return (PyObject* )self->mydb; } static PyObject* DBSequence_get_key(DBSequenceObject* self) { int err; DBT key; PyObject *retval = NULL; key.flags = DB_DBT_MALLOC; CHECK_SEQUENCE_NOT_CLOSED(self) MYDB_BEGIN_ALLOW_THREADS err = self->sequence->get_key(self->sequence, &key); MYDB_END_ALLOW_THREADS if (!err) retval = Build_PyString(key.data, key.size); FREE_DBT(key); RETURN_IF_ERR(); return retval; } static PyObject* DBSequence_initial_value(DBSequenceObject* self, PyObject* args) { int err; PY_LONG_LONG value; db_seq_t value2; if (!PyArg_ParseTuple(args,"L:initial_value", &value)) return NULL; CHECK_SEQUENCE_NOT_CLOSED(self) value2=value; /* If truncation, compiler should show a warning */ MYDB_BEGIN_ALLOW_THREADS err = self->sequence->initial_value(self->sequence, value2); MYDB_END_ALLOW_THREADS RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBSequence_open(DBSequenceObject* self, PyObject* args, PyObject* kwargs) { int err, flags = 0; PyObject* keyobj; PyObject *txnobj = NULL; DB_TXN *txn = NULL; DBT key; static char* kwnames[] = {"key", "txn", "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|Oi:open", kwnames, &keyobj, &txnobj, &flags)) return NULL; if (!checkTxnObj(txnobj, &txn)) return NULL; if (!make_key_dbt(self->mydb, keyobj, &key, NULL)) return NULL; MYDB_BEGIN_ALLOW_THREADS err = self->sequence->open(self->sequence, txn, &key, flags); MYDB_END_ALLOW_THREADS FREE_DBT(key); RETURN_IF_ERR(); if (txn) { INSERT_IN_DOUBLE_LINKED_LIST_TXN(((DBTxnObject *)txnobj)->children_sequences,self); self->txn=(DBTxnObject *)txnobj; } RETURN_NONE(); } static PyObject* DBSequence_remove(DBSequenceObject* self, PyObject* args, PyObject* kwargs) { PyObject *dummy; int err, flags = 0; PyObject *txnobj = NULL; DB_TXN *txn = NULL; static char* kwnames[] = {"txn", "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|Oi:remove", kwnames, &txnobj, &flags)) return NULL; if (!checkTxnObj(txnobj, &txn)) return NULL; CHECK_SEQUENCE_NOT_CLOSED(self) MYDB_BEGIN_ALLOW_THREADS err = self->sequence->remove(self->sequence, txn, flags); MYDB_END_ALLOW_THREADS dummy=DBSequence_close_internal(self,flags,1); Py_XDECREF(dummy); RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBSequence_set_cachesize(DBSequenceObject* self, PyObject* args) { int err; #if (DBVER >= 60) unsigned #endif int size; if (!PyArg_ParseTuple(args, #if 
(DBVER >= 60) "I:set_cachesize", #else "i:set_cachesize", #endif &size)) return NULL; CHECK_SEQUENCE_NOT_CLOSED(self) MYDB_BEGIN_ALLOW_THREADS err = self->sequence->set_cachesize(self->sequence, size); MYDB_END_ALLOW_THREADS RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBSequence_get_cachesize(DBSequenceObject* self) { int err; #if (DBVER >= 60) unsigned #endif int size; CHECK_SEQUENCE_NOT_CLOSED(self) MYDB_BEGIN_ALLOW_THREADS err = self->sequence->get_cachesize(self->sequence, &size); MYDB_END_ALLOW_THREADS RETURN_IF_ERR(); return NUMBER_FromLong(size); } static PyObject* DBSequence_set_flags(DBSequenceObject* self, PyObject* args) { int err, flags = 0; if (!PyArg_ParseTuple(args,"i:set_flags", &flags)) return NULL; CHECK_SEQUENCE_NOT_CLOSED(self) MYDB_BEGIN_ALLOW_THREADS err = self->sequence->set_flags(self->sequence, flags); MYDB_END_ALLOW_THREADS RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBSequence_get_flags(DBSequenceObject* self) { unsigned int flags; int err; CHECK_SEQUENCE_NOT_CLOSED(self) MYDB_BEGIN_ALLOW_THREADS err = self->sequence->get_flags(self->sequence, &flags); MYDB_END_ALLOW_THREADS RETURN_IF_ERR(); return NUMBER_FromLong((int)flags); } static PyObject* DBSequence_set_range(DBSequenceObject* self, PyObject* args) { int err; PY_LONG_LONG min, max; db_seq_t min2, max2; if (!PyArg_ParseTuple(args,"(LL):set_range", &min, &max)) return NULL; CHECK_SEQUENCE_NOT_CLOSED(self) min2=min; /* If truncation, compiler should show a warning */ max2=max; MYDB_BEGIN_ALLOW_THREADS err = self->sequence->set_range(self->sequence, min2, max2); MYDB_END_ALLOW_THREADS RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBSequence_get_range(DBSequenceObject* self) { int err; PY_LONG_LONG min, max; db_seq_t min2, max2; CHECK_SEQUENCE_NOT_CLOSED(self) MYDB_BEGIN_ALLOW_THREADS err = self->sequence->get_range(self->sequence, &min2, &max2); MYDB_END_ALLOW_THREADS RETURN_IF_ERR(); min=min2; /* If truncation, compiler should show a warning */ max=max2; return Py_BuildValue("(LL)", min, max); } static PyObject* DBSequence_stat_print(DBSequenceObject* self, PyObject* args, PyObject *kwargs) { int err; int flags=0; static char* kwnames[] = { "flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:stat_print", kwnames, &flags)) { return NULL; } CHECK_SEQUENCE_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->sequence->stat_print(self->sequence, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); RETURN_NONE(); } static PyObject* DBSequence_stat(DBSequenceObject* self, PyObject* args, PyObject* kwargs) { int err, flags = 0; DB_SEQUENCE_STAT* sp = NULL; PyObject* dict_stat; static char* kwnames[] = {"flags", NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|i:stat", kwnames, &flags)) return NULL; CHECK_SEQUENCE_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; err = self->sequence->stat(self->sequence, &sp, flags); MYDB_END_ALLOW_THREADS; RETURN_IF_ERR(); if ((dict_stat = PyDict_New()) == NULL) { free(sp); return NULL; } #define MAKE_INT_ENTRY(name) _addIntToDict(dict_stat, #name, sp->st_##name) #if (DBVER >= 60) #define MAKE_UNSIGNED_INT_ENTRY(name) _addUnsignedIntToDict(dict_stat, #name, sp->st_##name) #endif #define MAKE_LONG_LONG_ENTRY(name) _addDb_seq_tToDict(dict_stat, #name, sp->st_##name) MAKE_INT_ENTRY(wait); MAKE_INT_ENTRY(nowait); MAKE_LONG_LONG_ENTRY(current); MAKE_LONG_LONG_ENTRY(value); MAKE_LONG_LONG_ENTRY(last_value); MAKE_LONG_LONG_ENTRY(min); MAKE_LONG_LONG_ENTRY(max); #if (DBVER >= 60) MAKE_UNSIGNED_INT_ENTRY(cache_size); #else MAKE_INT_ENTRY(cache_size); 
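/* With Berkeley DB 6.0 and later, the DB_SEQUENCE cache size (and the
** "get" delta above) are unsigned; the DBVER >= 60 branches follow
** those changed signatures. */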
#endif MAKE_INT_ENTRY(flags); #undef MAKE_INT_ENTRY #undef MAKE_UNSIGNED_INT_ENTRY #undef MAKE_LONG_LONG_ENTRY free(sp); return dict_stat; } /* --------------------------------------------------------------------- */ /* Method definition tables and type objects */ static PyMethodDef DB_methods[] = { {"append", (PyCFunction)DB_append, METH_VARARGS|METH_KEYWORDS}, {"associate", (PyCFunction)DB_associate, METH_VARARGS|METH_KEYWORDS}, {"close", (PyCFunction)DB_close, METH_VARARGS}, {"compact", (PyCFunction)DB_compact, METH_VARARGS|METH_KEYWORDS}, {"consume", (PyCFunction)DB_consume, METH_VARARGS|METH_KEYWORDS}, {"consume_wait", (PyCFunction)DB_consume_wait, METH_VARARGS|METH_KEYWORDS}, {"cursor", (PyCFunction)DB_cursor, METH_VARARGS|METH_KEYWORDS}, {"delete", (PyCFunction)DB_delete, METH_VARARGS|METH_KEYWORDS}, {"fd", (PyCFunction)DB_fd, METH_NOARGS}, {"exists", (PyCFunction)DB_exists, METH_VARARGS|METH_KEYWORDS}, {"get", (PyCFunction)DB_get, METH_VARARGS|METH_KEYWORDS}, {"pget", (PyCFunction)DB_pget, METH_VARARGS|METH_KEYWORDS}, {"get_both", (PyCFunction)DB_get_both, METH_VARARGS|METH_KEYWORDS}, {"get_byteswapped", (PyCFunction)DB_get_byteswapped,METH_NOARGS}, {"get_size", (PyCFunction)DB_get_size, METH_VARARGS|METH_KEYWORDS}, {"get_type", (PyCFunction)DB_get_type, METH_NOARGS}, {"join", (PyCFunction)DB_join, METH_VARARGS}, {"key_range", (PyCFunction)DB_key_range, METH_VARARGS|METH_KEYWORDS}, {"has_key", (PyCFunction)DB_has_key, METH_VARARGS|METH_KEYWORDS}, {"items", (PyCFunction)DB_items, METH_VARARGS}, {"keys", (PyCFunction)DB_keys, METH_VARARGS}, {"open", (PyCFunction)DB_open, METH_VARARGS|METH_KEYWORDS}, {"put", (PyCFunction)DB_put, METH_VARARGS|METH_KEYWORDS}, {"remove", (PyCFunction)DB_remove, METH_VARARGS|METH_KEYWORDS}, {"rename", (PyCFunction)DB_rename, METH_VARARGS}, {"set_bt_minkey", (PyCFunction)DB_set_bt_minkey, METH_VARARGS}, {"get_bt_minkey", (PyCFunction)DB_get_bt_minkey, METH_NOARGS}, {"set_bt_compare", (PyCFunction)DB_set_bt_compare, METH_O}, {"set_cachesize", (PyCFunction)DB_set_cachesize, METH_VARARGS}, {"get_cachesize", (PyCFunction)DB_get_cachesize, METH_NOARGS}, {"set_dup_compare", (PyCFunction)DB_set_dup_compare, METH_O}, {"set_encrypt", (PyCFunction)DB_set_encrypt, METH_VARARGS|METH_KEYWORDS}, {"get_encrypt_flags", (PyCFunction)DB_get_encrypt_flags, METH_NOARGS}, {"set_flags", (PyCFunction)DB_set_flags, METH_VARARGS}, {"get_flags", (PyCFunction)DB_get_flags, METH_NOARGS}, {"get_transactional", (PyCFunction)DB_get_transactional, METH_NOARGS}, {"set_h_ffactor", (PyCFunction)DB_set_h_ffactor, METH_VARARGS}, {"get_h_ffactor", (PyCFunction)DB_get_h_ffactor, METH_NOARGS}, {"set_h_nelem", (PyCFunction)DB_set_h_nelem, METH_VARARGS}, {"get_h_nelem", (PyCFunction)DB_get_h_nelem, METH_NOARGS}, {"set_lorder", (PyCFunction)DB_set_lorder, METH_VARARGS}, {"get_lorder", (PyCFunction)DB_get_lorder, METH_NOARGS}, {"set_pagesize", (PyCFunction)DB_set_pagesize, METH_VARARGS}, {"get_pagesize", (PyCFunction)DB_get_pagesize, METH_NOARGS}, {"set_re_delim", (PyCFunction)DB_set_re_delim, METH_VARARGS}, {"get_re_delim", (PyCFunction)DB_get_re_delim, METH_NOARGS}, {"set_re_len", (PyCFunction)DB_set_re_len, METH_VARARGS}, {"get_re_len", (PyCFunction)DB_get_re_len, METH_NOARGS}, {"set_re_pad", (PyCFunction)DB_set_re_pad, METH_VARARGS}, {"get_re_pad", (PyCFunction)DB_get_re_pad, METH_NOARGS}, {"set_re_source", (PyCFunction)DB_set_re_source, METH_VARARGS}, {"get_re_source", (PyCFunction)DB_get_re_source, METH_NOARGS}, {"set_q_extentsize",(PyCFunction)DB_set_q_extentsize, METH_VARARGS}, 
{"get_q_extentsize",(PyCFunction)DB_get_q_extentsize, METH_NOARGS}, {"set_private", (PyCFunction)DB_set_private, METH_O}, {"get_private", (PyCFunction)DB_get_private, METH_NOARGS}, {"set_priority", (PyCFunction)DB_set_priority, METH_VARARGS}, {"get_priority", (PyCFunction)DB_get_priority, METH_NOARGS}, {"get_dbname", (PyCFunction)DB_get_dbname, METH_NOARGS}, {"get_open_flags", (PyCFunction)DB_get_open_flags, METH_NOARGS}, {"stat", (PyCFunction)DB_stat, METH_VARARGS|METH_KEYWORDS}, {"stat_print", (PyCFunction)DB_stat_print, METH_VARARGS|METH_KEYWORDS}, {"sync", (PyCFunction)DB_sync, METH_VARARGS}, {"truncate", (PyCFunction)DB_truncate, METH_VARARGS|METH_KEYWORDS}, {"type", (PyCFunction)DB_get_type, METH_NOARGS}, {"upgrade", (PyCFunction)DB_upgrade, METH_VARARGS}, {"values", (PyCFunction)DB_values, METH_VARARGS}, {"verify", (PyCFunction)DB_verify, METH_VARARGS|METH_KEYWORDS}, {"set_get_returns_none",(PyCFunction)DB_set_get_returns_none, METH_VARARGS}, {NULL, NULL} /* sentinel */ }; /* We need this to support __contains__() */ static PySequenceMethods DB_sequence = { 0, /* sq_length, mapping wins here */ 0, /* sq_concat */ 0, /* sq_repeat */ 0, /* sq_item */ 0, /* sq_slice */ 0, /* sq_ass_item */ 0, /* sq_ass_slice */ (objobjproc)DB_contains, /* sq_contains */ 0, /* sq_inplace_concat */ 0, /* sq_inplace_repeat */ }; static PyMappingMethods DB_mapping = { DB_length, /*mp_length*/ (binaryfunc)DB_subscript, /*mp_subscript*/ (objobjargproc)DB_ass_sub, /*mp_ass_subscript*/ }; static PyMethodDef DBCursor_methods[] = { {"close", (PyCFunction)DBC_close, METH_NOARGS}, {"count", (PyCFunction)DBC_count, METH_VARARGS}, {"current", (PyCFunction)DBC_current, METH_VARARGS|METH_KEYWORDS}, {"delete", (PyCFunction)DBC_delete, METH_VARARGS}, {"dup", (PyCFunction)DBC_dup, METH_VARARGS}, {"first", (PyCFunction)DBC_first, METH_VARARGS|METH_KEYWORDS}, {"get", (PyCFunction)DBC_get, METH_VARARGS|METH_KEYWORDS}, {"pget", (PyCFunction)DBC_pget, METH_VARARGS|METH_KEYWORDS}, {"get_recno", (PyCFunction)DBC_get_recno, METH_NOARGS}, {"last", (PyCFunction)DBC_last, METH_VARARGS|METH_KEYWORDS}, {"next", (PyCFunction)DBC_next, METH_VARARGS|METH_KEYWORDS}, {"prev", (PyCFunction)DBC_prev, METH_VARARGS|METH_KEYWORDS}, {"put", (PyCFunction)DBC_put, METH_VARARGS|METH_KEYWORDS}, {"set", (PyCFunction)DBC_set, METH_VARARGS|METH_KEYWORDS}, {"set_range", (PyCFunction)DBC_set_range, METH_VARARGS|METH_KEYWORDS}, {"get_both", (PyCFunction)DBC_get_both, METH_VARARGS}, {"get_current_size",(PyCFunction)DBC_get_current_size, METH_NOARGS}, {"set_both", (PyCFunction)DBC_set_both, METH_VARARGS}, {"set_recno", (PyCFunction)DBC_set_recno, METH_VARARGS|METH_KEYWORDS}, {"consume", (PyCFunction)DBC_consume, METH_VARARGS|METH_KEYWORDS}, {"next_dup", (PyCFunction)DBC_next_dup, METH_VARARGS|METH_KEYWORDS}, {"next_nodup", (PyCFunction)DBC_next_nodup, METH_VARARGS|METH_KEYWORDS}, {"prev_dup", (PyCFunction)DBC_prev_dup, METH_VARARGS|METH_KEYWORDS}, {"prev_nodup", (PyCFunction)DBC_prev_nodup, METH_VARARGS|METH_KEYWORDS}, {"join_item", (PyCFunction)DBC_join_item, METH_VARARGS}, {"set_priority", (PyCFunction)DBC_set_priority, METH_VARARGS|METH_KEYWORDS}, {"get_priority", (PyCFunction)DBC_get_priority, METH_NOARGS}, {NULL, NULL} /* sentinel */ }; static PyMethodDef DBLogCursor_methods[] = { {"close", (PyCFunction)DBLogCursor_close, METH_NOARGS}, {"current", (PyCFunction)DBLogCursor_current, METH_NOARGS}, {"first", (PyCFunction)DBLogCursor_first, METH_NOARGS}, {"last", (PyCFunction)DBLogCursor_last, METH_NOARGS}, {"next", (PyCFunction)DBLogCursor_next, 
METH_NOARGS}, {"prev", (PyCFunction)DBLogCursor_prev, METH_NOARGS}, {"set", (PyCFunction)DBLogCursor_set, METH_VARARGS}, {NULL, NULL} /* sentinel */ }; #if (DBVER >= 52) static PyMethodDef DBSite_methods[] = { {"get_config", (PyCFunction)DBSite_get_config, METH_VARARGS | METH_KEYWORDS}, {"set_config", (PyCFunction)DBSite_set_config, METH_VARARGS | METH_KEYWORDS}, {"remove", (PyCFunction)DBSite_remove, METH_NOARGS}, {"get_eid", (PyCFunction)DBSite_get_eid, METH_NOARGS}, {"get_address", (PyCFunction)DBSite_get_address, METH_NOARGS}, {"close", (PyCFunction)DBSite_close, METH_NOARGS}, {NULL, NULL} /* sentinel */ }; #endif static PyMethodDef DBEnv_methods[] = { {"close", (PyCFunction)DBEnv_close, METH_VARARGS}, {"open", (PyCFunction)DBEnv_open, METH_VARARGS}, {"remove", (PyCFunction)DBEnv_remove, METH_VARARGS}, {"dbremove", (PyCFunction)DBEnv_dbremove, METH_VARARGS|METH_KEYWORDS}, {"dbrename", (PyCFunction)DBEnv_dbrename, METH_VARARGS|METH_KEYWORDS}, {"set_thread_count", (PyCFunction)DBEnv_set_thread_count, METH_VARARGS}, {"get_thread_count", (PyCFunction)DBEnv_get_thread_count, METH_NOARGS}, {"set_encrypt", (PyCFunction)DBEnv_set_encrypt, METH_VARARGS|METH_KEYWORDS}, {"get_encrypt_flags", (PyCFunction)DBEnv_get_encrypt_flags, METH_NOARGS}, {"get_timeout", (PyCFunction)DBEnv_get_timeout, METH_VARARGS|METH_KEYWORDS}, {"set_timeout", (PyCFunction)DBEnv_set_timeout, METH_VARARGS|METH_KEYWORDS}, {"set_shm_key", (PyCFunction)DBEnv_set_shm_key, METH_VARARGS}, {"get_shm_key", (PyCFunction)DBEnv_get_shm_key, METH_NOARGS}, {"set_cache_max", (PyCFunction)DBEnv_set_cache_max, METH_VARARGS}, {"get_cache_max", (PyCFunction)DBEnv_get_cache_max, METH_NOARGS}, {"set_cachesize", (PyCFunction)DBEnv_set_cachesize, METH_VARARGS}, {"get_cachesize", (PyCFunction)DBEnv_get_cachesize, METH_NOARGS}, {"memp_trickle", (PyCFunction)DBEnv_memp_trickle, METH_VARARGS}, {"memp_sync", (PyCFunction)DBEnv_memp_sync, METH_VARARGS}, {"memp_stat", (PyCFunction)DBEnv_memp_stat, METH_VARARGS|METH_KEYWORDS}, {"memp_stat_print", (PyCFunction)DBEnv_memp_stat_print, METH_VARARGS|METH_KEYWORDS}, {"mutex_set_max", (PyCFunction)DBEnv_mutex_set_max, METH_VARARGS}, {"mutex_get_max", (PyCFunction)DBEnv_mutex_get_max, METH_NOARGS}, {"mutex_set_align", (PyCFunction)DBEnv_mutex_set_align, METH_VARARGS}, {"mutex_get_align", (PyCFunction)DBEnv_mutex_get_align, METH_NOARGS}, {"mutex_set_increment", (PyCFunction)DBEnv_mutex_set_increment, METH_VARARGS}, {"mutex_get_increment", (PyCFunction)DBEnv_mutex_get_increment, METH_NOARGS}, {"mutex_set_tas_spins", (PyCFunction)DBEnv_mutex_set_tas_spins, METH_VARARGS}, {"mutex_get_tas_spins", (PyCFunction)DBEnv_mutex_get_tas_spins, METH_NOARGS}, {"mutex_stat", (PyCFunction)DBEnv_mutex_stat, METH_VARARGS}, {"mutex_stat_print", (PyCFunction)DBEnv_mutex_stat_print, METH_VARARGS|METH_KEYWORDS}, {"set_data_dir", (PyCFunction)DBEnv_set_data_dir, METH_VARARGS}, {"get_data_dirs", (PyCFunction)DBEnv_get_data_dirs, METH_NOARGS}, {"get_flags", (PyCFunction)DBEnv_get_flags, METH_NOARGS}, {"set_flags", (PyCFunction)DBEnv_set_flags, METH_VARARGS}, {"log_set_config", (PyCFunction)DBEnv_log_set_config, METH_VARARGS}, {"log_get_config", (PyCFunction)DBEnv_log_get_config, METH_VARARGS}, {"set_lg_bsize", (PyCFunction)DBEnv_set_lg_bsize, METH_VARARGS}, {"get_lg_bsize", (PyCFunction)DBEnv_get_lg_bsize, METH_NOARGS}, {"set_lg_dir", (PyCFunction)DBEnv_set_lg_dir, METH_VARARGS}, {"get_lg_dir", (PyCFunction)DBEnv_get_lg_dir, METH_NOARGS}, {"set_lg_max", (PyCFunction)DBEnv_set_lg_max, METH_VARARGS}, {"get_lg_max", 
(PyCFunction)DBEnv_get_lg_max, METH_NOARGS}, {"set_lg_regionmax",(PyCFunction)DBEnv_set_lg_regionmax, METH_VARARGS}, {"get_lg_regionmax",(PyCFunction)DBEnv_get_lg_regionmax, METH_NOARGS}, {"set_lg_filemode", (PyCFunction)DBEnv_set_lg_filemode, METH_VARARGS}, {"get_lg_filemode", (PyCFunction)DBEnv_get_lg_filemode, METH_NOARGS}, {"set_lk_partitions", (PyCFunction)DBEnv_set_lk_partitions, METH_VARARGS}, {"get_lk_partitions", (PyCFunction)DBEnv_get_lk_partitions, METH_NOARGS}, {"set_lk_detect", (PyCFunction)DBEnv_set_lk_detect, METH_VARARGS}, {"get_lk_detect", (PyCFunction)DBEnv_get_lk_detect, METH_NOARGS}, {"set_lk_max_locks", (PyCFunction)DBEnv_set_lk_max_locks, METH_VARARGS}, {"get_lk_max_locks", (PyCFunction)DBEnv_get_lk_max_locks, METH_NOARGS}, {"set_lk_max_lockers", (PyCFunction)DBEnv_set_lk_max_lockers, METH_VARARGS}, {"get_lk_max_lockers", (PyCFunction)DBEnv_get_lk_max_lockers, METH_NOARGS}, {"set_lk_max_objects", (PyCFunction)DBEnv_set_lk_max_objects, METH_VARARGS}, {"get_lk_max_objects", (PyCFunction)DBEnv_get_lk_max_objects, METH_NOARGS}, {"stat_print", (PyCFunction)DBEnv_stat_print, METH_VARARGS|METH_KEYWORDS}, {"set_mp_mmapsize", (PyCFunction)DBEnv_set_mp_mmapsize, METH_VARARGS}, {"get_mp_mmapsize", (PyCFunction)DBEnv_get_mp_mmapsize, METH_NOARGS}, {"set_tmp_dir", (PyCFunction)DBEnv_set_tmp_dir, METH_VARARGS}, {"get_tmp_dir", (PyCFunction)DBEnv_get_tmp_dir, METH_NOARGS}, {"txn_begin", (PyCFunction)DBEnv_txn_begin, METH_VARARGS|METH_KEYWORDS}, {"txn_checkpoint", (PyCFunction)DBEnv_txn_checkpoint, METH_VARARGS}, {"txn_stat", (PyCFunction)DBEnv_txn_stat, METH_VARARGS}, {"txn_stat_print", (PyCFunction)DBEnv_txn_stat_print, METH_VARARGS|METH_KEYWORDS}, {"get_tx_max", (PyCFunction)DBEnv_get_tx_max, METH_NOARGS}, {"get_tx_timestamp", (PyCFunction)DBEnv_get_tx_timestamp, METH_NOARGS}, {"set_tx_max", (PyCFunction)DBEnv_set_tx_max, METH_VARARGS}, {"set_tx_timestamp", (PyCFunction)DBEnv_set_tx_timestamp, METH_VARARGS}, {"lock_detect", (PyCFunction)DBEnv_lock_detect, METH_VARARGS}, {"lock_get", (PyCFunction)DBEnv_lock_get, METH_VARARGS}, {"lock_id", (PyCFunction)DBEnv_lock_id, METH_NOARGS}, {"lock_id_free", (PyCFunction)DBEnv_lock_id_free, METH_VARARGS}, {"lock_put", (PyCFunction)DBEnv_lock_put, METH_VARARGS}, {"lock_stat", (PyCFunction)DBEnv_lock_stat, METH_VARARGS}, {"lock_stat_print", (PyCFunction)DBEnv_lock_stat_print, METH_VARARGS|METH_KEYWORDS}, {"log_cursor", (PyCFunction)DBEnv_log_cursor, METH_NOARGS}, {"log_file", (PyCFunction)DBEnv_log_file, METH_VARARGS}, {"log_printf", (PyCFunction)DBEnv_log_printf, METH_VARARGS|METH_KEYWORDS}, {"log_archive", (PyCFunction)DBEnv_log_archive, METH_VARARGS}, {"log_flush", (PyCFunction)DBEnv_log_flush, METH_NOARGS}, {"log_stat", (PyCFunction)DBEnv_log_stat, METH_VARARGS}, {"log_stat_print", (PyCFunction)DBEnv_log_stat_print, METH_VARARGS|METH_KEYWORDS}, {"fileid_reset", (PyCFunction)DBEnv_fileid_reset, METH_VARARGS|METH_KEYWORDS}, {"lsn_reset", (PyCFunction)DBEnv_lsn_reset, METH_VARARGS|METH_KEYWORDS}, {"set_get_returns_none",(PyCFunction)DBEnv_set_get_returns_none, METH_VARARGS}, {"txn_recover", (PyCFunction)DBEnv_txn_recover, METH_NOARGS}, #if (DBVER < 48) {"set_rpc_server", (PyCFunction)DBEnv_set_rpc_server, METH_VARARGS|METH_KEYWORDS}, #endif {"set_mp_max_openfd", (PyCFunction)DBEnv_set_mp_max_openfd, METH_VARARGS}, {"get_mp_max_openfd", (PyCFunction)DBEnv_get_mp_max_openfd, METH_NOARGS}, {"set_mp_max_write", (PyCFunction)DBEnv_set_mp_max_write, METH_VARARGS}, {"get_mp_max_write", (PyCFunction)DBEnv_get_mp_max_write, METH_NOARGS},
{"set_verbose", (PyCFunction)DBEnv_set_verbose, METH_VARARGS}, {"get_verbose", (PyCFunction)DBEnv_get_verbose, METH_VARARGS}, {"set_private", (PyCFunction)DBEnv_set_private, METH_O}, {"get_private", (PyCFunction)DBEnv_get_private, METH_NOARGS}, {"get_open_flags", (PyCFunction)DBEnv_get_open_flags, METH_NOARGS}, {"set_intermediate_dir_mode", (PyCFunction)DBEnv_set_intermediate_dir_mode, METH_VARARGS}, {"get_intermediate_dir_mode", (PyCFunction)DBEnv_get_intermediate_dir_mode, METH_NOARGS}, {"rep_start", (PyCFunction)DBEnv_rep_start, METH_VARARGS|METH_KEYWORDS}, {"rep_set_transport", (PyCFunction)DBEnv_rep_set_transport, METH_VARARGS}, {"rep_process_message", (PyCFunction)DBEnv_rep_process_message, METH_VARARGS}, {"rep_elect", (PyCFunction)DBEnv_rep_elect, METH_VARARGS}, {"rep_set_config", (PyCFunction)DBEnv_rep_set_config, METH_VARARGS}, {"rep_get_config", (PyCFunction)DBEnv_rep_get_config, METH_VARARGS}, {"rep_sync", (PyCFunction)DBEnv_rep_sync, METH_NOARGS}, {"rep_set_limit", (PyCFunction)DBEnv_rep_set_limit, METH_VARARGS}, {"rep_get_limit", (PyCFunction)DBEnv_rep_get_limit, METH_NOARGS}, {"rep_set_request", (PyCFunction)DBEnv_rep_set_request, METH_VARARGS}, {"rep_get_request", (PyCFunction)DBEnv_rep_get_request, METH_NOARGS}, {"set_event_notify", (PyCFunction)DBEnv_set_event_notify, METH_O}, {"rep_set_nsites", (PyCFunction)DBEnv_rep_set_nsites, METH_VARARGS}, {"rep_get_nsites", (PyCFunction)DBEnv_rep_get_nsites, METH_NOARGS}, {"rep_set_priority", (PyCFunction)DBEnv_rep_set_priority, METH_VARARGS}, {"rep_get_priority", (PyCFunction)DBEnv_rep_get_priority, METH_NOARGS}, {"rep_set_timeout", (PyCFunction)DBEnv_rep_set_timeout, METH_VARARGS}, {"rep_get_timeout", (PyCFunction)DBEnv_rep_get_timeout, METH_VARARGS}, {"rep_set_clockskew", (PyCFunction)DBEnv_rep_set_clockskew, METH_VARARGS}, {"rep_get_clockskew", (PyCFunction)DBEnv_rep_get_clockskew, METH_VARARGS}, {"rep_stat", (PyCFunction)DBEnv_rep_stat, METH_VARARGS|METH_KEYWORDS}, {"rep_stat_print", (PyCFunction)DBEnv_rep_stat_print, METH_VARARGS|METH_KEYWORDS}, {"repmgr_start", (PyCFunction)DBEnv_repmgr_start, METH_VARARGS|METH_KEYWORDS}, #if (DBVER < 52) {"repmgr_set_local_site", (PyCFunction)DBEnv_repmgr_set_local_site, METH_VARARGS|METH_KEYWORDS}, {"repmgr_add_remote_site", (PyCFunction)DBEnv_repmgr_add_remote_site, METH_VARARGS|METH_KEYWORDS}, #endif {"repmgr_set_ack_policy", (PyCFunction)DBEnv_repmgr_set_ack_policy, METH_VARARGS}, {"repmgr_get_ack_policy", (PyCFunction)DBEnv_repmgr_get_ack_policy, METH_NOARGS}, {"repmgr_site_list", (PyCFunction)DBEnv_repmgr_site_list, METH_NOARGS}, {"repmgr_stat", (PyCFunction)DBEnv_repmgr_stat, METH_VARARGS|METH_KEYWORDS}, {"repmgr_stat_print", (PyCFunction)DBEnv_repmgr_stat_print, METH_VARARGS|METH_KEYWORDS}, #if (DBVER >= 52) {"repmgr_site", (PyCFunction)DBEnv_repmgr_site, METH_VARARGS | METH_KEYWORDS}, {"repmgr_site_by_eid", (PyCFunction)DBEnv_repmgr_site_by_eid, METH_VARARGS | METH_KEYWORDS}, #endif {NULL, NULL} /* sentinel */ }; static PyMethodDef DBTxn_methods[] = { {"commit", (PyCFunction)DBTxn_commit, METH_VARARGS}, {"prepare", (PyCFunction)DBTxn_prepare, METH_VARARGS}, {"discard", (PyCFunction)DBTxn_discard, METH_NOARGS}, {"abort", (PyCFunction)DBTxn_abort, METH_NOARGS}, {"id", (PyCFunction)DBTxn_id, METH_NOARGS}, {"set_timeout", (PyCFunction)DBTxn_set_timeout, METH_VARARGS|METH_KEYWORDS}, {"set_name", (PyCFunction)DBTxn_set_name, METH_VARARGS}, {"get_name", (PyCFunction)DBTxn_get_name, METH_NOARGS}, {NULL, NULL} /* sentinel */ }; static PyMethodDef DBSequence_methods[] = { {"close", 
(PyCFunction)DBSequence_close, METH_VARARGS}, {"get", (PyCFunction)DBSequence_get, METH_VARARGS|METH_KEYWORDS}, {"get_dbp", (PyCFunction)DBSequence_get_dbp, METH_NOARGS}, {"get_key", (PyCFunction)DBSequence_get_key, METH_NOARGS}, {"initial_value", (PyCFunction)DBSequence_initial_value, METH_VARARGS}, {"open", (PyCFunction)DBSequence_open, METH_VARARGS|METH_KEYWORDS}, {"remove", (PyCFunction)DBSequence_remove, METH_VARARGS|METH_KEYWORDS}, {"set_cachesize", (PyCFunction)DBSequence_set_cachesize, METH_VARARGS}, {"get_cachesize", (PyCFunction)DBSequence_get_cachesize, METH_NOARGS}, {"set_flags", (PyCFunction)DBSequence_set_flags, METH_VARARGS}, {"get_flags", (PyCFunction)DBSequence_get_flags, METH_NOARGS}, {"set_range", (PyCFunction)DBSequence_set_range, METH_VARARGS}, {"get_range", (PyCFunction)DBSequence_get_range, METH_NOARGS}, {"stat", (PyCFunction)DBSequence_stat, METH_VARARGS|METH_KEYWORDS}, {"stat_print", (PyCFunction)DBSequence_stat_print, METH_VARARGS|METH_KEYWORDS}, {NULL, NULL} /* sentinel */ }; static PyObject* DBEnv_db_home_get(DBEnvObject* self) { const char *home = NULL; CHECK_ENV_NOT_CLOSED(self); MYDB_BEGIN_ALLOW_THREADS; self->db_env->get_home(self->db_env, &home); MYDB_END_ALLOW_THREADS; if (home == NULL) { RETURN_NONE(); } return PyBytes_FromString(home); } static PyGetSetDef DBEnv_getsets[] = { {"db_home", (getter)DBEnv_db_home_get, NULL,}, {NULL} }; statichere PyTypeObject DB_Type = { #if (PY_VERSION_HEX < 0x03000000) PyObject_HEAD_INIT(NULL) 0, /*ob_size*/ #else PyVarObject_HEAD_INIT(NULL, 0) #endif "DB", /*tp_name*/ sizeof(DBObject), /*tp_basicsize*/ 0, /*tp_itemsize*/ /* methods */ (destructor)DB_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ &DB_sequence,/*tp_as_sequence*/ &DB_mapping,/*tp_as_mapping*/ 0, /*tp_hash*/ 0, /* tp_call */ 0, /* tp_str */ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ #if (PY_VERSION_HEX < 0x03000000) Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ #else Py_TPFLAGS_DEFAULT, /* tp_flags */ #endif 0, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(DBObject, in_weakreflist), /* tp_weaklistoffset */ 0, /*tp_iter*/ 0, /*tp_iternext*/ DB_methods, /*tp_methods*/ 0, /*tp_members*/ }; statichere PyTypeObject DBCursor_Type = { #if (PY_VERSION_HEX < 0x03000000) PyObject_HEAD_INIT(NULL) 0, /*ob_size*/ #else PyVarObject_HEAD_INIT(NULL, 0) #endif "DBCursor", /*tp_name*/ sizeof(DBCursorObject), /*tp_basicsize*/ 0, /*tp_itemsize*/ /* methods */ (destructor)DBCursor_dealloc,/*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash*/ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ #if (PY_VERSION_HEX < 0x03000000) Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ #else Py_TPFLAGS_DEFAULT, /* tp_flags */ #endif 0, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(DBCursorObject, in_weakreflist), /* tp_weaklistoffset */ 0, /*tp_iter*/ 0, /*tp_iternext*/ DBCursor_methods, /*tp_methods*/ 0, /*tp_members*/ }; statichere PyTypeObject DBLogCursor_Type = { #if (PY_VERSION_HEX < 0x03000000) PyObject_HEAD_INIT(NULL) 0, /*ob_size*/ #else PyVarObject_HEAD_INIT(NULL, 0) #endif "DBLogCursor", /*tp_name*/ sizeof(DBLogCursorObject), /*tp_basicsize*/ 0, /*tp_itemsize*/ /* methods */ 
(destructor)DBLogCursor_dealloc,/*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash*/ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ #if (PY_VERSION_HEX < 0x03000000) Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ #else Py_TPFLAGS_DEFAULT, /* tp_flags */ #endif 0, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(DBLogCursorObject, in_weakreflist), /* tp_weaklistoffset */ 0, /*tp_iter*/ 0, /*tp_iternext*/ DBLogCursor_methods, /*tp_methods*/ 0, /*tp_members*/ }; #if (DBVER >= 52) statichere PyTypeObject DBSite_Type = { #if (PY_VERSION_HEX < 0x03000000) PyObject_HEAD_INIT(NULL) 0, /*ob_size*/ #else PyVarObject_HEAD_INIT(NULL, 0) #endif "DBSite", /*tp_name*/ sizeof(DBSiteObject), /*tp_basicsize*/ 0, /*tp_itemsize*/ /* methods */ (destructor)DBSite_dealloc,/*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash*/ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ #if (PY_VERSION_HEX < 0x03000000) Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ #else Py_TPFLAGS_DEFAULT, /* tp_flags */ #endif 0, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(DBSiteObject, in_weakreflist), /* tp_weaklistoffset */ 0, /*tp_iter*/ 0, /*tp_iternext*/ DBSite_methods, /*tp_methods*/ 0, /*tp_members*/ }; #endif statichere PyTypeObject DBEnv_Type = { #if (PY_VERSION_HEX < 0x03000000) PyObject_HEAD_INIT(NULL) 0, /*ob_size*/ #else PyVarObject_HEAD_INIT(NULL, 0) #endif "DBEnv", /*tp_name*/ sizeof(DBEnvObject), /*tp_basicsize*/ 0, /*tp_itemsize*/ /* methods */ (destructor)DBEnv_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash*/ 0, /* tp_call */ 0, /* tp_str */ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ #if (PY_VERSION_HEX < 0x03000000) Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ #else Py_TPFLAGS_DEFAULT, /* tp_flags */ #endif 0, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(DBEnvObject, in_weakreflist), /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ DBEnv_methods, /* tp_methods */ 0, /* tp_members */ DBEnv_getsets, /* tp_getsets */ }; statichere PyTypeObject DBTxn_Type = { #if (PY_VERSION_HEX < 0x03000000) PyObject_HEAD_INIT(NULL) 0, /*ob_size*/ #else PyVarObject_HEAD_INIT(NULL, 0) #endif "DBTxn", /*tp_name*/ sizeof(DBTxnObject), /*tp_basicsize*/ 0, /*tp_itemsize*/ /* methods */ (destructor)DBTxn_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash*/ 0, /* tp_call */ 0, /* tp_str */ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ #if (PY_VERSION_HEX < 0x03000000) Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ #else Py_TPFLAGS_DEFAULT, /* tp_flags */ #endif 0, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(DBTxnObject, in_weakreflist), /* tp_weaklistoffset */ 0, /*tp_iter*/ 0, /*tp_iternext*/ DBTxn_methods, /*tp_methods*/ 0, /*tp_members*/ }; statichere PyTypeObject DBLock_Type = { #if 
(PY_VERSION_HEX < 0x03000000) PyObject_HEAD_INIT(NULL) 0, /*ob_size*/ #else PyVarObject_HEAD_INIT(NULL, 0) #endif "DBLock", /*tp_name*/ sizeof(DBLockObject), /*tp_basicsize*/ 0, /*tp_itemsize*/ /* methods */ (destructor)DBLock_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash*/ 0, /* tp_call */ 0, /* tp_str */ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ #if (PY_VERSION_HEX < 0x03000000) Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ #else Py_TPFLAGS_DEFAULT, /* tp_flags */ #endif 0, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(DBLockObject, in_weakreflist), /* tp_weaklistoffset */ }; statichere PyTypeObject DBSequence_Type = { #if (PY_VERSION_HEX < 0x03000000) PyObject_HEAD_INIT(NULL) 0, /*ob_size*/ #else PyVarObject_HEAD_INIT(NULL, 0) #endif "DBSequence", /*tp_name*/ sizeof(DBSequenceObject), /*tp_basicsize*/ 0, /*tp_itemsize*/ /* methods */ (destructor)DBSequence_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash*/ 0, /* tp_call */ 0, /* tp_str */ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ #if (PY_VERSION_HEX < 0x03000000) Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ #else Py_TPFLAGS_DEFAULT, /* tp_flags */ #endif 0, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(DBSequenceObject, in_weakreflist), /* tp_weaklistoffset */ 0, /*tp_iter*/ 0, /*tp_iternext*/ DBSequence_methods, /*tp_methods*/ 0, /*tp_members*/ }; /* --------------------------------------------------------------------- */ /* Module-level functions */ static PyObject* DB_construct(PyObject* self, PyObject* args, PyObject* kwargs) { PyObject* dbenvobj = NULL; int flags = 0; static char* kwnames[] = { "dbEnv", "flags", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|Oi:DB", kwnames, &dbenvobj, &flags)) return NULL; if (dbenvobj == Py_None) dbenvobj = NULL; else if (dbenvobj && !DBEnvObject_Check(dbenvobj)) { makeTypeError("DBEnv", dbenvobj); return NULL; } return (PyObject* )newDBObject((DBEnvObject*)dbenvobj, flags); } static PyObject* DBEnv_construct(PyObject* self, PyObject* args) { int flags = 0; if (!PyArg_ParseTuple(args, "|i:DbEnv", &flags)) return NULL; return (PyObject* )newDBEnvObject(flags); } static PyObject* DBSequence_construct(PyObject* self, PyObject* args, PyObject* kwargs) { PyObject* dbobj; int flags = 0; static char* kwnames[] = { "db", "flags", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|i:DBSequence", kwnames, &dbobj, &flags)) return NULL; if (!DBObject_Check(dbobj)) { makeTypeError("DB", dbobj); return NULL; } return (PyObject* )newDBSequenceObject((DBObject*)dbobj, flags); } static char bsddb_version_doc[] = "Returns a tuple of major, minor, and patch release numbers of the\n\ underlying DB library."; static PyObject* bsddb_version(PyObject* self) { int major, minor, patch; /* This should be instantaneous, no need to release the GIL */ db_version(&major, &minor, &patch); return Py_BuildValue("(iii)", major, minor, patch); } #if (DBVER >= 50) static PyObject* bsddb_version_full(PyObject* self) { char *version_string; int family, release, major, minor, patch; /* This should be instantaneous, no need to release the GIL */ version_string = db_full_version(&family, &release, 
&major, &minor, &patch); return Py_BuildValue("(siiiii)", version_string, family, release, major, minor, patch); } #endif /* List of functions defined in the module */ static PyMethodDef bsddb_methods[] = { {"DB", (PyCFunction)DB_construct, METH_VARARGS | METH_KEYWORDS }, {"DBEnv", (PyCFunction)DBEnv_construct, METH_VARARGS}, {"DBSequence", (PyCFunction)DBSequence_construct, METH_VARARGS | METH_KEYWORDS }, {"version", (PyCFunction)bsddb_version, METH_NOARGS, bsddb_version_doc}, #if (DBVER >= 50) {"full_version", (PyCFunction)bsddb_version_full, METH_NOARGS}, #endif {NULL, NULL} /* sentinel */ }; /* API structure */ static BSDDB_api bsddb_api; /* --------------------------------------------------------------------- */ /* Module initialization */ /* Convenience routine to export an integer value. * Errors are silently ignored, for better or for worse... */ #define ADD_INT(dict, NAME) _addIntToDict(dict, #NAME, NAME) /* ** We can rename the module at import time, so the string allocated ** must be big enough, and any use of the name must use this particular ** string. */ #define MODULE_NAME_MAX_LEN 11 static char _bsddbModuleName[MODULE_NAME_MAX_LEN+1] = "_bsddb"; #if (PY_VERSION_HEX >= 0x03000000) static struct PyModuleDef bsddbmodule = { PyModuleDef_HEAD_INIT, _bsddbModuleName, /* Name of module */ NULL, /* module documentation, may be NULL */ -1, /* size of per-interpreter state of the module, or -1 if the module keeps state in global variables. */ bsddb_methods, NULL, /* Reload */ NULL, /* Traverse */ NULL, /* Clear */ NULL /* Free */ }; #endif #if (PY_VERSION_HEX < 0x03000000) DL_EXPORT(void) init_bsddb(void) #else PyMODINIT_FUNC PyInit__bsddb(void) /* Note the two underscores */ #endif { PyObject* m; PyObject* d; PyObject* py_api; PyObject* pybsddb_version_s; PyObject* db_version_s; #if (PY_VERSION_HEX < 0x03000000) pybsddb_version_s = PyString_FromString(PY_BSDDB_VERSION); db_version_s = PyString_FromString(DB_VERSION_STRING); #else /* This data should be ascii, so UTF-8 conversion is fine */ pybsddb_version_s = PyUnicode_FromString(PY_BSDDB_VERSION); db_version_s = PyUnicode_FromString(DB_VERSION_STRING); #endif /* Initialize object types */ if ((PyType_Ready(&DB_Type) < 0) || (PyType_Ready(&DBCursor_Type) < 0) || (PyType_Ready(&DBLogCursor_Type) < 0) || (PyType_Ready(&DBEnv_Type) < 0) || (PyType_Ready(&DBTxn_Type) < 0) || (PyType_Ready(&DBLock_Type) < 0) || (PyType_Ready(&DBSequence_Type) < 0) #if (DBVER >= 52) || (PyType_Ready(&DBSite_Type) < 0) #endif ) { #if (PY_VERSION_HEX < 0x03000000) return; #else return NULL; #endif } /* Create the module and add the functions */ #if (PY_VERSION_HEX < 0x03000000) m = Py_InitModule(_bsddbModuleName, bsddb_methods); #else m=PyModule_Create(&bsddbmodule); #endif if (m == NULL) { #if (PY_VERSION_HEX < 0x03000000) return; #else return NULL; #endif } /* Add some symbolic constants to the module */ d = PyModule_GetDict(m); PyDict_SetItemString(d, "__version__", pybsddb_version_s); PyDict_SetItemString(d, "DB_VERSION_STRING", db_version_s); Py_DECREF(pybsddb_version_s); pybsddb_version_s = NULL; Py_DECREF(db_version_s); db_version_s = NULL; ADD_INT(d, DB_VERSION_MAJOR); ADD_INT(d, DB_VERSION_MINOR); ADD_INT(d, DB_VERSION_PATCH); ADD_INT(d, DB_MAX_PAGES); ADD_INT(d, DB_MAX_RECORDS); #if (DBVER < 48) ADD_INT(d, DB_RPCCLIENT); #endif #if (DBVER < 48) ADD_INT(d, DB_XA_CREATE); #endif ADD_INT(d, DB_CREATE); ADD_INT(d, DB_NOMMAP); ADD_INT(d, DB_THREAD); ADD_INT(d, DB_MULTIVERSION); ADD_INT(d, DB_FORCE); ADD_INT(d, DB_INIT_CDB); ADD_INT(d, DB_INIT_LOCK); 
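/*
 * The DB_CREATE/DB_THREAD constants above combine by bitwise OR with the
 * DB_INIT_* subsystem flags exported around them. A minimal sketch of the
 * combination from the Python side, mirroring the pattern used by
 * Lib/bsddb/dbtables.py in this package; the path is illustrative only:
 *
 *     from bsddb3 import db
 *     env = db.DBEnv()
 *     env.open("/tmp/example-env",
 *              db.DB_CREATE | db.DB_THREAD | db.DB_INIT_MPOOL |
 *              db.DB_INIT_LOCK | db.DB_INIT_LOG | db.DB_INIT_TXN)
 */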
ADD_INT(d, DB_INIT_LOG); ADD_INT(d, DB_INIT_MPOOL); ADD_INT(d, DB_INIT_TXN); ADD_INT(d, DB_JOINENV); #if (DBVER >= 48) ADD_INT(d, DB_GID_SIZE); #else ADD_INT(d, DB_XIDDATASIZE); /* Allow new code to work in old BDB releases */ _addIntToDict(d, "DB_GID_SIZE", DB_XIDDATASIZE); #endif ADD_INT(d, DB_RECOVER); ADD_INT(d, DB_RECOVER_FATAL); ADD_INT(d, DB_TXN_NOSYNC); ADD_INT(d, DB_USE_ENVIRON); ADD_INT(d, DB_USE_ENVIRON_ROOT); ADD_INT(d, DB_LOCKDOWN); ADD_INT(d, DB_PRIVATE); ADD_INT(d, DB_SYSTEM_MEM); ADD_INT(d, DB_TXN_SYNC); ADD_INT(d, DB_TXN_NOWAIT); #if (DBVER >= 51) ADD_INT(d, DB_TXN_BULK); #endif #if (DBVER >= 48) ADD_INT(d, DB_CURSOR_BULK); #endif ADD_INT(d, DB_TXN_WAIT); ADD_INT(d, DB_EXCL); ADD_INT(d, DB_FCNTL_LOCKING); ADD_INT(d, DB_ODDFILESIZE); ADD_INT(d, DB_RDWRMASTER); ADD_INT(d, DB_RDONLY); ADD_INT(d, DB_TRUNCATE); ADD_INT(d, DB_EXTENT); ADD_INT(d, DB_CDB_ALLDB); ADD_INT(d, DB_VERIFY); ADD_INT(d, DB_UPGRADE); ADD_INT(d, DB_PRINTABLE); ADD_INT(d, DB_AGGRESSIVE); ADD_INT(d, DB_NOORDERCHK); ADD_INT(d, DB_ORDERCHKONLY); ADD_INT(d, DB_PR_PAGE); ADD_INT(d, DB_PR_RECOVERYTEST); ADD_INT(d, DB_SALVAGE); ADD_INT(d, DB_LOCK_NORUN); ADD_INT(d, DB_LOCK_DEFAULT); ADD_INT(d, DB_LOCK_OLDEST); ADD_INT(d, DB_LOCK_RANDOM); ADD_INT(d, DB_LOCK_YOUNGEST); ADD_INT(d, DB_LOCK_MAXLOCKS); ADD_INT(d, DB_LOCK_MINLOCKS); ADD_INT(d, DB_LOCK_MINWRITE); ADD_INT(d, DB_LOCK_EXPIRE); ADD_INT(d, DB_LOCK_MAXWRITE); _addIntToDict(d, "DB_LOCK_CONFLICT", 0); ADD_INT(d, DB_LOCK_DUMP); ADD_INT(d, DB_LOCK_GET); ADD_INT(d, DB_LOCK_INHERIT); ADD_INT(d, DB_LOCK_PUT); ADD_INT(d, DB_LOCK_PUT_ALL); ADD_INT(d, DB_LOCK_PUT_OBJ); ADD_INT(d, DB_LOCK_NG); ADD_INT(d, DB_LOCK_READ); ADD_INT(d, DB_LOCK_WRITE); ADD_INT(d, DB_LOCK_NOWAIT); ADD_INT(d, DB_LOCK_WAIT); ADD_INT(d, DB_LOCK_IWRITE); ADD_INT(d, DB_LOCK_IREAD); ADD_INT(d, DB_LOCK_IWR); ADD_INT(d, DB_LOCK_READ_UNCOMMITTED); ADD_INT(d, DB_LOCK_WWRITE); ADD_INT(d, DB_LOCK_RECORD); ADD_INT(d, DB_LOCK_UPGRADE); ADD_INT(d, DB_LOCK_SWITCH); ADD_INT(d, DB_LOCK_UPGRADE_WRITE); ADD_INT(d, DB_LOCK_NOWAIT); ADD_INT(d, DB_LOCK_RECORD); ADD_INT(d, DB_LOCK_UPGRADE); ADD_INT(d, DB_LSTAT_ABORTED); ADD_INT(d, DB_LSTAT_FREE); ADD_INT(d, DB_LSTAT_HELD); ADD_INT(d, DB_LSTAT_PENDING); ADD_INT(d, DB_LSTAT_WAITING); ADD_INT(d, DB_ARCH_ABS); ADD_INT(d, DB_ARCH_DATA); ADD_INT(d, DB_ARCH_LOG); ADD_INT(d, DB_ARCH_REMOVE); ADD_INT(d, DB_BTREE); ADD_INT(d, DB_HASH); ADD_INT(d, DB_RECNO); ADD_INT(d, DB_QUEUE); ADD_INT(d, DB_UNKNOWN); ADD_INT(d, DB_DUP); ADD_INT(d, DB_DUPSORT); ADD_INT(d, DB_RECNUM); ADD_INT(d, DB_RENUMBER); ADD_INT(d, DB_REVSPLITOFF); ADD_INT(d, DB_SNAPSHOT); ADD_INT(d, DB_INORDER); ADD_INT(d, DB_JOIN_NOSORT); ADD_INT(d, DB_AFTER); ADD_INT(d, DB_APPEND); ADD_INT(d, DB_BEFORE); ADD_INT(d, DB_CONSUME); ADD_INT(d, DB_CONSUME_WAIT); ADD_INT(d, DB_CURRENT); ADD_INT(d, DB_FAST_STAT); ADD_INT(d, DB_FIRST); ADD_INT(d, DB_FLUSH); ADD_INT(d, DB_GET_BOTH); ADD_INT(d, DB_GET_BOTH_RANGE); ADD_INT(d, DB_GET_RECNO); ADD_INT(d, DB_JOIN_ITEM); ADD_INT(d, DB_KEYFIRST); ADD_INT(d, DB_KEYLAST); ADD_INT(d, DB_LAST); ADD_INT(d, DB_NEXT); ADD_INT(d, DB_NEXT_DUP); ADD_INT(d, DB_NEXT_NODUP); ADD_INT(d, DB_NODUPDATA); ADD_INT(d, DB_NOOVERWRITE); ADD_INT(d, DB_NOSYNC); ADD_INT(d, DB_POSITION); ADD_INT(d, DB_PREV); ADD_INT(d, DB_PREV_NODUP); ADD_INT(d, DB_PREV_DUP); ADD_INT(d, DB_SET); ADD_INT(d, DB_SET_RANGE); ADD_INT(d, DB_SET_RECNO); ADD_INT(d, DB_WRITECURSOR); ADD_INT(d, DB_OPFLAGS_MASK); ADD_INT(d, DB_RMW); ADD_INT(d, DB_DIRTY_READ); ADD_INT(d, DB_MULTIPLE); ADD_INT(d, DB_MULTIPLE_KEY); ADD_INT(d, DB_IMMUTABLE_KEY); 
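/*
 * The cursor positioning constants just exported (DB_SET_RANGE, DB_NEXT,
 * DB_PREV, ...) drive DBCursor traversal. A hedged sketch from Python,
 * assuming 'mydb' is an already-open DB handle:
 *
 *     cur = mydb.cursor()
 *     rec = cur.set_range("some-prefix")   # first key >= "some-prefix"
 *     while rec:
 *         key, data = rec
 *         rec = cur.next()                 # DB_NEXT under the hood
 *     cur.close()
 *
 * Whether exhaustion shows up as None or as DBNotFoundError depends on
 * set_get_returns_none(), wired up in the method tables above.
 */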
ADD_INT(d, DB_READ_UNCOMMITTED); ADD_INT(d, DB_READ_COMMITTED); ADD_INT(d, DB_FREELIST_ONLY); ADD_INT(d, DB_FREE_SPACE); ADD_INT(d, DB_DONOTINDEX); ADD_INT(d, DB_KEYEMPTY); ADD_INT(d, DB_KEYEXIST); ADD_INT(d, DB_LOCK_DEADLOCK); ADD_INT(d, DB_LOCK_NOTGRANTED); ADD_INT(d, DB_NOSERVER); #if (DBVER < 52) ADD_INT(d, DB_NOSERVER_HOME); ADD_INT(d, DB_NOSERVER_ID); #endif ADD_INT(d, DB_NOTFOUND); ADD_INT(d, DB_OLD_VERSION); ADD_INT(d, DB_RUNRECOVERY); ADD_INT(d, DB_VERIFY_BAD); ADD_INT(d, DB_PAGE_NOTFOUND); ADD_INT(d, DB_SECONDARY_BAD); ADD_INT(d, DB_STAT_CLEAR); ADD_INT(d, DB_REGION_INIT); ADD_INT(d, DB_NOLOCKING); ADD_INT(d, DB_YIELDCPU); ADD_INT(d, DB_PANIC_ENVIRONMENT); ADD_INT(d, DB_NOPANIC); ADD_INT(d, DB_OVERWRITE); ADD_INT(d, DB_STAT_SUBSYSTEM); ADD_INT(d, DB_STAT_MEMP_HASH); ADD_INT(d, DB_STAT_LOCK_CONF); ADD_INT(d, DB_STAT_LOCK_LOCKERS); ADD_INT(d, DB_STAT_LOCK_OBJECTS); ADD_INT(d, DB_STAT_LOCK_PARAMS); #if (DBVER >= 48) ADD_INT(d, DB_OVERWRITE_DUP); #endif ADD_INT(d, DB_FOREIGN_ABORT); ADD_INT(d, DB_FOREIGN_CASCADE); ADD_INT(d, DB_FOREIGN_NULLIFY); ADD_INT(d, DB_REGISTER); ADD_INT(d, DB_EID_INVALID); ADD_INT(d, DB_EID_BROADCAST); ADD_INT(d, DB_TIME_NOTGRANTED); ADD_INT(d, DB_TXN_NOT_DURABLE); ADD_INT(d, DB_TXN_WRITE_NOSYNC); ADD_INT(d, DB_DIRECT_DB); ADD_INT(d, DB_INIT_REP); ADD_INT(d, DB_ENCRYPT); ADD_INT(d, DB_CHKSUM); ADD_INT(d, DB_LOG_DIRECT); ADD_INT(d, DB_LOG_DSYNC); ADD_INT(d, DB_LOG_IN_MEMORY); ADD_INT(d, DB_LOG_AUTO_REMOVE); ADD_INT(d, DB_LOG_ZERO); #if (DBVER >= 60) ADD_INT(d, DB_LOG_BLOB); #endif ADD_INT(d, DB_DSYNC_DB); ADD_INT(d, DB_TXN_SNAPSHOT); ADD_INT(d, DB_VERB_DEADLOCK); ADD_INT(d, DB_VERB_FILEOPS); ADD_INT(d, DB_VERB_FILEOPS_ALL); ADD_INT(d, DB_VERB_RECOVERY); ADD_INT(d, DB_VERB_REGISTER); ADD_INT(d, DB_VERB_REPLICATION); ADD_INT(d, DB_VERB_WAITSFOR); #if (DBVER >= 50) ADD_INT(d, DB_VERB_REP_SYSTEM); #endif ADD_INT(d, DB_VERB_REP_ELECT); ADD_INT(d, DB_VERB_REP_LEASE); ADD_INT(d, DB_VERB_REP_MISC); ADD_INT(d, DB_VERB_REP_MSGS); ADD_INT(d, DB_VERB_REP_SYNC); ADD_INT(d, DB_VERB_REPMGR_CONNFAIL); ADD_INT(d, DB_VERB_REPMGR_MISC); ADD_INT(d, DB_EVENT_PANIC); ADD_INT(d, DB_EVENT_REP_CLIENT); ADD_INT(d, DB_EVENT_REP_ELECTED); ADD_INT(d, DB_EVENT_REP_MASTER); ADD_INT(d, DB_EVENT_REP_NEWMASTER); ADD_INT(d, DB_EVENT_REP_PERM_FAILED); ADD_INT(d, DB_EVENT_REP_STARTUPDONE); ADD_INT(d, DB_EVENT_WRITE_FAILED); #if (DBVER >= 50) ADD_INT(d, DB_REPMGR_CONF_ELECTIONS); ADD_INT(d, DB_EVENT_REP_MASTER_FAILURE); ADD_INT(d, DB_EVENT_REP_DUPMASTER); ADD_INT(d, DB_EVENT_REP_ELECTION_FAILED); #endif #if (DBVER >= 48) ADD_INT(d, DB_EVENT_REG_ALIVE); ADD_INT(d, DB_EVENT_REG_PANIC); #endif #if (DBVER >= 60) ADD_INT(d, DB_EVENT_REP_AUTOTAKEOVER_FAILED); #endif #if (DBVER >=52) ADD_INT(d, DB_EVENT_REP_SITE_ADDED); ADD_INT(d, DB_EVENT_REP_SITE_REMOVED); ADD_INT(d, DB_EVENT_REP_LOCAL_SITE_REMOVED); ADD_INT(d, DB_EVENT_REP_CONNECT_BROKEN); ADD_INT(d, DB_EVENT_REP_CONNECT_ESTD); ADD_INT(d, DB_EVENT_REP_CONNECT_TRY_FAILED); ADD_INT(d, DB_EVENT_REP_INIT_DONE); ADD_INT(d, DB_MEM_LOCK); ADD_INT(d, DB_MEM_LOCKOBJECT); ADD_INT(d, DB_MEM_LOCKER); ADD_INT(d, DB_MEM_LOGID); ADD_INT(d, DB_MEM_TRANSACTION); ADD_INT(d, DB_MEM_THREAD); ADD_INT(d, DB_BOOTSTRAP_HELPER); ADD_INT(d, DB_GROUP_CREATOR); ADD_INT(d, DB_LEGACY); ADD_INT(d, DB_LOCAL_SITE); ADD_INT(d, DB_REPMGR_PEER); #endif ADD_INT(d, DB_REP_DUPMASTER); ADD_INT(d, DB_REP_HOLDELECTION); ADD_INT(d, DB_REP_IGNORE); ADD_INT(d, DB_REP_JOIN_FAILURE); ADD_INT(d, DB_REP_ISPERM); ADD_INT(d, DB_REP_NOTPERM); ADD_INT(d, DB_REP_NEWSITE); ADD_INT(d, DB_REP_MASTER); 
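/*
 * The DB_EVENT_* codes exported above are delivered to the callback
 * registered through DBEnv.set_event_notify(). A sketch only: the exact
 * callback signature is not spelled out here, so the example accepts any
 * arguments, and 'env' is an assumed open DBEnv:
 *
 *     def on_event(*args):
 *         if db.DB_EVENT_REP_MASTER in args:
 *             print "this site was elected replication master"
 *     env.set_event_notify(on_event)
 */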
ADD_INT(d, DB_REP_CLIENT); ADD_INT(d, DB_REP_PERMANENT); #if (DBVER >= 50) ADD_INT(d, DB_REP_CONF_AUTOINIT); #else ADD_INT(d, DB_REP_CONF_NOAUTOINIT); #endif /* 5.0 */ ADD_INT(d, DB_REP_CONF_DELAYCLIENT); ADD_INT(d, DB_REP_CONF_BULK); ADD_INT(d, DB_REP_CONF_NOWAIT); ADD_INT(d, DB_REP_ANYWHERE); ADD_INT(d, DB_REP_REREQUEST); ADD_INT(d, DB_REP_NOBUFFER); ADD_INT(d, DB_REP_LEASE_EXPIRED); ADD_INT(d, DB_IGNORE_LEASE); ADD_INT(d, DB_REP_CONF_LEASE); ADD_INT(d, DB_REPMGR_CONF_2SITE_STRICT); ADD_INT(d, DB_REP_ELECTION); ADD_INT(d, DB_REP_ACK_TIMEOUT); ADD_INT(d, DB_REP_CONNECTION_RETRY); ADD_INT(d, DB_REP_ELECTION_TIMEOUT); ADD_INT(d, DB_REP_ELECTION_RETRY); ADD_INT(d, DB_REP_CHECKPOINT_DELAY); ADD_INT(d, DB_REP_FULL_ELECTION_TIMEOUT); ADD_INT(d, DB_REP_LEASE_TIMEOUT); ADD_INT(d, DB_REP_HEARTBEAT_MONITOR); ADD_INT(d, DB_REP_HEARTBEAT_SEND); ADD_INT(d, DB_REPMGR_PEER); ADD_INT(d, DB_REPMGR_ACKS_ALL); ADD_INT(d, DB_REPMGR_ACKS_ALL_PEERS); ADD_INT(d, DB_REPMGR_ACKS_NONE); ADD_INT(d, DB_REPMGR_ACKS_ONE); ADD_INT(d, DB_REPMGR_ACKS_ONE_PEER); ADD_INT(d, DB_REPMGR_ACKS_QUORUM); ADD_INT(d, DB_REPMGR_CONNECTED); ADD_INT(d, DB_REPMGR_DISCONNECTED); ADD_INT(d, DB_STAT_ALL); #if (DBVER >= 51) ADD_INT(d, DB_REPMGR_ACKS_ALL_AVAILABLE); #endif #if (DBVER >= 48) ADD_INT(d, DB_REP_CONF_INMEM); #endif #if (DBVER >= 60) ADD_INT(d, DB_REPMGR_ISVIEW); #endif #if (DBVER >= 60) ADD_INT(d, DB_DBT_BLOB); #endif #if (DBVER >= 60) ADD_INT(d, DB_STREAM_READ); ADD_INT(d, DB_STREAM_WRITE); ADD_INT(d, DB_STREAM_SYNC_WRITE); #endif ADD_INT(d, DB_TIMEOUT); #if (DBVER >= 50) ADD_INT(d, DB_FORCESYNC); #endif #if (DBVER >= 48) ADD_INT(d, DB_FAILCHK); #endif #if (DBVER >= 51) ADD_INT(d, DB_HOTBACKUP_IN_PROGRESS); #endif ADD_INT(d, DB_BUFFER_SMALL); ADD_INT(d, DB_SEQ_DEC); ADD_INT(d, DB_SEQ_INC); ADD_INT(d, DB_SEQ_WRAP); ADD_INT(d, DB_ENCRYPT_AES); ADD_INT(d, DB_AUTO_COMMIT); ADD_INT(d, DB_PRIORITY_VERY_LOW); ADD_INT(d, DB_PRIORITY_LOW); ADD_INT(d, DB_PRIORITY_DEFAULT); ADD_INT(d, DB_PRIORITY_HIGH); ADD_INT(d, DB_PRIORITY_VERY_HIGH); ADD_INT(d, DB_PRIORITY_UNCHANGED); ADD_INT(d, EINVAL); ADD_INT(d, EACCES); ADD_INT(d, ENOSPC); ADD_INT(d, ENOMEM); ADD_INT(d, EAGAIN); ADD_INT(d, EBUSY); ADD_INT(d, EEXIST); ADD_INT(d, ENOENT); ADD_INT(d, EPERM); ADD_INT(d, DB_SET_LOCK_TIMEOUT); ADD_INT(d, DB_SET_TXN_TIMEOUT); #if (DBVER >= 48) ADD_INT(d, DB_SET_REG_TIMEOUT); #endif /* The exception name must be correct for pickled exception * * objects to unpickle properly. */ #define PYBSDDB_EXCEPTION_BASE "bsddb3.db." 
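/*
 * PYBSDDB_EXCEPTION_BASE prefixes every exception name created below, so a
 * pickled instance resolves back to "bsddb3.db.<name>". A minimal sketch of
 * catching one from Python ('mydb' is an assumed open DB handle):
 *
 *     from bsddb3 import db
 *     try:
 *         mydb.delete("no-such-key")
 *     except db.DBNotFoundError:
 *         pass   # also catchable as KeyError, thanks to MAKE_EX2 below
 */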
/* All the rest of the exceptions derive only from DBError */ #define MAKE_EX(name) name = PyErr_NewException(PYBSDDB_EXCEPTION_BASE #name, DBError, NULL); \ PyDict_SetItemString(d, #name, name) /* The base exception class is DBError */ DBError = NULL; /* used in MAKE_EX so that it derives from nothing */ MAKE_EX(DBError); { PyObject* bases; bases = PyTuple_Pack(2, DBError, PyExc_KeyError); #define MAKE_EX2(name) name = PyErr_NewException(PYBSDDB_EXCEPTION_BASE #name, bases, NULL); \ PyDict_SetItemString(d, #name, name) MAKE_EX2(DBNotFoundError); MAKE_EX2(DBKeyEmptyError); #undef MAKE_EX2 Py_XDECREF(bases); } MAKE_EX(DBCursorClosedError); MAKE_EX(DBKeyExistError); MAKE_EX(DBLockDeadlockError); MAKE_EX(DBLockNotGrantedError); MAKE_EX(DBOldVersionError); MAKE_EX(DBRunRecoveryError); MAKE_EX(DBVerifyBadError); MAKE_EX(DBNoServerError); #if (DBVER < 52) MAKE_EX(DBNoServerHomeError); MAKE_EX(DBNoServerIDError); #endif MAKE_EX(DBPageNotFoundError); MAKE_EX(DBSecondaryBadError); MAKE_EX(DBInvalidArgError); MAKE_EX(DBAccessError); MAKE_EX(DBNoSpaceError); MAKE_EX(DBNoMemoryError); MAKE_EX(DBAgainError); MAKE_EX(DBBusyError); MAKE_EX(DBFileExistsError); MAKE_EX(DBNoSuchFileError); MAKE_EX(DBPermissionsError); MAKE_EX(DBRepHandleDeadError); MAKE_EX(DBRepLockoutError); MAKE_EX(DBRepUnavailError); MAKE_EX(DBRepLeaseExpiredError); MAKE_EX(DBForeignConflictError); #undef MAKE_EX /* Initialise the C API structure and add it to the module */ bsddb_api.api_version = PYBSDDB_API_VERSION; bsddb_api.db_type = &DB_Type; bsddb_api.dbcursor_type = &DBCursor_Type; bsddb_api.dblogcursor_type = &DBLogCursor_Type; bsddb_api.dbenv_type = &DBEnv_Type; bsddb_api.dbtxn_type = &DBTxn_Type; bsddb_api.dblock_type = &DBLock_Type; bsddb_api.dbsequence_type = &DBSequence_Type; bsddb_api.makeDBError = makeDBError; #if (PY_VERSION_HEX < 0x02070000) py_api = PyCObject_FromVoidPtr((void*)&bsddb_api, NULL); #else { /* ** The data must outlive the call!! Hence the static definition. ** The buffer must be big enough... */ static char py_api_name[MODULE_NAME_MAX_LEN+10]; strcpy(py_api_name, _bsddbModuleName); strcat(py_api_name, ".api"); py_api = PyCapsule_New((void*)&bsddb_api, py_api_name, NULL); } #endif /* Check error control */ /* ** PyErr_NoMemory(); ** py_api = NULL; */ if (py_api) { PyDict_SetItemString(d, "api", py_api); Py_DECREF(py_api); } else { /* Something bad happened! */ PyErr_WriteUnraisable(m); if(PyErr_Warn(PyExc_RuntimeWarning, "_pybsddb C API will not be available")) { PyErr_WriteUnraisable(m); } PyErr_Clear(); } /* Check for errors */ if (PyErr_Occurred()) { PyErr_Print(); Py_FatalError("can't initialize module _pybsddb"); Py_DECREF(m); m = NULL; } #if (PY_VERSION_HEX < 0x03000000) return; #else return m; #endif } /* allow this module to be named _pybsddb so that it can be installed * and imported on top of python >= 2.3 that includes its own older * copy of the library named _bsddb without importing the old version. */ #if (PY_VERSION_HEX < 0x03000000) DL_EXPORT(void) init_pybsddb(void) #else PyMODINIT_FUNC PyInit__pybsddb(void) /* Note the two underscores */ #endif { strncpy(_bsddbModuleName, "_pybsddb", MODULE_NAME_MAX_LEN); #if (PY_VERSION_HEX < 0x03000000) init_bsddb(); #else return PyInit__bsddb(); /* Note the two underscores */ #endif } bsddb3-6.1.0/Modules/bsddb.h0000644000000000000000000002440012363167637015472 0ustar rootroot00000000000000/*---------------------------------------------------------------------- Copyright (c) 1999-2001, Digital Creations, Fredericksburg, VA, USA and Andrew Kuchling. 
All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: o Redistributions of source code must retain the above copyright notice, this list of conditions, and the disclaimer that follows. o Redistributions in binary form must reproduce the above copyright notice, this list of conditions, and the following disclaimer in the documentation and/or other materials provided with the distribution. o Neither the name of Digital Creations nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY DIGITAL CREATIONS AND CONTRIBUTORS *AS IS* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL DIGITAL CREATIONS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ------------------------------------------------------------------------*/ /* * Handwritten code to wrap version 3.x of the Berkeley DB library, * written to replace a SWIG-generated file. It has since been updated * to compile with Berkeley DB versions 3.2 through 4.2. * * This module was started by Andrew Kuchling to remove the dependency * on SWIG in a package by Gregory P. Smith who based his work on a * similar package by Robin Dunn which wrapped * Berkeley DB 2.7.x. * * Development of this module then returned full circle back to Robin Dunn * who worked on behalf of Digital Creations to complete the wrapping of * the DB 3.x API and to build a solid unit test suite. Robin has * since gone onto other projects (wxPython). * * Gregory P. Smith is once again the maintainer. * * Since January 2008, new maintainer is Jesus Cea . * Jesus Cea licenses this code to PSF under a Contributor Agreement. * * Use the pybsddb@jcea.es mailing list for all questions. * Things can change faster than the header of this file is updated. * * http://www.jcea.es/programacion/pybsddb.htm * * This module contains 8 types: * * DB (Database) * DBCursor (Database Cursor) * DBEnv (database environment) * DBTxn (An explicit database transaction) * DBLock (A lock handle) * DBSequence (Sequence) * DBSite (Site) * DBLogCursor (Log Cursor) * */ /* --------------------------------------------------------------------- */ /* * Portions of this module, associated unit tests and build scripts are the * result of a contract with The Written Word (http://thewrittenword.com/) * Many thanks go out to them for causing me to raise the bar on quality and * functionality, resulting in a better bsddb3 package for all of us to use. * * --Robin */ /* --------------------------------------------------------------------- */ /* * Work to split it up into a separate header and to add a C API was * contributed by Duncan Grisby . 
See here: * http://sourceforge.net/tracker/index.php?func=detail&aid=1551895&group_id=13900&atid=313900 */ /* --------------------------------------------------------------------- */ #ifndef _BSDDB_H_ #define _BSDDB_H_ #include <db.h> /* 40 = 4.0, 33 = 3.3; this will break if the minor revision is > 9 */ #define DBVER (DB_VERSION_MAJOR * 10 + DB_VERSION_MINOR) #if DB_VERSION_MINOR > 9 #error "eek! DBVER can't handle minor versions > 9" #endif #define PY_BSDDB_VERSION "6.1.0" /* Python object definitions */ struct behaviourFlags { /* What is the default behaviour when DB->get or DBCursor->get returns a DB_NOTFOUND || DB_KEYEMPTY error? Return None or raise an exception? */ unsigned int getReturnsNone : 1; /* What is the default behaviour for DBCursor.set* methods when DBCursor->get * returns a DB_NOTFOUND || DB_KEYEMPTY error? Return None or raise? */ unsigned int cursorSetReturnsNone : 1; }; struct DBObject; /* Forward declaration */ struct DBCursorObject; /* Forward declaration */ struct DBLogCursorObject; /* Forward declaration */ struct DBTxnObject; /* Forward declaration */ struct DBSequenceObject; /* Forward declaration */ #if (DBVER >= 52) struct DBSiteObject; /* Forward declaration */ #endif typedef struct { PyObject_HEAD DB_ENV* db_env; u_int32_t flags; /* saved flags from open() */ int closed; struct behaviourFlags moduleFlags; PyObject* event_notifyCallback; struct DBObject *children_dbs; struct DBTxnObject *children_txns; struct DBLogCursorObject *children_logcursors; #if (DBVER >= 52) struct DBSiteObject *children_sites; #endif PyObject *private_obj; PyObject *rep_transport; PyObject *in_weakreflist; /* List of weak references */ } DBEnvObject; typedef struct DBObject { PyObject_HEAD DB* db; DBEnvObject* myenvobj; /* PyObject containing the DB_ENV */ u_int32_t flags; /* saved flags from open() */ u_int32_t setflags; /* saved flags from set_flags() */ struct behaviourFlags moduleFlags; struct DBTxnObject *txn; struct DBCursorObject *children_cursors; struct DBSequenceObject *children_sequences; struct DBObject **sibling_prev_p; struct DBObject *sibling_next; struct DBObject **sibling_prev_p_txn; struct DBObject *sibling_next_txn; PyObject* associateCallback; PyObject* btCompareCallback; PyObject* dupCompareCallback; int primaryDBType; PyObject *private_obj; PyObject *in_weakreflist; /* List of weak references */ } DBObject; typedef struct DBCursorObject { PyObject_HEAD DBC* dbc; struct DBCursorObject **sibling_prev_p; struct DBCursorObject *sibling_next; struct DBCursorObject **sibling_prev_p_txn; struct DBCursorObject *sibling_next_txn; DBObject* mydb; struct DBTxnObject *txn; PyObject *in_weakreflist; /* List of weak references */ } DBCursorObject; typedef struct DBTxnObject { PyObject_HEAD DB_TXN* txn; DBEnvObject* env; int flag_prepare; struct DBTxnObject *parent_txn; struct DBTxnObject **sibling_prev_p; struct DBTxnObject *sibling_next; struct DBTxnObject *children_txns; struct DBObject *children_dbs; struct DBSequenceObject *children_sequences; struct DBCursorObject *children_cursors; PyObject *in_weakreflist; /* List of weak references */ } DBTxnObject; typedef struct DBLogCursorObject { PyObject_HEAD DB_LOGC* logc; DBEnvObject* env; struct DBLogCursorObject **sibling_prev_p; struct DBLogCursorObject *sibling_next; PyObject *in_weakreflist; /* List of weak references */ } DBLogCursorObject; #if (DBVER >= 52) typedef struct DBSiteObject { PyObject_HEAD DB_SITE *site; DBEnvObject *env; struct DBSiteObject **sibling_prev_p; struct DBSiteObject *sibling_next; PyObject 
*in_weakreflist; /* List of weak references */ } DBSiteObject; #endif typedef struct { PyObject_HEAD DB_LOCK lock; int lock_initialized; /* Signal if we actually have a lock */ PyObject *in_weakreflist; /* List of weak references */ } DBLockObject; typedef struct DBSequenceObject { PyObject_HEAD DB_SEQUENCE* sequence; DBObject* mydb; struct DBTxnObject *txn; struct DBSequenceObject **sibling_prev_p; struct DBSequenceObject *sibling_next; struct DBSequenceObject **sibling_prev_p_txn; struct DBSequenceObject *sibling_next_txn; PyObject *in_weakreflist; /* List of weak references */ } DBSequenceObject; /* API structure for use by C code */ /* To access the structure from an external module, use code like the following (error checking missed out for clarity): // If you are using Python before 2.7: BSDDB_api* bsddb_api; PyObject* mod; PyObject* cobj; mod = PyImport_ImportModule("bsddb3._pybsddb"); cobj = PyObject_GetAttrString(mod, "api"); bsddb_api = (BSDDB_api*)PyCObject_AsVoidPtr(cobj); Py_DECREF(cobj); Py_DECREF(mod); // If you are using Python 2.7 or up: (except Python 3.0, unsupported) BSDDB_api* bsddb_api; bsddb_api = (void **)PyCapsule_Import("bsddb3._pybsddb.api", 1); Check "api_version" number before trying to use the API. The structure's members must not be changed. */ #define PYBSDDB_API_VERSION 1 typedef struct { unsigned int api_version; /* Type objects */ PyTypeObject* db_type; PyTypeObject* dbcursor_type; PyTypeObject* dblogcursor_type; PyTypeObject* dbenv_type; PyTypeObject* dbtxn_type; PyTypeObject* dblock_type; PyTypeObject* dbsequence_type; /* Functions */ int (*makeDBError)(int err); } BSDDB_api; #ifndef COMPILING_BSDDB_C /* If not inside _bsddb.c, define type check macros that use the api structure. The calling code must have a value named bsddb_api pointing to the api structure. */ #define DBObject_Check(v) ((v)->ob_type == bsddb_api->db_type) #define DBCursorObject_Check(v) ((v)->ob_type == bsddb_api->dbcursor_type) #define DBEnvObject_Check(v) ((v)->ob_type == bsddb_api->dbenv_type) #define DBTxnObject_Check(v) ((v)->ob_type == bsddb_api->dbtxn_type) #define DBLockObject_Check(v) ((v)->ob_type == bsddb_api->dblock_type) #define DBSequenceObject_Check(v) \ ((bsddb_api->dbsequence_type) && \ ((v)->ob_type == bsddb_api->dbsequence_type)) #endif /* COMPILING_BSDDB_C */ #endif /* _BSDDB_H_ */ bsddb3-6.1.0/Lib/0000755000000000000000000000000012363235112013321 5ustar rootroot00000000000000bsddb3-6.1.0/Lib/bsddb/0000755000000000000000000000000012363235112014377 5ustar rootroot00000000000000bsddb3-6.1.0/Lib/bsddb/dbrecio.py0000644000000000000000000001227412247674231016400 0ustar rootroot00000000000000 """ File-like objects that read from or write to a bsddb record. This implements (nearly) all stdio methods. f = DBRecIO(db, key, txn=None) f.close() # explicitly release resources held flag = f.isatty() # always false pos = f.tell() # get current position f.seek(pos) # set current position f.seek(pos, mode) # mode 0: absolute; 1: relative; 2: relative to EOF buf = f.read() # read until EOF buf = f.read(n) # read up to n bytes f.truncate([size]) # truncate file at to at most size (default: current pos) f.write(buf) # write at current position f.writelines(list) # for line in list: f.write(line) Notes: - fileno() is left unimplemented so that code which uses it triggers an exception early. - There's a simple test set (see end of this file) - not yet updated for DBRecIO. - readline() is not implemented yet. 
From: Itamar Shtull-Trauring """ import errno import string class DBRecIO: def __init__(self, db, key, txn=None): self.db = db self.key = key self.txn = txn self.len = None self.pos = 0 self.closed = 0 self.softspace = 0 def close(self): if not self.closed: self.closed = 1 del self.db, self.txn def isatty(self): if self.closed: raise ValueError, "I/O operation on closed file" return 0 def seek(self, pos, mode = 0): if self.closed: raise ValueError, "I/O operation on closed file" if mode == 1: pos = pos + self.pos elif mode == 2: pos = pos + self.len self.pos = max(0, pos) def tell(self): if self.closed: raise ValueError, "I/O operation on closed file" return self.pos def read(self, n = -1): if self.closed: raise ValueError, "I/O operation on closed file" if n < 0: newpos = self.len else: newpos = min(self.pos+n, self.len) dlen = newpos - self.pos r = self.db.get(self.key, txn=self.txn, dlen=dlen, doff=self.pos) self.pos = newpos return r __fixme = """ def readline(self, length=None): if self.closed: raise ValueError, "I/O operation on closed file" if self.buflist: self.buf = self.buf + string.joinfields(self.buflist, '') self.buflist = [] i = string.find(self.buf, '\n', self.pos) if i < 0: newpos = self.len else: newpos = i+1 if length is not None: if self.pos + length < newpos: newpos = self.pos + length r = self.buf[self.pos:newpos] self.pos = newpos return r def readlines(self, sizehint = 0): total = 0 lines = [] line = self.readline() while line: lines.append(line) total += len(line) if 0 < sizehint <= total: break line = self.readline() return lines """ def truncate(self, size=None): if self.closed: raise ValueError, "I/O operation on closed file" if size is None: size = self.pos elif size < 0: raise IOError(errno.EINVAL, "Negative size not allowed") elif size < self.pos: self.pos = size self.db.put(self.key, "", txn=self.txn, dlen=self.len-size, doff=size) def write(self, s): if self.closed: raise ValueError, "I/O operation on closed file" if not s: return if self.pos > self.len: self.buflist.append('\0'*(self.pos - self.len)) self.len = self.pos newpos = self.pos + len(s) self.db.put(self.key, s, txn=self.txn, dlen=len(s), doff=self.pos) self.pos = newpos def writelines(self, list): self.write(string.joinfields(list, '')) def flush(self): if self.closed: raise ValueError, "I/O operation on closed file" """ # A little test suite def _test(): import sys if sys.argv[1:]: file = sys.argv[1] else: file = '/etc/passwd' lines = open(file, 'r').readlines() text = open(file, 'r').read() f = StringIO() for line in lines[:-2]: f.write(line) f.writelines(lines[-2:]) if f.getvalue() != text: raise RuntimeError, 'write failed' length = f.tell() print 'File length =', length f.seek(len(lines[0])) f.write(lines[1]) f.seek(0) print 'First line =', repr(f.readline()) here = f.tell() line = f.readline() print 'Second line =', repr(line) f.seek(-len(line), 1) line2 = f.read(len(line)) if line != line2: raise RuntimeError, 'bad result after seek back' f.seek(len(line2), 1) list = f.readlines() line = list[-1] f.seek(f.tell() - len(line)) line2 = f.read() if line != line2: raise RuntimeError, 'bad result after seek back from EOF' print 'Read', len(list), 'more lines' print 'File length =', f.tell() if f.tell() != length: raise RuntimeError, 'bad length' f.close() if __name__ == '__main__': _test() """ bsddb3-6.1.0/Lib/bsddb/dbtables.py0000644000000000000000000007262712363167637016567 0ustar rootroot00000000000000#----------------------------------------------------------------------- # # Copyright (C) 
2000, 2001 by Autonomous Zone Industries # Copyright (C) 2002 Gregory P. Smith # # License: This is free software. You may use this software for any # purpose including modification/redistribution, so long as # this header remains intact and that you do not claim any # rights of ownership or authorship of this software. This # software has been tested, but no warranty is expressed or # implied. # # -- Gregory P. Smith # This provides a simple database table interface built on top of # the Python Berkeley DB 3 interface. # import re import sys import copy import random import struct if sys.version_info[0] >= 3 : import pickle else : import warnings with warnings.catch_warnings() : warnings.filterwarnings("ignore", category=DeprecationWarning) import cPickle as pickle from bsddb3 import db class TableDBError(StandardError): pass class TableAlreadyExists(TableDBError): pass class Cond: """This condition matches everything""" def __call__(self, s): return 1 class ExactCond(Cond): """Acts as an exact match condition function""" def __init__(self, strtomatch): self.strtomatch = strtomatch def __call__(self, s): return s == self.strtomatch class PrefixCond(Cond): """Acts as a condition function for matching a string prefix""" def __init__(self, prefix): self.prefix = prefix def __call__(self, s): return s[:len(self.prefix)] == self.prefix class PostfixCond(Cond): """Acts as a condition function for matching a string postfix""" def __init__(self, postfix): self.postfix = postfix def __call__(self, s): return s[-len(self.postfix):] == self.postfix class LikeCond(Cond): """ Acts as a function that will match using an SQL 'LIKE' style string. Case insensitive and % signs are wild cards. This isn't perfect but it should work for the simple common cases. """ def __init__(self, likestr, re_flags=re.IGNORECASE): # escape python re characters chars_to_escape = '.*+()[]?' for char in chars_to_escape : likestr = likestr.replace(char, '\\'+char) # convert %s to wildcards self.likestr = likestr.replace('%', '.*') self.re = re.compile('^'+self.likestr+'$', re_flags) def __call__(self, s): return self.re.match(s) # # keys used to store database metadata # _table_names_key = '__TABLE_NAMES__' # list of the tables in this db _columns = '._COLUMNS__' # table_name+this key contains a list of columns def _columns_key(table): return table + _columns # # these keys are found within table sub databases # _data = '._DATA_.' # this+column+this+rowid key contains table data _rowid = '._ROWID_.' # this+rowid+this key contains a unique entry for each # row in the table. (no data is stored) _rowid_str_len = 8 # length in bytes of the unique rowid strings def _data_key(table, col, rowid): return table + _data + col + _data + rowid def _search_col_data_key(table, col): return table + _data + col + _data def _search_all_data_key(table): return table + _data def _rowid_key(table, rowid): return table + _rowid + rowid + _rowid def _search_rowid_key(table): return table + _rowid def contains_metastrings(s) : """Verify that the given string does not contain any metadata strings that might interfere with dbtables database operation. """ if (s.find(_table_names_key) >= 0 or s.find(_columns) >= 0 or s.find(_data) >= 0 or s.find(_rowid) >= 0): # Then return 1 else: return 0 class bsdTableDB : def __init__(self, filename, dbhome, create=0, truncate=0, mode=0600, recover=0, dbflags=0): """bsdTableDB(filename, dbhome, create=0, truncate=0, mode=0600) Open database name in the dbhome Berkeley DB directory. 
Use keyword arguments when calling this constructor. """ self.db = None myflags = db.DB_THREAD if create: myflags |= db.DB_CREATE flagsforenv = (db.DB_INIT_MPOOL | db.DB_INIT_LOCK | db.DB_INIT_LOG | db.DB_INIT_TXN | dbflags) # DB_AUTO_COMMIT isn't a valid flag for env.open() try: dbflags |= db.DB_AUTO_COMMIT except AttributeError: pass if recover: flagsforenv = flagsforenv | db.DB_RECOVER self.env = db.DBEnv() # enable auto deadlock avoidance self.env.set_lk_detect(db.DB_LOCK_DEFAULT) self.env.open(dbhome, myflags | flagsforenv) if truncate: myflags |= db.DB_TRUNCATE self.db = db.DB(self.env) # this code relies on DBCursor.set* methods to raise exceptions # rather than returning None self.db.set_get_returns_none(1) # allow duplicate entries [warning: be careful w/ metadata] self.db.set_flags(db.DB_DUP) self.db.open(filename, db.DB_BTREE, dbflags | myflags, mode) self.dbfilename = filename if sys.version_info[0] >= 3 : class cursor_py3k(object) : def __init__(self, dbcursor) : self._dbcursor = dbcursor def close(self) : return self._dbcursor.close() def set_range(self, search) : v = self._dbcursor.set_range(bytes(search, "iso8859-1")) if v is not None : v = (v[0].decode("iso8859-1"), v[1].decode("iso8859-1")) return v def __next__(self) : v = getattr(self._dbcursor, "next")() if v is not None : v = (v[0].decode("iso8859-1"), v[1].decode("iso8859-1")) return v class db_py3k(object) : def __init__(self, db) : self._db = db def cursor(self, txn=None) : return cursor_py3k(self._db.cursor(txn=txn)) def has_key(self, key, txn=None) : return getattr(self._db,"has_key")(bytes(key, "iso8859-1"), txn=txn) def put(self, key, value, flags=0, txn=None) : key = bytes(key, "iso8859-1") if value is not None : value = bytes(value, "iso8859-1") return self._db.put(key, value, flags=flags, txn=txn) def put_bytes(self, key, value, txn=None) : key = bytes(key, "iso8859-1") return self._db.put(key, value, txn=txn) def get(self, key, txn=None, flags=0) : key = bytes(key, "iso8859-1") v = self._db.get(key, txn=txn, flags=flags) if v is not None : v = v.decode("iso8859-1") return v def get_bytes(self, key, txn=None, flags=0) : key = bytes(key, "iso8859-1") return self._db.get(key, txn=txn, flags=flags) def delete(self, key, txn=None) : key = bytes(key, "iso8859-1") return self._db.delete(key, txn=txn) def close (self) : return self._db.close() self.db = db_py3k(self.db) else : # Python 2.x pass # Initialize the table names list if this is a new database txn = self.env.txn_begin() try: if not getattr(self.db, "has_key")(_table_names_key, txn): getattr(self.db, "put_bytes", self.db.put) \ (_table_names_key, pickle.dumps([], 1), txn=txn) # Yes, bare except except: txn.abort() raise else: txn.commit() # TODO verify more of the database's metadata? self.__tablecolumns = {} def __del__(self): self.close() def close(self): if self.db is not None: self.db.close() self.db = None if self.env is not None: self.env.close() self.env = None def checkpoint(self, mins=0): self.env.txn_checkpoint(mins) def sync(self): self.db.sync() def _db_print(self) : """Print the database to stdout for debugging""" print "******** Printing raw database for debugging ********" cur = self.db.cursor() try: key, data = cur.first() while 1: print repr({key: data}) next = cur.next() if next: key, data = next else: cur.close() return except db.DBNotFoundError: cur.close() def CreateTable(self, table, columns): """CreateTable(table, columns) - Create a new table in the database. raises TableDBError if it already exists or for other DB errors. 
""" assert isinstance(columns, list) txn = None try: # checking sanity of the table and column names here on # table creation will prevent problems elsewhere. if contains_metastrings(table): raise ValueError( "bad table name: contains reserved metastrings") for column in columns : if contains_metastrings(column): raise ValueError( "bad column name: contains reserved metastrings") columnlist_key = _columns_key(table) if getattr(self.db, "has_key")(columnlist_key): raise TableAlreadyExists, "table already exists" txn = self.env.txn_begin() # store the table's column info getattr(self.db, "put_bytes", self.db.put)(columnlist_key, pickle.dumps(columns, 1), txn=txn) # add the table name to the tablelist tablelist = pickle.loads(getattr(self.db, "get_bytes", self.db.get) (_table_names_key, txn=txn, flags=db.DB_RMW)) tablelist.append(table) # delete 1st, in case we opened with DB_DUP self.db.delete(_table_names_key, txn=txn) getattr(self.db, "put_bytes", self.db.put)(_table_names_key, pickle.dumps(tablelist, 1), txn=txn) txn.commit() txn = None except db.DBError, dberror: if txn: txn.abort() if sys.version_info < (2, 6) : raise TableDBError, dberror[1] else : raise TableDBError, dberror.args[1] def ListTableColumns(self, table): """Return a list of columns in the given table. [] if the table doesn't exist. """ assert isinstance(table, str) if contains_metastrings(table): raise ValueError, "bad table name: contains reserved metastrings" columnlist_key = _columns_key(table) if not getattr(self.db, "has_key")(columnlist_key): return [] pickledcolumnlist = getattr(self.db, "get_bytes", self.db.get)(columnlist_key) if pickledcolumnlist: return pickle.loads(pickledcolumnlist) else: return [] def ListTables(self): """Return a list of tables in this database.""" pickledtablelist = getattr(self.db, "get_bytes", self.db.get)(_table_names_key) if pickledtablelist: return pickle.loads(pickledtablelist) else: return [] def CreateOrExtendTable(self, table, columns): """CreateOrExtendTable(table, columns) Create a new table in the database. If a table of this name already exists, extend it to have any additional columns present in the given list as well as all of its current columns. 
""" assert isinstance(columns, list) try: self.CreateTable(table, columns) except TableAlreadyExists: # the table already existed, add any new columns txn = None try: columnlist_key = _columns_key(table) txn = self.env.txn_begin() # load the current column list oldcolumnlist = pickle.loads( getattr(self.db, "get_bytes", self.db.get)(columnlist_key, txn=txn, flags=db.DB_RMW)) # create a hash table for fast lookups of column names in the # loop below oldcolumnhash = {} for c in oldcolumnlist: oldcolumnhash[c] = c # create a new column list containing both the old and new # column names newcolumnlist = copy.copy(oldcolumnlist) for c in columns: if not c in oldcolumnhash: newcolumnlist.append(c) # store the table's new extended column list if newcolumnlist != oldcolumnlist : # delete the old one first since we opened with DB_DUP self.db.delete(columnlist_key, txn=txn) getattr(self.db, "put_bytes", self.db.put)(columnlist_key, pickle.dumps(newcolumnlist, 1), txn=txn) txn.commit() txn = None self.__load_column_info(table) except db.DBError, dberror: if txn: txn.abort() if sys.version_info < (2, 6) : raise TableDBError, dberror[1] else : raise TableDBError, dberror.args[1] def __load_column_info(self, table) : """initialize the self.__tablecolumns dict""" # check the column names try: tcolpickles = getattr(self.db, "get_bytes", self.db.get)(_columns_key(table)) except db.DBNotFoundError: raise TableDBError, "unknown table: %r" % (table,) if not tcolpickles: raise TableDBError, "unknown table: %r" % (table,) self.__tablecolumns[table] = pickle.loads(tcolpickles) def __new_rowid(self, table, txn) : """Create a new unique row identifier""" unique = 0 while not unique: # Generate a random 64-bit row ID string # (note: might have <64 bits of true randomness # but it's plenty for our database id needs!) blist = [] for x in xrange(_rowid_str_len): blist.append(random.randint(0,255)) newid = struct.pack('B'*_rowid_str_len, *blist) if sys.version_info[0] >= 3 : newid = newid.decode("iso8859-1") # 8 bits # Guarantee uniqueness by adding this key to the database try: self.db.put(_rowid_key(table, newid), None, txn=txn, flags=db.DB_NOOVERWRITE) except db.DBKeyExistError: pass else: unique = 1 return newid def Insert(self, table, rowdict) : """Insert(table, datadict) - Insert a new row into the table using the keys+values from rowdict as the column values. """ txn = None try: if not getattr(self.db, "has_key")(_columns_key(table)): raise TableDBError, "unknown table" # check the validity of each column name if not table in self.__tablecolumns: self.__load_column_info(table) for column in rowdict.keys() : if not self.__tablecolumns[table].count(column): raise TableDBError, "unknown column: %r" % (column,) # get a unique row identifier for this row txn = self.env.txn_begin() rowid = self.__new_rowid(table, txn=txn) # insert the row values into the table database for column, dataitem in rowdict.items(): # store the value self.db.put(_data_key(table, column, rowid), dataitem, txn=txn) txn.commit() txn = None except db.DBError, dberror: # WIBNI we could just abort the txn and re-raise the exception? # But no, because TableDBError is not related to DBError via # inheritance, so it would be backwards incompatible. Do the next # best thing. 
info = sys.exc_info() if txn: txn.abort() self.db.delete(_rowid_key(table, rowid)) if sys.version_info < (2, 6) : raise TableDBError, dberror[1], info[2] else : raise TableDBError, dberror.args[1], info[2] def Modify(self, table, conditions={}, mappings={}): """Modify(table, conditions={}, mappings={}) - Modify items in rows matching 'conditions' using mapping functions in 'mappings' * table - the table name * conditions - a dictionary keyed on column names containing a condition callable expecting the data string as an argument and returning a boolean. * mappings - a dictionary keyed on column names containing a condition callable expecting the data string as an argument and returning the new string for that column. """ try: matching_rowids = self.__Select(table, [], conditions) # modify only requested columns columns = mappings.keys() for rowid in matching_rowids.keys(): txn = None try: for column in columns: txn = self.env.txn_begin() # modify the requested column try: dataitem = self.db.get( _data_key(table, column, rowid), txn=txn) self.db.delete( _data_key(table, column, rowid), txn=txn) except db.DBNotFoundError: # XXXXXXX row key somehow didn't exist, assume no # error dataitem = None dataitem = mappings[column](dataitem) if dataitem is not None: self.db.put( _data_key(table, column, rowid), dataitem, txn=txn) txn.commit() txn = None # catch all exceptions here since we call unknown callables except: if txn: txn.abort() raise except db.DBError, dberror: if sys.version_info < (2, 6) : raise TableDBError, dberror[1] else : raise TableDBError, dberror.args[1] def Delete(self, table, conditions={}): """Delete(table, conditions) - Delete items matching the given conditions from the table. * conditions - a dictionary keyed on column names containing condition functions expecting the data string as an argument and returning a boolean. """ try: matching_rowids = self.__Select(table, [], conditions) # delete row data from all columns columns = self.__tablecolumns[table] for rowid in matching_rowids.keys(): txn = None try: txn = self.env.txn_begin() for column in columns: # delete the data key try: self.db.delete(_data_key(table, column, rowid), txn=txn) except db.DBNotFoundError: # XXXXXXX column may not exist, assume no error pass try: self.db.delete(_rowid_key(table, rowid), txn=txn) except db.DBNotFoundError: # XXXXXXX row key somehow didn't exist, assume no error pass txn.commit() txn = None except db.DBError, dberror: if txn: txn.abort() raise except db.DBError, dberror: if sys.version_info < (2, 6) : raise TableDBError, dberror[1] else : raise TableDBError, dberror.args[1] def Select(self, table, columns, conditions={}): """Select(table, columns, conditions) - retrieve specific row data Returns a list of row column->value mapping dictionaries. * columns - a list of which column data to return. If columns is None, all columns will be returned. * conditions - a dictionary keyed on column names containing callable conditions expecting the data string as an argument and returning a boolean. 
""" try: if not table in self.__tablecolumns: self.__load_column_info(table) if columns is None: columns = self.__tablecolumns[table] matching_rowids = self.__Select(table, columns, conditions) except db.DBError, dberror: if sys.version_info < (2, 6) : raise TableDBError, dberror[1] else : raise TableDBError, dberror.args[1] # return the matches as a list of dictionaries return matching_rowids.values() def __Select(self, table, columns, conditions): """__Select() - Used to implement Select and Delete (above) Returns a dictionary keyed on rowids containing dicts holding the row data for columns listed in the columns param that match the given conditions. * conditions is a dictionary keyed on column names containing callable conditions expecting the data string as an argument and returning a boolean. """ # check the validity of each column name if not table in self.__tablecolumns: self.__load_column_info(table) if columns is None: columns = self.tablecolumns[table] for column in (columns + conditions.keys()): if not self.__tablecolumns[table].count(column): raise TableDBError, "unknown column: %r" % (column,) # keyed on rows that match so far, containings dicts keyed on # column names containing the data for that row and column. matching_rowids = {} # keys are rowids that do not match rejected_rowids = {} # attempt to sort the conditions in such a way as to minimize full # column lookups def cmp_conditions(atuple, btuple): a = atuple[1] b = btuple[1] if type(a) is type(b): # Needed for python 3. "cmp" vanished in 3.0.1 def cmp(a, b) : if a==b : return 0 if a 0: for rowid, rowdata in matching_rowids.items(): for column in columns: if column in rowdata: continue try: rowdata[column] = self.db.get( _data_key(table, column, rowid)) except db.DBError, dberror: if sys.version_info < (2, 6) : if dberror[0] != db.DB_NOTFOUND: raise else : if dberror.args[0] != db.DB_NOTFOUND: raise rowdata[column] = None # return the matches return matching_rowids def Drop(self, table): """Remove an entire table from the database""" txn = None try: txn = self.env.txn_begin() # delete the column list self.db.delete(_columns_key(table), txn=txn) cur = self.db.cursor(txn) # delete all keys containing this tables column and row info table_key = _search_all_data_key(table) while 1: try: key, data = cur.set_range(table_key) except db.DBNotFoundError: break # only delete items in this table if key[:len(table_key)] != table_key: break cur.delete() # delete all rowids used by this table table_key = _search_rowid_key(table) while 1: try: key, data = cur.set_range(table_key) except db.DBNotFoundError: break # only delete items in this table if key[:len(table_key)] != table_key: break cur.delete() cur.close() # delete the tablename from the table name list tablelist = pickle.loads( getattr(self.db, "get_bytes", self.db.get)(_table_names_key, txn=txn, flags=db.DB_RMW)) try: tablelist.remove(table) except ValueError: # hmm, it wasn't there, oh well, that's what we want. 
pass # delete 1st, incase we opened with DB_DUP self.db.delete(_table_names_key, txn=txn) getattr(self.db, "put_bytes", self.db.put)(_table_names_key, pickle.dumps(tablelist, 1), txn=txn) txn.commit() txn = None if table in self.__tablecolumns: del self.__tablecolumns[table] except db.DBError, dberror: if txn: txn.abort() raise TableDBError(dberror.args[1]) bsddb3-6.1.0/Lib/bsddb/dbobj.py0000644000000000000000000002527512363167637016064 0ustar rootroot00000000000000#------------------------------------------------------------------------- # This file contains real Python object wrappers for DB and DBEnv # C "objects" that can be usefully subclassed. The previous SWIG # based interface allowed this thanks to SWIG's shadow classes. # -- Gregory P. Smith #------------------------------------------------------------------------- # # (C) Copyright 2001 Autonomous Zone Industries # # License: This is free software. You may use this software for any # purpose including modification/redistribution, so long as # this header remains intact and that you do not claim any # rights of ownership or authorship of this software. This # software has been tested, but no warranty is expressed or # implied. # # # TODO it would be *really nice* to have an automatic shadow class populator # so that new methods don't need to be added here manually after being # added to _bsddb.c. # import sys absolute_import = (sys.version_info[0] >= 3) if absolute_import : from . import db else : import db import collections MutableMapping = collections.MutableMapping class DBEnv: def __init__(self, *args, **kwargs): self._cobj = db.DBEnv(*args, **kwargs) def close(self, *args, **kwargs): return self._cobj.close(*args, **kwargs) def open(self, *args, **kwargs): return self._cobj.open(*args, **kwargs) def remove(self, *args, **kwargs): return self._cobj.remove(*args, **kwargs) def set_shm_key(self, *args, **kwargs): return self._cobj.set_shm_key(*args, **kwargs) def set_cachesize(self, *args, **kwargs): return self._cobj.set_cachesize(*args, **kwargs) def set_data_dir(self, *args, **kwargs): return self._cobj.set_data_dir(*args, **kwargs) def set_flags(self, *args, **kwargs): return self._cobj.set_flags(*args, **kwargs) def set_lg_bsize(self, *args, **kwargs): return self._cobj.set_lg_bsize(*args, **kwargs) def set_lg_dir(self, *args, **kwargs): return self._cobj.set_lg_dir(*args, **kwargs) def set_lg_max(self, *args, **kwargs): return self._cobj.set_lg_max(*args, **kwargs) def set_lk_detect(self, *args, **kwargs): return self._cobj.set_lk_detect(*args, **kwargs) def set_lk_max_locks(self, *args, **kwargs): return self._cobj.set_lk_max_locks(*args, **kwargs) def set_lk_max_lockers(self, *args, **kwargs): return self._cobj.set_lk_max_lockers(*args, **kwargs) def set_lk_max_objects(self, *args, **kwargs): return self._cobj.set_lk_max_objects(*args, **kwargs) def set_mp_mmapsize(self, *args, **kwargs): return self._cobj.set_mp_mmapsize(*args, **kwargs) def set_timeout(self, *args, **kwargs): return self._cobj.set_timeout(*args, **kwargs) def set_tmp_dir(self, *args, **kwargs): return self._cobj.set_tmp_dir(*args, **kwargs) def txn_begin(self, *args, **kwargs): return self._cobj.txn_begin(*args, **kwargs) def txn_checkpoint(self, *args, **kwargs): return self._cobj.txn_checkpoint(*args, **kwargs) def txn_stat(self, *args, **kwargs): return self._cobj.txn_stat(*args, **kwargs) def set_tx_max(self, *args, **kwargs): return self._cobj.set_tx_max(*args, **kwargs) def set_tx_timestamp(self, *args, **kwargs): return 
self._cobj.set_tx_timestamp(*args, **kwargs) def lock_detect(self, *args, **kwargs): return self._cobj.lock_detect(*args, **kwargs) def lock_get(self, *args, **kwargs): return self._cobj.lock_get(*args, **kwargs) def lock_id(self, *args, **kwargs): return self._cobj.lock_id(*args, **kwargs) def lock_put(self, *args, **kwargs): return self._cobj.lock_put(*args, **kwargs) def lock_stat(self, *args, **kwargs): return self._cobj.lock_stat(*args, **kwargs) def log_archive(self, *args, **kwargs): return self._cobj.log_archive(*args, **kwargs) def set_get_returns_none(self, *args, **kwargs): return self._cobj.set_get_returns_none(*args, **kwargs) def log_stat(self, *args, **kwargs): return self._cobj.log_stat(*args, **kwargs) def dbremove(self, *args, **kwargs): return self._cobj.dbremove(*args, **kwargs) def dbrename(self, *args, **kwargs): return self._cobj.dbrename(*args, **kwargs) def set_encrypt(self, *args, **kwargs): return self._cobj.set_encrypt(*args, **kwargs) def fileid_reset(self, *args, **kwargs): return self._cobj.fileid_reset(*args, **kwargs) def lsn_reset(self, *args, **kwargs): return self._cobj.lsn_reset(*args, **kwargs) class DB(MutableMapping): def __init__(self, dbenv, *args, **kwargs): # give it the proper DBEnv C object that its expecting self._cobj = db.DB(*((dbenv._cobj,) + args), **kwargs) # TODO are there other dict methods that need to be overridden? def __len__(self): return len(self._cobj) def __getitem__(self, arg): return self._cobj[arg] def __setitem__(self, key, value): self._cobj[key] = value def __delitem__(self, arg): del self._cobj[arg] def __iter__(self) : return self._cobj.__iter__() def append(self, *args, **kwargs): return self._cobj.append(*args, **kwargs) def associate(self, *args, **kwargs): return self._cobj.associate(*args, **kwargs) def close(self, *args, **kwargs): return self._cobj.close(*args, **kwargs) def consume(self, *args, **kwargs): return self._cobj.consume(*args, **kwargs) def consume_wait(self, *args, **kwargs): return self._cobj.consume_wait(*args, **kwargs) def cursor(self, *args, **kwargs): return self._cobj.cursor(*args, **kwargs) def delete(self, *args, **kwargs): return self._cobj.delete(*args, **kwargs) def fd(self, *args, **kwargs): return self._cobj.fd(*args, **kwargs) def get(self, *args, **kwargs): return self._cobj.get(*args, **kwargs) def pget(self, *args, **kwargs): return self._cobj.pget(*args, **kwargs) def get_both(self, *args, **kwargs): return self._cobj.get_both(*args, **kwargs) def get_byteswapped(self, *args, **kwargs): return self._cobj.get_byteswapped(*args, **kwargs) def get_size(self, *args, **kwargs): return self._cobj.get_size(*args, **kwargs) def get_type(self, *args, **kwargs): return self._cobj.get_type(*args, **kwargs) def join(self, *args, **kwargs): return self._cobj.join(*args, **kwargs) def key_range(self, *args, **kwargs): return self._cobj.key_range(*args, **kwargs) def has_key(self, *args, **kwargs): return self._cobj.has_key(*args, **kwargs) def items(self, *args, **kwargs): return self._cobj.items(*args, **kwargs) def keys(self, *args, **kwargs): return self._cobj.keys(*args, **kwargs) def open(self, *args, **kwargs): return self._cobj.open(*args, **kwargs) def put(self, *args, **kwargs): return self._cobj.put(*args, **kwargs) def remove(self, *args, **kwargs): return self._cobj.remove(*args, **kwargs) def rename(self, *args, **kwargs): return self._cobj.rename(*args, **kwargs) def set_bt_minkey(self, *args, **kwargs): return self._cobj.set_bt_minkey(*args, **kwargs) def set_bt_compare(self, 
*args, **kwargs): return self._cobj.set_bt_compare(*args, **kwargs) def set_cachesize(self, *args, **kwargs): return self._cobj.set_cachesize(*args, **kwargs) def set_dup_compare(self, *args, **kwargs) : return self._cobj.set_dup_compare(*args, **kwargs) def set_flags(self, *args, **kwargs): return self._cobj.set_flags(*args, **kwargs) def set_h_ffactor(self, *args, **kwargs): return self._cobj.set_h_ffactor(*args, **kwargs) def set_h_nelem(self, *args, **kwargs): return self._cobj.set_h_nelem(*args, **kwargs) def set_lorder(self, *args, **kwargs): return self._cobj.set_lorder(*args, **kwargs) def set_pagesize(self, *args, **kwargs): return self._cobj.set_pagesize(*args, **kwargs) def set_re_delim(self, *args, **kwargs): return self._cobj.set_re_delim(*args, **kwargs) def set_re_len(self, *args, **kwargs): return self._cobj.set_re_len(*args, **kwargs) def set_re_pad(self, *args, **kwargs): return self._cobj.set_re_pad(*args, **kwargs) def set_re_source(self, *args, **kwargs): return self._cobj.set_re_source(*args, **kwargs) def set_q_extentsize(self, *args, **kwargs): return self._cobj.set_q_extentsize(*args, **kwargs) def stat(self, *args, **kwargs): return self._cobj.stat(*args, **kwargs) def sync(self, *args, **kwargs): return self._cobj.sync(*args, **kwargs) def type(self, *args, **kwargs): return self._cobj.type(*args, **kwargs) def upgrade(self, *args, **kwargs): return self._cobj.upgrade(*args, **kwargs) def values(self, *args, **kwargs): return self._cobj.values(*args, **kwargs) def verify(self, *args, **kwargs): return self._cobj.verify(*args, **kwargs) def set_get_returns_none(self, *args, **kwargs): return self._cobj.set_get_returns_none(*args, **kwargs) def set_encrypt(self, *args, **kwargs): return self._cobj.set_encrypt(*args, **kwargs) class DBSequence: def __init__(self, *args, **kwargs): self._cobj = db.DBSequence(*args, **kwargs) def close(self, *args, **kwargs): return self._cobj.close(*args, **kwargs) def get(self, *args, **kwargs): return self._cobj.get(*args, **kwargs) def get_dbp(self, *args, **kwargs): return self._cobj.get_dbp(*args, **kwargs) def get_key(self, *args, **kwargs): return self._cobj.get_key(*args, **kwargs) def init_value(self, *args, **kwargs): return self._cobj.init_value(*args, **kwargs) def open(self, *args, **kwargs): return self._cobj.open(*args, **kwargs) def remove(self, *args, **kwargs): return self._cobj.remove(*args, **kwargs) def stat(self, *args, **kwargs): return self._cobj.stat(*args, **kwargs) def set_cachesize(self, *args, **kwargs): return self._cobj.set_cachesize(*args, **kwargs) def set_flags(self, *args, **kwargs): return self._cobj.set_flags(*args, **kwargs) def set_range(self, *args, **kwargs): return self._cobj.set_range(*args, **kwargs) def get_cachesize(self, *args, **kwargs): return self._cobj.get_cachesize(*args, **kwargs) def get_flags(self, *args, **kwargs): return self._cobj.get_flags(*args, **kwargs) def get_range(self, *args, **kwargs): return self._cobj.get_range(*args, **kwargs) bsddb3-6.1.0/Lib/bsddb/__init__.py0000644000000000000000000003454512363167637016543 0ustar rootroot00000000000000#---------------------------------------------------------------------- # Copyright (c) 1999-2001, Digital Creations, Fredericksburg, VA, USA # and Andrew Kuchling. All rights reserved. 
# # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # o Redistributions of source code must retain the above copyright # notice, this list of conditions, and the disclaimer that follows. # # o Redistributions in binary form must reproduce the above copyright # notice, this list of conditions, and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # o Neither the name of Digital Creations nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY DIGITAL CREATIONS AND CONTRIBUTORS *AS # IS* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED # TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A # PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL DIGITAL # CREATIONS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS # OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR # TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE # USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH # DAMAGE. #---------------------------------------------------------------------- """Support for Berkeley DB 4.3 through 5.3 with a simple interface. For the full featured object oriented interface use the bsddb.db module instead. It mirrors the Oracle Berkeley DB C API. """ import sys absolute_import = (sys.version_info[0] >= 3) try: if absolute_import : from . import _pybsddb else : import _pybsddb from bsddb3.dbutils import DeadlockWrap as _DeadlockWrap except ImportError: # Remove ourselves from sys.modules import sys del sys.modules[__name__] raise # bsddb3 calls it db, but provide _db for backwards compatibility db = _db = _pybsddb __version__ = db.__version__ error = db.DBError # So bsddb.error will mean something... #---------------------------------------------------------------------- import sys, os from weakref import ref import collections MutableMapping = collections.MutableMapping class _iter_mixin(MutableMapping): def _make_iter_cursor(self): cur = _DeadlockWrap(self.db.cursor) key = id(cur) self._cursor_refs[key] = ref(cur, self._gen_cref_cleaner(key)) return cur def _gen_cref_cleaner(self, key): # use generate the function for the weakref callback here # to ensure that we do not hold a strict reference to cur # in the callback. return lambda ref: self._cursor_refs.pop(key, None) def __iter__(self): self._kill_iteration = False self._in_iter += 1 try: try: cur = self._make_iter_cursor() # FIXME-20031102-greg: race condition. cursor could # be closed by another thread before this call. # since we're only returning keys, we call the cursor # methods with flags=0, dlen=0, dofs=0 key = _DeadlockWrap(cur.first, 0,0,0)[0] yield key next = getattr(cur, "next") while 1: try: key = _DeadlockWrap(next, 0,0,0)[0] yield key except _db.DBCursorClosedError: if self._kill_iteration: raise RuntimeError('Database changed size ' 'during iteration.') cur = self._make_iter_cursor() # FIXME-20031101-greg: race condition. cursor could # be closed by another thread before this call. 
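                        # (illustrative note) The window is between
                        # _make_iter_cursor() returning and cur.set() being
                        # called: _closeCursors() running in another thread
                        # can close every cursor held in self._cursor_refs in
                        # between, which is why DBCursorClosedError is caught
                        # and the cursor re-created for the next attempt.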
_DeadlockWrap(cur.set, key,0,0,0) next = getattr(cur, "next") except _db.DBNotFoundError: pass except _db.DBCursorClosedError: # the database was modified during iteration. abort. pass finally : self._in_iter -= 1 def iteritems(self): if not self.db: return self._kill_iteration = False self._in_iter += 1 try: try: cur = self._make_iter_cursor() # FIXME-20031102-greg: race condition. cursor could # be closed by another thread before this call. kv = _DeadlockWrap(cur.first) key = kv[0] yield kv next = getattr(cur, "next") while 1: try: kv = _DeadlockWrap(next) key = kv[0] yield kv except _db.DBCursorClosedError: if self._kill_iteration: raise RuntimeError('Database changed size ' 'during iteration.') cur = self._make_iter_cursor() # FIXME-20031101-greg: race condition. cursor could # be closed by another thread before this call. _DeadlockWrap(cur.set, key,0,0,0) next = getattr(cur, "next") except _db.DBNotFoundError: pass except _db.DBCursorClosedError: # the database was modified during iteration. abort. pass finally : self._in_iter -= 1 class _DBWithCursor(_iter_mixin): """ A simple wrapper around DB that makes it look like the bsddbobject in the old module. It uses a cursor as needed to provide DB traversal. """ def __init__(self, db): self.db = db self.db.set_get_returns_none(0) # FIXME-20031101-greg: I believe there is still the potential # for deadlocks in a multithreaded environment if someone # attempts to use the any of the cursor interfaces in one # thread while doing a put or delete in another thread. The # reason is that _checkCursor and _closeCursors are not atomic # operations. Doing our own locking around self.dbc, # self.saved_dbc_key and self._cursor_refs could prevent this. # TODO: A test case demonstrating the problem needs to be written. # self.dbc is a DBCursor object used to implement the # first/next/previous/last/set_location methods. self.dbc = None self.saved_dbc_key = None # a collection of all DBCursor objects currently allocated # by the _iter_mixin interface. self._cursor_refs = {} self._in_iter = 0 self._kill_iteration = False def __del__(self): self.close() def _checkCursor(self): if self.dbc is None: self.dbc = _DeadlockWrap(self.db.cursor) if self.saved_dbc_key is not None: _DeadlockWrap(self.dbc.set, self.saved_dbc_key) self.saved_dbc_key = None # This method is needed for all non-cursor DB calls to avoid # Berkeley DB deadlocks (due to being opened with DB_INIT_LOCK # and DB_THREAD to be thread safe) when intermixing database # operations that use the cursor internally with those that don't. 
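    # (Illustrative sketch of the failure mode being avoided; the file name
    # is hypothetical.)  Without saving and closing the internal cursor
    # first, a write through the dict interface while self.dbc is positioned
    # could self-deadlock, since the cursor pins a page lock that the write
    # needs:
    #
    #   d = btopen('/tmp/spam.db', 'c')
    #   d.first()           # positions self.dbc, holding a page lock
    #   d['key'] = 'value'  # __setitem__ calls _closeCursors() first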
def _closeCursors(self, save=1): if self.dbc: c = self.dbc self.dbc = None if save: try: self.saved_dbc_key = _DeadlockWrap(c.current, 0,0,0)[0] except db.DBError: pass _DeadlockWrap(c.close) del c for cref in self._cursor_refs.values(): c = cref() if c is not None: _DeadlockWrap(c.close) def _checkOpen(self): if self.db is None: raise error, "BSDDB object has already been closed" def isOpen(self): return self.db is not None def __len__(self): self._checkOpen() return _DeadlockWrap(lambda: len(self.db)) # len(self.db) def __repr__(self) : if self.isOpen() : return repr(dict(_DeadlockWrap(self.db.items))) return repr(dict()) def __getitem__(self, key): self._checkOpen() return _DeadlockWrap(lambda: self.db[key]) # self.db[key] def __setitem__(self, key, value): self._checkOpen() self._closeCursors() if self._in_iter and key not in self: self._kill_iteration = True def wrapF(): self.db[key] = value _DeadlockWrap(wrapF) # self.db[key] = value def __delitem__(self, key): self._checkOpen() self._closeCursors() if self._in_iter and key in self: self._kill_iteration = True def wrapF(): del self.db[key] _DeadlockWrap(wrapF) # del self.db[key] def close(self): self._closeCursors(save=0) if self.dbc is not None: _DeadlockWrap(self.dbc.close) v = 0 if self.db is not None: v = _DeadlockWrap(self.db.close) self.dbc = None self.db = None return v def keys(self): self._checkOpen() return _DeadlockWrap(self.db.keys) def has_key(self, key): self._checkOpen() return _DeadlockWrap(self.db.has_key, key) def set_location(self, key): self._checkOpen() self._checkCursor() return _DeadlockWrap(self.dbc.set_range, key) def next(self): # Renamed by "2to3" self._checkOpen() self._checkCursor() rv = _DeadlockWrap(getattr(self.dbc, "next")) return rv if sys.version_info[0] >= 3 : # For "2to3" conversion next = __next__ def previous(self): self._checkOpen() self._checkCursor() rv = _DeadlockWrap(self.dbc.prev) return rv def first(self): self._checkOpen() # fix 1725856: don't needlessly try to restore our cursor position self.saved_dbc_key = None self._checkCursor() rv = _DeadlockWrap(self.dbc.first) return rv def last(self): self._checkOpen() # fix 1725856: don't needlessly try to restore our cursor position self.saved_dbc_key = None self._checkCursor() rv = _DeadlockWrap(self.dbc.last) return rv def sync(self): self._checkOpen() return _DeadlockWrap(self.db.sync) #---------------------------------------------------------------------- # Compatibility object factory functions def hashopen(file, flag='c', mode=0666, pgsize=None, ffactor=None, nelem=None, cachesize=None, lorder=None, hflags=0): flags = _checkflag(flag, file) e = _openDBEnv(cachesize) d = db.DB(e) d.set_flags(hflags) if pgsize is not None: d.set_pagesize(pgsize) if lorder is not None: d.set_lorder(lorder) if ffactor is not None: d.set_h_ffactor(ffactor) if nelem is not None: d.set_h_nelem(nelem) d.open(file, db.DB_HASH, flags, mode) return _DBWithCursor(d) #---------------------------------------------------------------------- def btopen(file, flag='c', mode=0666, btflags=0, cachesize=None, maxkeypage=None, minkeypage=None, pgsize=None, lorder=None): flags = _checkflag(flag, file) e = _openDBEnv(cachesize) d = db.DB(e) if pgsize is not None: d.set_pagesize(pgsize) if lorder is not None: d.set_lorder(lorder) d.set_flags(btflags) if minkeypage is not None: d.set_bt_minkey(minkeypage) if maxkeypage is not None: d.set_bt_maxkey(maxkeypage) d.open(file, db.DB_BTREE, flags, mode) return _DBWithCursor(d) 
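# Example (an illustrative sketch; the file name is hypothetical): the
# factory functions above return a _DBWithCursor wrapping a tuned DB handle:
#
#   import bsddb3 as bsddb
#   d = bsddb.btopen('/tmp/spam.db', 'c', cachesize=32768, pgsize=4096)
#   d['key'] = 'value'
#   print d['key']
#   d.close()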
#---------------------------------------------------------------------- def rnopen(file, flag='c', mode=0666, rnflags=0, cachesize=None, pgsize=None, lorder=None, rlen=None, delim=None, source=None, pad=None): flags = _checkflag(flag, file) e = _openDBEnv(cachesize) d = db.DB(e) if pgsize is not None: d.set_pagesize(pgsize) if lorder is not None: d.set_lorder(lorder) d.set_flags(rnflags) if delim is not None: d.set_re_delim(delim) if rlen is not None: d.set_re_len(rlen) if source is not None: d.set_re_source(source) if pad is not None: d.set_re_pad(pad) d.open(file, db.DB_RECNO, flags, mode) return _DBWithCursor(d) #---------------------------------------------------------------------- def _openDBEnv(cachesize): e = db.DBEnv() if cachesize is not None: if cachesize >= 20480: e.set_cachesize(0, cachesize) else: raise error, "cachesize must be >= 20480" e.set_lk_detect(db.DB_LOCK_DEFAULT) e.open('.', db.DB_PRIVATE | db.DB_CREATE | db.DB_THREAD | db.DB_INIT_LOCK | db.DB_INIT_MPOOL) return e def _checkflag(flag, file): if flag == 'r': flags = db.DB_RDONLY elif flag == 'rw': flags = 0 elif flag == 'w': flags = db.DB_CREATE elif flag == 'c': flags = db.DB_CREATE elif flag == 'n': flags = db.DB_CREATE #flags = db.DB_CREATE | db.DB_TRUNCATE # we used db.DB_TRUNCATE flag for this before but Berkeley DB # 4.2.52 changed to disallowed truncate with txn environments. if file is not None and os.path.isfile(file): os.unlink(file) else: raise error, "flags should be one of 'r', 'w', 'c' or 'n'" return flags | db.DB_THREAD #---------------------------------------------------------------------- # This is a silly little hack that allows apps to continue to use the # DB_THREAD flag even on systems without threads without freaking out # Berkeley DB. # # This assumes that if Python was built with thread support then # Berkeley DB was too. try: # 2to3 automatically changes "import thread" to "import _thread" import thread as T del T except ImportError: db.DB_THREAD = 0 #---------------------------------------------------------------------- bsddb3-6.1.0/Lib/bsddb/dbshelve.py0000644000000000000000000002463612363167637016600 0ustar rootroot00000000000000#!/usr/bin/env python #------------------------------------------------------------------------ # Copyright (c) 1997-2001 by Total Control Software # All Rights Reserved #------------------------------------------------------------------------ # # Module Name: dbShelve.py # # Description: A reimplementation of the standard shelve.py that # forces the use of cPickle, and DB. # # Creation Date: 11/3/97 3:39:04PM # # License: This is free software. You may use this software for any # purpose including modification/redistribution, so long as # this header remains intact and that you do not claim any # rights of ownership or authorship of this software. This # software has been tested, but no warranty is expressed or # implied. # # 13-Dec-2000: Updated to be used with the new bsddb3 package. # Added DBShelfCursor class. # #------------------------------------------------------------------------ """Manage shelves of pickled objects using bsddb database files for the storage. """ #------------------------------------------------------------------------ import sys absolute_import = (sys.version_info[0] >= 3) if absolute_import : from . 
import db else : import db if sys.version_info[0] >= 3 : import cPickle # Will be converted to "pickle" by "2to3" else : import warnings with warnings.catch_warnings() : warnings.filterwarnings("ignore", category=DeprecationWarning) import cPickle HIGHEST_PROTOCOL = cPickle.HIGHEST_PROTOCOL def _dumps(object, protocol): return cPickle.dumps(object, protocol=protocol) import collections MutableMapping = collections.MutableMapping #------------------------------------------------------------------------ def open(filename, flags=db.DB_CREATE, mode=0660, filetype=db.DB_HASH, dbenv=None, dbname=None): """ A simple factory function for compatibility with the standard shelve.py module. It can be used like this, where key is a string and data is a pickleable object: from bsddb import dbshelve db = dbshelve.open(filename) db[key] = data db.close() """ if type(flags) == type(''): sflag = flags if sflag == 'r': flags = db.DB_RDONLY elif sflag == 'rw': flags = 0 elif sflag == 'w': flags = db.DB_CREATE elif sflag == 'c': flags = db.DB_CREATE elif sflag == 'n': flags = db.DB_TRUNCATE | db.DB_CREATE else: raise db.DBError, "flags should be one of 'r', 'w', 'c' or 'n' or use the bsddb.db.DB_* flags" d = DBShelf(dbenv) d.open(filename, dbname, filetype, flags, mode) return d #--------------------------------------------------------------------------- class DBShelveError(db.DBError): pass class DBShelf(MutableMapping): """A shelf to hold pickled objects, built upon a bsddb DB object. It automatically pickles/unpickles data objects going to/from the DB. """ def __init__(self, dbenv=None): self.db = db.DB(dbenv) self._closed = True if HIGHEST_PROTOCOL: self.protocol = HIGHEST_PROTOCOL else: self.protocol = 1 def __del__(self): self.close() def __getattr__(self, name): """Many methods we can just pass through to the DB object. (See below) """ return getattr(self.db, name) #----------------------------------- # Dictionary access methods def __len__(self): return len(self.db) def __getitem__(self, key): data = self.db[key] return cPickle.loads(data) def __setitem__(self, key, value): data = _dumps(value, self.protocol) self.db[key] = data def __delitem__(self, key): del self.db[key] def keys(self, txn=None): if txn is not None: return self.db.keys(txn) else: return self.db.keys() def __iter__(self) : # XXX: Load all keys in memory :-( for k in self.db.keys() : yield k # Do this when "DB" support iteration # Or is it enough to pass thru "getattr"? 
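    # (Illustrative sketch of what callers get either way; the file name is
    # hypothetical.)  Values are pickled on the way in and unpickled on the
    # way out, while keys stay raw:
    #
    #   s = dbshelve.open('/tmp/shelf.db', 'c')
    #   s['config'] = {'retries': 3}     # pickled by __setitem__
    #   for key in s:                    # iteration yields plain keys
    #       print key, s[key]            # unpickled by __getitem__
    #   s.close()
    #
    # Either way, the commented-out variant below shows direct delegation: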
    # # def __iter__(self) :
    # #     return self.db.__iter__()

    def open(self, *args, **kwargs):
        self.db.open(*args, **kwargs)
        self._closed = False

    def close(self, *args, **kwargs):
        self.db.close(*args, **kwargs)
        self._closed = True

    def __repr__(self):
        if self._closed:
            return '<DBShelf @ 0x%x - closed>' % (id(self))
        else:
            return repr(dict(self.iteritems()))

    def items(self, txn=None):
        if txn is not None:
            items = self.db.items(txn)
        else:
            items = self.db.items()
        newitems = []
        for k, v in items:
            newitems.append( (k, cPickle.loads(v)) )
        return newitems

    def values(self, txn=None):
        if txn is not None:
            values = self.db.values(txn)
        else:
            values = self.db.values()
        return map(cPickle.loads, values)

    #-----------------------------------
    # Other methods

    def __append(self, value, txn=None):
        data = _dumps(value, self.protocol)
        return self.db.append(data, txn)

    def append(self, value, txn=None):
        if self.get_type() == db.DB_RECNO:
            return self.__append(value, txn=txn)
        raise DBShelveError, "append() only supported when dbshelve opened with filetype=dbshelve.db.DB_RECNO"

    def associate(self, secondaryDB, callback, flags=0):
        def _shelf_callback(priKey, priData, realCallback=callback):
            # Safe in Python 2.x because the expression short-circuits
            if sys.version_info[0] < 3 or isinstance(priData, bytes) :
                data = cPickle.loads(priData)
            else :
                data = cPickle.loads(bytes(priData, "iso8859-1"))  # 8 bits
            return realCallback(priKey, data)

        return self.db.associate(secondaryDB, _shelf_callback, flags)

    #def get(self, key, default=None, txn=None, flags=0):
    def get(self, *args, **kw):
        # We do it with *args and **kw so if the default value wasn't
        # given nothing is passed to the extension module.  That way
        # an exception can be raised if set_get_returns_none is turned
        # off.
        data = self.db.get(*args, **kw)
        try:
            return cPickle.loads(data)
        except (EOFError, TypeError, cPickle.UnpicklingError):
            return data  # we may be getting the default value, or None,
                         # so it doesn't need unpickled.

    def get_both(self, key, value, txn=None, flags=0):
        data = _dumps(value, self.protocol)
        data = self.db.get(key, data, txn, flags)
        return cPickle.loads(data)

    def cursor(self, txn=None, flags=0):
        c = DBShelfCursor(self.db.cursor(txn, flags))
        c.protocol = self.protocol
        return c

    def put(self, key, value, txn=None, flags=0):
        data = _dumps(value, self.protocol)
        return self.db.put(key, data, txn, flags)

    def join(self, cursorList, flags=0):
        raise NotImplementedError

    #----------------------------------------------
    # Methods allowed to pass-through to self.db
    #
    #    close, delete, fd, get_byteswapped, get_type, has_key,
    #    key_range, open, remove, rename, stat, sync,
    #    upgrade, verify, and all set_* methods.

#---------------------------------------------------------------------------

class DBShelfCursor:
    """ """
    def __init__(self, cursor):
        self.dbc = cursor

    def __del__(self):
        self.close()

    def __getattr__(self, name):
        """Some methods we can just pass through to the cursor object.
        (See below)"""
        return getattr(self.dbc, name)

    #----------------------------------------------

    def dup(self, flags=0):
        c = DBShelfCursor(self.dbc.dup(flags))
        c.protocol = self.protocol
        return c

    def put(self, key, value, flags=0):
        data = _dumps(value, self.protocol)
        return self.dbc.put(key, data, flags)

    def get(self, *args):
        count = len(args)  # a method overloading hack
        method = getattr(self, 'get_%d' % count)
        return method(*args)

    def get_1(self, flags):
        rec = self.dbc.get(flags)
        return self._extract(rec)

    def get_2(self, key, flags):
        rec = self.dbc.get(key, flags)
        return self._extract(rec)

    def get_3(self, key, value, flags):
        data = _dumps(value, self.protocol)
        rec = self.dbc.get(key, data, flags)
        return self._extract(rec)

    def current(self, flags=0): return self.get_1(flags|db.DB_CURRENT)
    def first(self, flags=0): return self.get_1(flags|db.DB_FIRST)
    def last(self, flags=0): return self.get_1(flags|db.DB_LAST)
    def next(self, flags=0): return self.get_1(flags|db.DB_NEXT)
    def prev(self, flags=0): return self.get_1(flags|db.DB_PREV)
    def consume(self, flags=0): return self.get_1(flags|db.DB_CONSUME)
    def next_dup(self, flags=0): return self.get_1(flags|db.DB_NEXT_DUP)
    def next_nodup(self, flags=0): return self.get_1(flags|db.DB_NEXT_NODUP)
    def prev_nodup(self, flags=0): return self.get_1(flags|db.DB_PREV_NODUP)

    def get_both(self, key, value, flags=0):
        data = _dumps(value, self.protocol)
        rec = self.dbc.get_both(key, flags)
        return self._extract(rec)

    def set(self, key, flags=0):
        rec = self.dbc.set(key, flags)
        return self._extract(rec)

    def set_range(self, key, flags=0):
        rec = self.dbc.set_range(key, flags)
        return self._extract(rec)

    def set_recno(self, recno, flags=0):
        rec = self.dbc.set_recno(recno, flags)
        return self._extract(rec)

    set_both = get_both

    def _extract(self, rec):
        if rec is None:
            return None
        else:
            key, data = rec
            # Safe in Python 2.x because the expression short-circuits
            if sys.version_info[0] < 3 or isinstance(data, bytes) :
                return key, cPickle.loads(data)
            else :
                return key, cPickle.loads(bytes(data, "iso8859-1"))  # 8 bits

    #----------------------------------------------
    # Methods allowed to pass-through to self.dbc
    #
    #    close, count, delete, get_recno, join_item

#---------------------------------------------------------------------------
bsddb3-6.1.0/Lib/bsddb/dbutils.py0000644000000000000000000000552112363167637016442 0ustar rootroot00000000000000
#------------------------------------------------------------------------
#
#  Copyright (C) 2000 Autonomous Zone Industries
#
#  License:      This is free software.  You may use this software for any
#                purpose including modification/redistribution, so long as
#                this header remains intact and that you do not claim any
#                rights of ownership or authorship of this software.  This
#                software has been tested, but no warranty is expressed or
#                implied.
#
#  Author: Gregory P. Smith
#
#  Note: I don't know how useful this is in reality since when a
#  DBLockDeadlockError happens the current transaction is supposed to be
#  aborted.  If it doesn't then when the operation is attempted again
#  the deadlock is still happening...
#  --Robin
#
#------------------------------------------------------------------------

#
# import the time.sleep function in a namespace safe way to allow
# "from bsddb.dbutils import *"
#
from time import sleep as _sleep

import sys
absolute_import = (sys.version_info[0] >= 3)
if absolute_import :
    from .
import db else : import db # always sleep at least N seconds between retrys _deadlock_MinSleepTime = 1.0/128 # never sleep more than N seconds between retrys _deadlock_MaxSleepTime = 3.14159 # Assign a file object to this for a "sleeping" message to be written to it # each retry _deadlock_VerboseFile = None def DeadlockWrap(function, *_args, **_kwargs): """DeadlockWrap(function, *_args, **_kwargs) - automatically retries function in case of a database deadlock. This is a function intended to be used to wrap database calls such that they perform retrys with exponentially backing off sleeps in between when a DBLockDeadlockError exception is raised. A 'max_retries' parameter may optionally be passed to prevent it from retrying forever (in which case the exception will be reraised). d = DB(...) d.open(...) DeadlockWrap(d.put, "foo", data="bar") # set key "foo" to "bar" """ sleeptime = _deadlock_MinSleepTime max_retries = _kwargs.get('max_retries', -1) if 'max_retries' in _kwargs: del _kwargs['max_retries'] while True: try: return function(*_args, **_kwargs) except db.DBLockDeadlockError: if _deadlock_VerboseFile: _deadlock_VerboseFile.write( 'dbutils.DeadlockWrap: sleeping %1.3f\n' % sleeptime) _sleep(sleeptime) # exponential backoff in the sleep time sleeptime *= 2 if sleeptime > _deadlock_MaxSleepTime: sleeptime = _deadlock_MaxSleepTime max_retries -= 1 if max_retries == -1: raise #------------------------------------------------------------------------ bsddb3-6.1.0/Lib/bsddb/db.py0000644000000000000000000000422412363167637015360 0ustar rootroot00000000000000#---------------------------------------------------------------------- # Copyright (c) 1999-2001, Digital Creations, Fredericksburg, VA, USA # and Andrew Kuchling. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # o Redistributions of source code must retain the above copyright # notice, this list of conditions, and the disclaimer that follows. # # o Redistributions in binary form must reproduce the above copyright # notice, this list of conditions, and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # o Neither the name of Digital Creations nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY DIGITAL CREATIONS AND CONTRIBUTORS *AS # IS* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED # TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A # PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL DIGITAL # CREATIONS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS # OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR # TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE # USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH # DAMAGE. #---------------------------------------------------------------------- # This module is just a placeholder for possible future expansion, in # case we ever want to augment the stuff in _db in any way. For now # it just simply imports everything from _db. 
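# Example (illustrative): client code reaches the complete low-level API
# through this re-export:
#
#   from bsddb3 import db
#   print db.version()      # version tuple of the linked Berkeley DB library
#   env = db.DBEnv()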
import sys absolute_import = (sys.version_info[0] >= 3) if not absolute_import : from _pybsddb import * from _pybsddb import __version__ else : from ._pybsddb import * from ._pybsddb import __version__ bsddb3-6.1.0/Lib/bsddb/test/0000755000000000000000000000000012363235112015356 5ustar rootroot00000000000000bsddb3-6.1.0/Lib/bsddb/test/test_compare.py0000644000000000000000000004013512363167637020440 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """ TestCases for python DB duplicate and Btree key comparison function. """ import sys, os, re import test_all from cStringIO import StringIO import unittest from test_all import db, dbshelve, test_support, \ get_new_environment_path, get_new_database_path # Needed for python 3. "cmp" vanished in 3.0.1 def cmp(a, b) : if a==b : return 0 if a All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """ TestCases for exercising a Queue DB. """ import os, string from pprint import pprint import unittest from test_all import db, verbose, get_new_database_path #---------------------------------------------------------------------- class SimpleQueueTestCase(unittest.TestCase): def setUp(self): self.filename = get_new_database_path() def tearDown(self): try: os.remove(self.filename) except os.error: pass def test01_basic(self): # Basic Queue tests using the deprecated DBCursor.consume method. if verbose: print '\n', '-=' * 30 print "Running %s.test01_basic..." % self.__class__.__name__ d = db.DB() d.set_re_len(40) # Queues must be fixed length d.open(self.filename, db.DB_QUEUE, db.DB_CREATE) if verbose: print "before appends" + '-' * 30 pprint(d.stat()) for x in string.letters: d.append(x * 40) self.assertEqual(len(d), len(string.letters)) d.put(100, "some more data") d.put(101, "and some more ") d.put(75, "out of order") d.put(1, "replacement data") self.assertEqual(len(d), len(string.letters)+3) if verbose: print "before close" + '-' * 30 pprint(d.stat()) d.close() del d d = db.DB() d.open(self.filename) if verbose: print "after open" + '-' * 30 pprint(d.stat()) # Test "txn" as a positional parameter d.append("one more", None) # Test "txn" as a keyword parameter d.append("another one", txn=None) c = d.cursor() if verbose: print "after append" + '-' * 30 pprint(d.stat()) rec = c.consume() while rec: if verbose: print rec rec = c.consume() c.close() if verbose: print "after consume loop" + '-' * 30 pprint(d.stat()) self.assertEqual(len(d), 0, \ "if you see this message then you need to rebuild " \ "Berkeley DB 3.1.17 with the patch in patches/qam_stat.diff") d.close() def test02_basicPost32(self): # Basic Queue tests using the new DB.consume method in DB 3.2+ # (No cursor needed) if verbose: print '\n', '-=' * 30 print "Running %s.test02_basicPost32..." 
% self.__class__.__name__ d = db.DB() d.set_re_len(40) # Queues must be fixed length d.open(self.filename, db.DB_QUEUE, db.DB_CREATE) if verbose: print "before appends" + '-' * 30 pprint(d.stat()) for x in string.letters: d.append(x * 40) self.assertEqual(len(d), len(string.letters)) d.put(100, "some more data") d.put(101, "and some more ") d.put(75, "out of order") d.put(1, "replacement data") self.assertEqual(len(d), len(string.letters)+3) if verbose: print "before close" + '-' * 30 pprint(d.stat()) d.close() del d d = db.DB() d.open(self.filename) #d.set_get_returns_none(true) if verbose: print "after open" + '-' * 30 pprint(d.stat()) d.append("one more") if verbose: print "after append" + '-' * 30 pprint(d.stat()) rec = d.consume() while rec: if verbose: print rec rec = d.consume() if verbose: print "after consume loop" + '-' * 30 pprint(d.stat()) d.close() #---------------------------------------------------------------------- def test_suite(): return unittest.makeSuite(SimpleQueueTestCase) if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib/bsddb/test/test_get_none.py0000644000000000000000000000742312363167637020613 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """ TestCases for checking set_get_returns_none. 
""" import os, string import unittest from test_all import db, verbose, get_new_database_path #---------------------------------------------------------------------- class GetReturnsNoneTestCase(unittest.TestCase): def setUp(self): self.filename = get_new_database_path() def tearDown(self): try: os.remove(self.filename) except os.error: pass def test01_get_returns_none(self): d = db.DB() d.open(self.filename, db.DB_BTREE, db.DB_CREATE) d.set_get_returns_none(1) for x in string.letters: d.put(x, x * 40) data = d.get('bad key') self.assertEqual(data, None) data = d.get(string.letters[0]) self.assertEqual(data, string.letters[0]*40) count = 0 c = d.cursor() rec = c.first() while rec: count = count + 1 rec = c.next() self.assertEqual(rec, None) self.assertEqual(count, len(string.letters)) c.close() d.close() def test02_get_raises_exception(self): d = db.DB() d.open(self.filename, db.DB_BTREE, db.DB_CREATE) d.set_get_returns_none(0) for x in string.letters: d.put(x, x * 40) self.assertRaises(db.DBNotFoundError, d.get, 'bad key') self.assertRaises(KeyError, d.get, 'bad key') data = d.get(string.letters[0]) self.assertEqual(data, string.letters[0]*40) count = 0 exceptionHappened = 0 c = d.cursor() rec = c.first() while rec: count = count + 1 try: rec = c.next() except db.DBNotFoundError: # end of the records exceptionHappened = 1 break self.assertNotEqual(rec, None) self.assertTrue(exceptionHappened) self.assertEqual(count, len(string.letters)) c.close() d.close() #---------------------------------------------------------------------- def test_suite(): return unittest.makeSuite(GetReturnsNoneTestCase) if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib/bsddb/test/test_fileid.py0000644000000000000000000000652512363167637020253 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """TestCase for reseting File ID. 
""" import os import shutil import unittest from test_all import db, test_support, get_new_environment_path, get_new_database_path class FileidResetTestCase(unittest.TestCase): def setUp(self): self.db_path_1 = get_new_database_path() self.db_path_2 = get_new_database_path() self.db_env_path = get_new_environment_path() def test_fileid_reset(self): # create DB 1 self.db1 = db.DB() self.db1.open(self.db_path_1, dbtype=db.DB_HASH, flags=(db.DB_CREATE|db.DB_EXCL)) self.db1.put('spam', 'eggs') self.db1.close() shutil.copy(self.db_path_1, self.db_path_2) self.db2 = db.DB() self.db2.open(self.db_path_2, dbtype=db.DB_HASH) self.db2.put('spam', 'spam') self.db2.close() self.db_env = db.DBEnv() self.db_env.open(self.db_env_path, db.DB_CREATE|db.DB_INIT_MPOOL) # use fileid_reset() here self.db_env.fileid_reset(self.db_path_2) self.db1 = db.DB(self.db_env) self.db1.open(self.db_path_1, dbtype=db.DB_HASH, flags=db.DB_RDONLY) self.assertEqual(self.db1.get('spam'), 'eggs') self.db2 = db.DB(self.db_env) self.db2.open(self.db_path_2, dbtype=db.DB_HASH, flags=db.DB_RDONLY) self.assertEqual(self.db2.get('spam'), 'spam') self.db1.close() self.db2.close() self.db_env.close() def tearDown(self): test_support.unlink(self.db_path_1) test_support.unlink(self.db_path_2) test_support.rmtree(self.db_env_path) def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(FileidResetTestCase)) return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib/bsddb/test/test_replication.py0000644000000000000000000005177512363167637021337 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """TestCases for distributed transactions. 
""" import os import time import unittest from test_all import db, test_support, have_threads, verbose, \ get_new_environment_path, get_new_database_path #---------------------------------------------------------------------- class DBReplication(unittest.TestCase) : def setUp(self) : self.homeDirMaster = get_new_environment_path() self.homeDirClient = get_new_environment_path() self.dbenvMaster = db.DBEnv() self.dbenvClient = db.DBEnv() # Must use "DB_THREAD" because the Replication Manager will # be executed in other threads but will use the same environment. # http://forums.oracle.com/forums/thread.jspa?threadID=645788&tstart=0 self.dbenvMaster.open(self.homeDirMaster, db.DB_CREATE | db.DB_INIT_TXN | db.DB_INIT_LOG | db.DB_INIT_MPOOL | db.DB_INIT_LOCK | db.DB_INIT_REP | db.DB_RECOVER | db.DB_THREAD, 0666) self.dbenvClient.open(self.homeDirClient, db.DB_CREATE | db.DB_INIT_TXN | db.DB_INIT_LOG | db.DB_INIT_MPOOL | db.DB_INIT_LOCK | db.DB_INIT_REP | db.DB_RECOVER | db.DB_THREAD, 0666) self.confirmed_master=self.client_startupdone=False def confirmed_master(a,b,c) : if b==db.DB_EVENT_REP_MASTER : self.confirmed_master=True def client_startupdone(a,b,c) : if b==db.DB_EVENT_REP_STARTUPDONE : self.client_startupdone=True self.dbenvMaster.set_event_notify(confirmed_master) self.dbenvClient.set_event_notify(client_startupdone) #self.dbenvMaster.set_verbose(db.DB_VERB_REPLICATION, True) #self.dbenvMaster.set_verbose(db.DB_VERB_FILEOPS_ALL, True) #self.dbenvClient.set_verbose(db.DB_VERB_REPLICATION, True) #self.dbenvClient.set_verbose(db.DB_VERB_FILEOPS_ALL, True) self.dbMaster = self.dbClient = None def tearDown(self): if self.dbClient : self.dbClient.close() if self.dbMaster : self.dbMaster.close() # Here we assign dummy event handlers to allow GC of the test object. # Since the dummy handler doesn't use any outer scope variable, it # doesn't keep any reference to the test object. 
def dummy(*args) : pass self.dbenvMaster.set_event_notify(dummy) self.dbenvClient.set_event_notify(dummy) self.dbenvClient.close() self.dbenvMaster.close() test_support.rmtree(self.homeDirClient) test_support.rmtree(self.homeDirMaster) class DBReplicationManager(DBReplication) : def test01_basic_replication(self) : master_port = test_support.find_unused_port() client_port = test_support.find_unused_port() if db.version() >= (5, 2) : self.site = self.dbenvMaster.repmgr_site("127.0.0.1", master_port) self.site.set_config(db.DB_GROUP_CREATOR, True) self.site.set_config(db.DB_LOCAL_SITE, True) self.site2 = self.dbenvMaster.repmgr_site("127.0.0.1", client_port) self.site3 = self.dbenvClient.repmgr_site("127.0.0.1", master_port) self.site3.set_config(db.DB_BOOTSTRAP_HELPER, True) self.site4 = self.dbenvClient.repmgr_site("127.0.0.1", client_port) self.site4.set_config(db.DB_LOCAL_SITE, True) d = { db.DB_BOOTSTRAP_HELPER: [False, False, True, False], db.DB_GROUP_CREATOR: [True, False, False, False], db.DB_LEGACY: [False, False, False, False], db.DB_LOCAL_SITE: [True, False, False, True], db.DB_REPMGR_PEER: [False, False, False, False ], } for i, j in d.items() : for k, v in \ zip([self.site, self.site2, self.site3, self.site4], j) : if v : self.assertTrue(k.get_config(i)) else : self.assertFalse(k.get_config(i)) self.assertNotEqual(self.site.get_eid(), self.site2.get_eid()) self.assertNotEqual(self.site3.get_eid(), self.site4.get_eid()) for i, j in zip([self.site, self.site2, self.site3, self.site4], \ [master_port, client_port, master_port, client_port]) : addr = i.get_address() self.assertEqual(addr, ("127.0.0.1", j)) for i in [self.site, self.site2] : self.assertEqual(i.get_address(), self.dbenvMaster.repmgr_site_by_eid(i.get_eid()).get_address()) for i in [self.site3, self.site4] : self.assertEqual(i.get_address(), self.dbenvClient.repmgr_site_by_eid(i.get_eid()).get_address()) else : self.dbenvMaster.repmgr_set_local_site("127.0.0.1", master_port) self.dbenvClient.repmgr_set_local_site("127.0.0.1", client_port) self.dbenvMaster.repmgr_add_remote_site("127.0.0.1", client_port) self.dbenvClient.repmgr_add_remote_site("127.0.0.1", master_port) self.dbenvMaster.rep_set_nsites(2) self.dbenvClient.rep_set_nsites(2) self.dbenvMaster.rep_set_priority(10) self.dbenvClient.rep_set_priority(0) self.dbenvMaster.rep_set_timeout(db.DB_REP_CONNECTION_RETRY,100123) self.dbenvClient.rep_set_timeout(db.DB_REP_CONNECTION_RETRY,100321) self.assertEqual(self.dbenvMaster.rep_get_timeout( db.DB_REP_CONNECTION_RETRY), 100123) self.assertEqual(self.dbenvClient.rep_get_timeout( db.DB_REP_CONNECTION_RETRY), 100321) self.dbenvMaster.rep_set_timeout(db.DB_REP_ELECTION_TIMEOUT, 100234) self.dbenvClient.rep_set_timeout(db.DB_REP_ELECTION_TIMEOUT, 100432) self.assertEqual(self.dbenvMaster.rep_get_timeout( db.DB_REP_ELECTION_TIMEOUT), 100234) self.assertEqual(self.dbenvClient.rep_get_timeout( db.DB_REP_ELECTION_TIMEOUT), 100432) self.dbenvMaster.rep_set_timeout(db.DB_REP_ELECTION_RETRY, 100345) self.dbenvClient.rep_set_timeout(db.DB_REP_ELECTION_RETRY, 100543) self.assertEqual(self.dbenvMaster.rep_get_timeout( db.DB_REP_ELECTION_RETRY), 100345) self.assertEqual(self.dbenvClient.rep_get_timeout( db.DB_REP_ELECTION_RETRY), 100543) self.dbenvMaster.repmgr_set_ack_policy(db.DB_REPMGR_ACKS_ALL) self.dbenvClient.repmgr_set_ack_policy(db.DB_REPMGR_ACKS_ALL) self.dbenvMaster.repmgr_start(1, db.DB_REP_MASTER); self.dbenvClient.repmgr_start(1, db.DB_REP_CLIENT); self.assertEqual(self.dbenvMaster.rep_get_nsites(),2) 
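        # (illustrative note) repmgr_start(1, ...) above requests a single
        # replication message-processing thread, and DB_REP_MASTER /
        # DB_REP_CLIENT fix each site's initial role instead of holding an
        # election.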
self.assertEqual(self.dbenvClient.rep_get_nsites(),2) self.assertEqual(self.dbenvMaster.rep_get_priority(),10) self.assertEqual(self.dbenvClient.rep_get_priority(),0) self.assertEqual(self.dbenvMaster.repmgr_get_ack_policy(), db.DB_REPMGR_ACKS_ALL) self.assertEqual(self.dbenvClient.repmgr_get_ack_policy(), db.DB_REPMGR_ACKS_ALL) # The timeout is necessary in BDB 4.5, since DB_EVENT_REP_STARTUPDONE # is not generated if the master has no new transactions. # This is solved in BDB 4.6 (#15542). import time timeout = time.time()+10 while (time.time() All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
""" import unittest import os, glob from test_all import db, test_support, get_new_environment_path, \ get_new_database_path #---------------------------------------------------------------------- class DBEnv(unittest.TestCase): def setUp(self): self.homeDir = get_new_environment_path() self.env = db.DBEnv() def tearDown(self): self.env.close() del self.env test_support.rmtree(self.homeDir) class DBEnv_general(DBEnv) : def test_get_open_flags(self) : flags = db.DB_CREATE | db.DB_INIT_MPOOL self.env.open(self.homeDir, flags) self.assertEqual(flags, self.env.get_open_flags()) def test_get_open_flags2(self) : flags = db.DB_CREATE | db.DB_INIT_MPOOL | \ db.DB_INIT_LOCK | db.DB_THREAD self.env.open(self.homeDir, flags) self.assertEqual(flags, self.env.get_open_flags()) def test_lk_partitions(self) : for i in [10, 20, 40] : self.env.set_lk_partitions(i) self.assertEqual(i, self.env.get_lk_partitions()) def test_getset_intermediate_dir_mode(self) : self.assertEqual(None, self.env.get_intermediate_dir_mode()) for mode in ["rwx------", "rw-rw-rw-", "rw-r--r--"] : self.env.set_intermediate_dir_mode(mode) self.assertEqual(mode, self.env.get_intermediate_dir_mode()) self.assertRaises(db.DBInvalidArgError, self.env.set_intermediate_dir_mode, "abcde") def test_thread(self) : for i in [16, 100, 1000] : self.env.set_thread_count(i) self.assertEqual(i, self.env.get_thread_count()) def test_cache_max(self) : for size in [64, 128] : size = size*1024*1024 # Megabytes self.env.set_cache_max(0, size) size2 = self.env.get_cache_max() self.assertEqual(0, size2[0]) self.assertTrue(size <= size2[1]) self.assertTrue(2*size > size2[1]) def test_mutex_stat(self) : self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK) stat = self.env.mutex_stat() self.assertTrue("mutex_inuse_max" in stat) def test_lg_filemode(self) : for i in [0600, 0660, 0666] : self.env.set_lg_filemode(i) self.assertEqual(i, self.env.get_lg_filemode()) def test_mp_max_openfd(self) : for i in [17, 31, 42] : self.env.set_mp_max_openfd(i) self.assertEqual(i, self.env.get_mp_max_openfd()) def test_mp_max_write(self) : for i in [100, 200, 300] : for j in [1, 2, 3] : j *= 1000000 self.env.set_mp_max_write(i, j) v=self.env.get_mp_max_write() self.assertEqual((i, j), v) def test_invalid_txn(self) : # This environment doesn't support transactions self.assertRaises(db.DBInvalidArgError, self.env.txn_begin) def test_mp_mmapsize(self) : for i in [16, 32, 64] : i *= 1024*1024 self.env.set_mp_mmapsize(i) self.assertEqual(i, self.env.get_mp_mmapsize()) def test_tmp_dir(self) : for i in ["a", "bb", "ccc"] : self.env.set_tmp_dir(i) self.assertEqual(i, self.env.get_tmp_dir()) def test_flags(self) : self.env.set_flags(db.DB_AUTO_COMMIT, 1) self.assertEqual(db.DB_AUTO_COMMIT, self.env.get_flags()) self.env.set_flags(db.DB_TXN_NOSYNC, 1) self.assertEqual(db.DB_AUTO_COMMIT | db.DB_TXN_NOSYNC, self.env.get_flags()) self.env.set_flags(db.DB_AUTO_COMMIT, 0) self.assertEqual(db.DB_TXN_NOSYNC, self.env.get_flags()) self.env.set_flags(db.DB_TXN_NOSYNC, 0) self.assertEqual(0, self.env.get_flags()) def test_lk_max_objects(self) : for i in [1000, 2000, 3000] : self.env.set_lk_max_objects(i) self.assertEqual(i, self.env.get_lk_max_objects()) def test_lk_max_locks(self) : for i in [1000, 2000, 3000] : self.env.set_lk_max_locks(i) self.assertEqual(i, self.env.get_lk_max_locks()) def test_lk_max_lockers(self) : for i in [1000, 2000, 3000] : self.env.set_lk_max_lockers(i) self.assertEqual(i, self.env.get_lk_max_lockers()) def test_lg_regionmax(self) : for i in 
[128, 256, 1000] :
            i = i*1024*1024
            self.env.set_lg_regionmax(i)
            j = self.env.get_lg_regionmax()
            self.assertTrue(i <= j)
            self.assertTrue(2*i > j)

    def test_lk_detect(self) :
        flags = [db.DB_LOCK_DEFAULT, db.DB_LOCK_EXPIRE, db.DB_LOCK_MAXLOCKS,
                 db.DB_LOCK_MINLOCKS, db.DB_LOCK_MINWRITE, db.DB_LOCK_OLDEST,
                 db.DB_LOCK_RANDOM, db.DB_LOCK_YOUNGEST]
        flags.append(db.DB_LOCK_MAXWRITE)
        for i in flags :
            self.env.set_lk_detect(i)
            self.assertEqual(i, self.env.get_lk_detect())

    def test_lg_dir(self) :
        for i in ["a", "bb", "ccc", "dddd"] :
            self.env.set_lg_dir(i)
            self.assertEqual(i, self.env.get_lg_dir())

    def test_lg_bsize(self) :
        log_size = 70*1024
        self.env.set_lg_bsize(log_size)
        self.assertTrue(self.env.get_lg_bsize() >= log_size)
        self.assertTrue(self.env.get_lg_bsize() < 4*log_size)
        self.env.set_lg_bsize(4*log_size)
        self.assertTrue(self.env.get_lg_bsize() >= 4*log_size)

    def test_setget_data_dirs(self) :
        dirs = ("a", "b", "c", "d")
        for i in dirs :
            self.env.set_data_dir(i)
        self.assertEqual(dirs, self.env.get_data_dirs())

    def test_setget_cachesize(self) :
        cachesize = (0, 512*1024*1024, 3)
        self.env.set_cachesize(*cachesize)
        self.assertEqual(cachesize, self.env.get_cachesize())

        cachesize = (0, 1*1024*1024, 5)
        self.env.set_cachesize(*cachesize)
        cachesize2 = self.env.get_cachesize()
        self.assertEqual(cachesize[0], cachesize2[0])
        self.assertEqual(cachesize[2], cachesize2[2])
        # Berkeley DB expands the cache by 25% to account for overhead,
        # if the cache is small.
        self.assertEqual(125, int(100.0*cachesize2[1]/cachesize[1]))

        # You cannot change the configuration after opening
        # the environment.
        self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL)
        cachesize = (0, 2*1024*1024, 1)
        self.assertRaises(db.DBInvalidArgError,
                          self.env.set_cachesize, *cachesize)
        cachesize3 = self.env.get_cachesize()
        self.assertEqual(cachesize2[0], cachesize3[0])
        self.assertEqual(cachesize2[2], cachesize3[2])
        # In Berkeley DB 5.1, the cachesize can change when opening the Env
        self.assertTrue(cachesize2[1] <= cachesize3[1])

    def test_set_cachesize_dbenv_db(self) :
        # You cannot configure the cachesize using the database handle
        # if you are using an environment.
        d = db.DB(self.env)
        self.assertRaises(db.DBInvalidArgError,
                          d.set_cachesize, 0, 1024*1024, 1)

    def test_setget_shm_key(self) :
        shm_key = 137
        self.env.set_shm_key(shm_key)
        self.assertEqual(shm_key, self.env.get_shm_key())
        self.env.set_shm_key(shm_key+1)
        self.assertEqual(shm_key+1, self.env.get_shm_key())

        # You cannot change the configuration after opening
        # the environment.
        self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL)
        # Trying to reconfigure the shared memory key after opening the
        # environment would core dump, so an exception is raised instead.
        self.assertRaises(db.DBInvalidArgError,
                          self.env.set_shm_key, shm_key)
        self.assertEqual(shm_key+1, self.env.get_shm_key())

    def test_mutex_setget_max(self) :
        v = self.env.mutex_get_max()
        v2 = v*2+1
        self.env.mutex_set_max(v2)
        self.assertEqual(v2, self.env.mutex_get_max())
        self.env.mutex_set_max(v)
        self.assertEqual(v, self.env.mutex_get_max())

        # You cannot change the configuration after opening
        # the environment.
        self.env.open(self.homeDir, db.DB_CREATE)
        self.assertRaises(db.DBInvalidArgError, self.env.mutex_set_max, v2)

    def test_mutex_setget_increment(self) :
        v = self.env.mutex_get_increment()
        v2 = 127
        self.env.mutex_set_increment(v2)
        self.assertEqual(v2, self.env.mutex_get_increment())
        self.env.mutex_set_increment(v)
        self.assertEqual(v, self.env.mutex_get_increment())

        # You cannot change the configuration after opening
        # the environment.
self.env.open(self.homeDir, db.DB_CREATE) self.assertRaises(db.DBInvalidArgError, self.env.mutex_set_increment, v2) def test_mutex_setget_tas_spins(self) : self.env.mutex_set_tas_spins(0) # Default = BDB decides v = self.env.mutex_get_tas_spins() v2 = v*2+1 self.env.mutex_set_tas_spins(v2) self.assertEqual(v2, self.env.mutex_get_tas_spins()) self.env.mutex_set_tas_spins(v) self.assertEqual(v, self.env.mutex_get_tas_spins()) # In this case, you can change configuration # after opening the environment. self.env.open(self.homeDir, db.DB_CREATE) self.env.mutex_set_tas_spins(v2) def test_mutex_setget_align(self) : v = self.env.mutex_get_align() v2 = 64 if v == 64 : v2 = 128 self.env.mutex_set_align(v2) self.assertEqual(v2, self.env.mutex_get_align()) # Requires a nonzero power of two self.assertRaises(db.DBInvalidArgError, self.env.mutex_set_align, 0) self.assertRaises(db.DBInvalidArgError, self.env.mutex_set_align, 17) self.env.mutex_set_align(2*v2) self.assertEqual(2*v2, self.env.mutex_get_align()) # You can not change configuration after opening # the environment. self.env.open(self.homeDir, db.DB_CREATE) self.assertRaises(db.DBInvalidArgError, self.env.mutex_set_align, v2) class DBEnv_log(DBEnv) : def setUp(self): DBEnv.setUp(self) self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOG) def test_log_file(self) : log_file = self.env.log_file((1, 1)) self.assertEqual("log.0000000001", log_file[-14:]) # The version with transactions is checked in other test object def test_log_printf(self) : msg = "This is a test..." self.env.log_printf(msg) logc = self.env.log_cursor() self.assertTrue(msg in (logc.last()[1])) if db.version() >= (4, 7) : def test_log_config(self) : self.env.log_set_config(db.DB_LOG_DSYNC | db.DB_LOG_ZERO, 1) self.assertTrue(self.env.log_get_config(db.DB_LOG_DSYNC)) self.assertTrue(self.env.log_get_config(db.DB_LOG_ZERO)) self.env.log_set_config(db.DB_LOG_ZERO, 0) self.assertTrue(self.env.log_get_config(db.DB_LOG_DSYNC)) self.assertFalse(self.env.log_get_config(db.DB_LOG_ZERO)) class DBEnv_log_txn(DBEnv) : def setUp(self): DBEnv.setUp(self) self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOG | db.DB_INIT_TXN) if db.version() < (5, 2) : def test_tx_max(self) : txns=[] def tx() : for i in xrange(self.env.get_tx_max()) : txns.append(self.env.txn_begin()) tx() self.assertRaises(MemoryError, tx) # Abort the transactions before garbage collection, # to avoid "warnings". for i in txns : i.abort() # The version without transactions is checked in other test object def test_log_printf(self) : msg = "This is a test..." txn = self.env.txn_begin() self.env.log_printf(msg, txn=txn) txn.commit() logc = self.env.log_cursor() logc.last() # Skip the commit self.assertTrue(msg in (logc.prev()[1])) msg = "This is another test..." txn = self.env.txn_begin() self.env.log_printf(msg, txn=txn) txn.abort() # Do not store the new message logc.last() # Skip the abort self.assertTrue(msg not in (logc.prev()[1])) msg = "This is a third test..." 
        txn = self.env.txn_begin()
        self.env.log_printf(msg, txn=txn)
        txn.commit()  # This time the message is stored
        logc.last()  # Skip the commit
        self.assertTrue(msg in (logc.prev()[1]))

class DBEnv_memp(DBEnv):
    def setUp(self):
        DBEnv.setUp(self)
        self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL |
                      db.DB_INIT_LOG)
        self.db = db.DB(self.env)
        self.db.open("test", db.DB_HASH, db.DB_CREATE, 0660)

    def tearDown(self):
        self.db.close()
        del self.db
        DBEnv.tearDown(self)

    def test_memp_1_trickle(self) :
        self.db.put("hi", "bye")
        self.assertTrue(self.env.memp_trickle(100) > 0)

    # Preserve the order: run the "memp_trickle" test first
    def test_memp_2_sync(self) :
        self.db.put("hi", "bye")
        self.env.memp_sync()  # Full flush
        # Nothing to do...
        self.assertTrue(self.env.memp_trickle(100) == 0)

        self.db.put("hi", "bye2")
        self.env.memp_sync((1, 0))  # NOP, probably
        # Something to do... or not
        self.assertTrue(self.env.memp_trickle(100) >= 0)

        self.db.put("hi", "bye3")
        self.env.memp_sync((123, 99))  # Full flush
        # Nothing to do...
        self.assertTrue(self.env.memp_trickle(100) == 0)

    def test_memp_stat_1(self) :
        stats = self.env.memp_stat()  # No param
        self.assertTrue(len(stats) == 2)
        self.assertTrue("cache_miss" in stats[0])

        stats = self.env.memp_stat(db.DB_STAT_CLEAR)  # Positional param
        self.assertTrue("cache_miss" in stats[0])

        stats = self.env.memp_stat(flags=0)  # Keyword param
        self.assertTrue("cache_miss" in stats[0])

    def test_memp_stat_2(self) :
        stats = self.env.memp_stat()[1]
        self.assertEqual(len(stats), 1)  # Only one database file is cached
        self.assertTrue("test" in stats)
        self.assertTrue("page_in" in stats["test"])

class DBEnv_logcursor(DBEnv):
    def setUp(self):
        DBEnv.setUp(self)
        self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL |
                      db.DB_INIT_LOG | db.DB_INIT_TXN)
        txn = self.env.txn_begin()
        self.db = db.DB(self.env)
        self.db.open("test", db.DB_HASH, db.DB_CREATE, 0660, txn=txn)
        txn.commit()
        for i in ["2", "8", "20"] :
            txn = self.env.txn_begin()
            self.db.put(key=i, data=i*int(i), txn=txn)
            txn.commit()

    def tearDown(self):
        self.db.close()
        del self.db
        DBEnv.tearDown(self)

    def _check_return(self, value) :
        self.assertTrue(isinstance(value, tuple))
        self.assertEqual(len(value), 2)
        self.assertTrue(isinstance(value[0], tuple))
        self.assertEqual(len(value[0]), 2)
        self.assertTrue(isinstance(value[0][0], int))
        self.assertTrue(isinstance(value[0][1], int))
        self.assertTrue(isinstance(value[1], str))

    # Preserve test order
    def test_1_first(self) :
        logc = self.env.log_cursor()
        v = logc.first()
        self._check_return(v)
        self.assertTrue((1, 1) < v[0])
        self.assertTrue(len(v[1]) > 0)

    def test_2_last(self) :
        logc = self.env.log_cursor()
        lsn_first = logc.first()[0]
        v = logc.last()
        self._check_return(v)
        self.assertTrue(lsn_first < v[0])

    def test_3_next(self) :
        logc = self.env.log_cursor()
        lsn_last = logc.last()[0]
        self.assertEqual(logc.next(), None)
        lsn_first = logc.first()[0]
        v = logc.next()
        self._check_return(v)
        self.assertTrue(lsn_first < v[0])
        self.assertTrue(lsn_last > v[0])

        v2 = logc.next()
        self.assertTrue(v2[0] > v[0])
        self.assertTrue(lsn_last > v2[0])

        v3 = logc.next()
        self.assertTrue(v3[0] > v2[0])
        self.assertTrue(lsn_last > v3[0])

    def test_4_prev(self) :
        logc = self.env.log_cursor()
        lsn_first = logc.first()[0]
        self.assertEqual(logc.prev(), None)
        lsn_last = logc.last()[0]
        v = logc.prev()
        self._check_return(v)
        self.assertTrue(lsn_first < v[0])
        self.assertTrue(lsn_last > v[0])

        v2 = logc.prev()
        self.assertTrue(v2[0] < v[0])
        self.assertTrue(lsn_first < v2[0])

        v3 = logc.prev()
        self.assertTrue(v3[0] < v2[0])
        self.assertTrue(lsn_first < v3[0])

    def test_5_current(self) :
        logc = self.env.log_cursor()
        logc.first()
        v = logc.next()
        self.assertEqual(v, logc.current())

    def test_6_set(self) :
        logc = self.env.log_cursor()
        logc.first()
        v = logc.next()
        self.assertNotEqual(v, logc.next())
        self.assertNotEqual(v, logc.next())
        self.assertEqual(v, logc.set(v[0]))

    def test_explicit_close(self) :
        logc = self.env.log_cursor()
        logc.close()
        self.assertRaises(db.DBCursorClosedError, logc.next)

    def test_implicit_close(self) :
        logc = [self.env.log_cursor() for i in xrange(10)]
        # Closing the environment should also close every cursor
        # hanging from it.
        self.env.close()
        for i in logc :
            self.assertRaises(db.DBCursorClosedError, i.next)

def test_suite():
    suite = unittest.TestSuite()
    suite.addTest(unittest.makeSuite(DBEnv_general))
    suite.addTest(unittest.makeSuite(DBEnv_memp))
    suite.addTest(unittest.makeSuite(DBEnv_logcursor))
    suite.addTest(unittest.makeSuite(DBEnv_log))
    suite.addTest(unittest.makeSuite(DBEnv_log_txn))
    return suite

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')

bsddb3-6.1.0/Lib/bsddb/test/test_compat.py0000644000000000000000000001400612363167637020273 0ustar rootroot00000000000000
"""
Copyright (c) 2008-2014, Jesus Cea Avion. All rights reserved.

This work is licensed under the 3-clause BSD license (the full text
accompanies every test module in this package).
"""
"""
Test cases adapted from the test_bsddb.py module in Python's
regression test suite.
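
For a flavour of the legacy-compatible interface exercised here, a minimal
sketch (the filename is a hypothetical placeholder):

    from bsddb3 import hashopen
    d = hashopen("/tmp/example.db", "c")   # "c": create the file if needed
    d["key"] = "value"
    print d["key"]
    d.close()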
""" import os, string import unittest from test_all import db, hashopen, btopen, rnopen, verbose, \ get_new_database_path class CompatibilityTestCase(unittest.TestCase): def setUp(self): self.filename = get_new_database_path() def tearDown(self): try: os.remove(self.filename) except os.error: pass def test01_btopen(self): self.do_bthash_test(btopen, 'btopen') def test02_hashopen(self): self.do_bthash_test(hashopen, 'hashopen') def test03_rnopen(self): data = "The quick brown fox jumped over the lazy dog.".split() if verbose: print "\nTesting: rnopen" f = rnopen(self.filename, 'c') for x in range(len(data)): f[x+1] = data[x] getTest = (f[1], f[2], f[3]) if verbose: print '%s %s %s' % getTest self.assertEqual(getTest[1], 'quick', 'data mismatch!') rv = f.set_location(3) if rv != (3, 'brown'): self.fail('recno database set_location failed: '+repr(rv)) f[25] = 'twenty-five' f.close() del f f = rnopen(self.filename, 'w') f[20] = 'twenty' def noRec(f): rec = f[15] self.assertRaises(KeyError, noRec, f) def badKey(f): rec = f['a string'] self.assertRaises(TypeError, badKey, f) del f[3] rec = f.first() while rec: if verbose: print rec try: rec = f.next() except KeyError: break f.close() def test04_n_flag(self): f = hashopen(self.filename, 'n') f.close() def do_bthash_test(self, factory, what): if verbose: print '\nTesting: ', what f = factory(self.filename, 'c') if verbose: print 'creation...' # truth test if f: if verbose: print "truth test: true" else: if verbose: print "truth test: false" f['0'] = '' f['a'] = 'Guido' f['b'] = 'van' f['c'] = 'Rossum' f['d'] = 'invented' # 'e' intentionally left out f['f'] = 'Python' if verbose: print '%s %s %s' % (f['a'], f['b'], f['c']) if verbose: print 'key ordering...' start = f.set_location(f.first()[0]) if start != ('0', ''): self.fail("incorrect first() result: "+repr(start)) while 1: try: rec = f.next() except KeyError: self.assertEqual(rec, f.last(), 'Error, last <> last!') f.previous() break if verbose: print rec self.assertTrue(f.has_key('f'), 'Error, missing key!') # test that set_location() returns the next nearest key, value # on btree databases and raises KeyError on others. if factory == btopen: e = f.set_location('e') if e != ('f', 'Python'): self.fail('wrong key,value returned: '+repr(e)) else: try: e = f.set_location('e') except KeyError: pass else: self.fail("set_location on non-existent key did not raise KeyError") f.sync() f.close() # truth test try: if f: if verbose: print "truth test: true" else: if verbose: print "truth test: false" except db.DBError: pass else: self.fail("Exception expected") del f if verbose: print 'modification...' f = factory(self.filename, 'w') f['d'] = 'discovered' if verbose: print 'access...' for key in f.keys(): word = f[key] if verbose: print word def noRec(f): rec = f['no such key'] self.assertRaises(KeyError, noRec, f) def badKey(f): rec = f[15] self.assertRaises(TypeError, badKey, f) f.close() #---------------------------------------------------------------------- def test_suite(): return unittest.makeSuite(CompatibilityTestCase) if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib/bsddb/test/test_distributed_transactions.py0000644000000000000000000001434612363167637024131 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. 
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """TestCases for distributed transactions. """ import os import unittest from test_all import db, test_support, get_new_environment_path, \ get_new_database_path from test_all import verbose #---------------------------------------------------------------------- class DBTxn_distributed(unittest.TestCase): num_txns=1234 nosync=True must_open_db=False def _create_env(self, must_open_db) : self.dbenv = db.DBEnv() self.dbenv.set_tx_max(self.num_txns) self.dbenv.set_lk_max_lockers(self.num_txns*2) self.dbenv.set_lk_max_locks(self.num_txns*2) self.dbenv.set_lk_max_objects(self.num_txns*2) if self.nosync : self.dbenv.set_flags(db.DB_TXN_NOSYNC,True) self.dbenv.open(self.homeDir, db.DB_CREATE | db.DB_THREAD | db.DB_RECOVER | db.DB_INIT_TXN | db.DB_INIT_LOG | db.DB_INIT_MPOOL | db.DB_INIT_LOCK, 0666) self.db = db.DB(self.dbenv) self.db.set_re_len(db.DB_GID_SIZE) if must_open_db : txn=self.dbenv.txn_begin() self.db.open(self.filename, db.DB_QUEUE, db.DB_CREATE | db.DB_THREAD, 0666, txn=txn) txn.commit() def setUp(self) : self.homeDir = get_new_environment_path() self.filename = "test" return self._create_env(must_open_db=True) def _destroy_env(self): if self.nosync : self.dbenv.log_flush() self.db.close() self.dbenv.close() def tearDown(self): self._destroy_env() test_support.rmtree(self.homeDir) def _recreate_env(self,must_open_db) : self._destroy_env() self._create_env(must_open_db) def test01_distributed_transactions(self) : txns=set() adapt = lambda x : x import sys if sys.version_info[0] >= 3 : adapt = lambda x : bytes(x, "ascii") # Create transactions, "prepare" them, and # let them be garbage collected. for i in xrange(self.num_txns) : txn = self.dbenv.txn_begin() gid = "%%%dd" %db.DB_GID_SIZE gid = adapt(gid %i) self.db.put(i, gid, txn=txn, flags=db.DB_APPEND) txns.add(gid) txn.prepare(gid) del txn self._recreate_env(self.must_open_db) # Get "to be recovered" transactions but # let them be garbage collected. recovered_txns=self.dbenv.txn_recover() self.assertEqual(self.num_txns,len(recovered_txns)) for gid,txn in recovered_txns : self.assertTrue(gid in txns) del txn del recovered_txns self._recreate_env(self.must_open_db) # Get "to be recovered" transactions. Commit, abort and # discard them. 
recovered_txns=self.dbenv.txn_recover() self.assertEqual(self.num_txns,len(recovered_txns)) discard_txns=set() committed_txns=set() state=0 for gid,txn in recovered_txns : if state==0 or state==1: committed_txns.add(gid) txn.commit() elif state==2 : txn.abort() elif state==3 : txn.discard() discard_txns.add(gid) state=-1 state+=1 del txn del recovered_txns self._recreate_env(self.must_open_db) # Verify the discarded transactions are still # around, and dispose them. recovered_txns=self.dbenv.txn_recover() self.assertEqual(len(discard_txns),len(recovered_txns)) for gid,txn in recovered_txns : txn.abort() del txn del recovered_txns self._recreate_env(must_open_db=True) # Be sure there are not pending transactions. # Check also database size. recovered_txns=self.dbenv.txn_recover() self.assertTrue(len(recovered_txns)==0) self.assertEqual(len(committed_txns),self.db.stat()["nkeys"]) class DBTxn_distributedSYNC(DBTxn_distributed): nosync=False class DBTxn_distributed_must_open_db(DBTxn_distributed): must_open_db=True class DBTxn_distributedSYNC_must_open_db(DBTxn_distributed): nosync=False must_open_db=True #---------------------------------------------------------------------- def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(DBTxn_distributed)) suite.addTest(unittest.makeSuite(DBTxn_distributedSYNC)) suite.addTest(unittest.makeSuite(DBTxn_distributed_must_open_db)) suite.addTest(unittest.makeSuite(DBTxn_distributedSYNC_must_open_db)) return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib/bsddb/test/test_misc.py0000644000000000000000000001450512363167637017747 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
""" """Miscellaneous bsddb module test cases """ import os, sys import unittest from test_all import db, dbshelve, hashopen, test_support, get_new_environment_path, get_new_database_path #---------------------------------------------------------------------- class MiscTestCase(unittest.TestCase): def setUp(self): self.filename = get_new_database_path() self.homeDir = get_new_environment_path() def tearDown(self): test_support.unlink(self.filename) test_support.rmtree(self.homeDir) def test01_badpointer(self): dbs = dbshelve.open(self.filename) dbs.close() self.assertRaises(db.DBError, dbs.get, "foo") def test02_db_home(self): env = db.DBEnv() # check for crash fixed when db_home is used before open() self.assertTrue(env.db_home is None) env.open(self.homeDir, db.DB_CREATE) if sys.version_info[0] < 3 : self.assertEqual(self.homeDir, env.db_home) else : self.assertEqual(bytes(self.homeDir, "ascii"), env.db_home) def test03_repr_closed_db(self): db = hashopen(self.filename) db.close() rp = repr(db) self.assertEqual(rp, "{}") def test04_repr_db(self) : db = hashopen(self.filename) d = {} for i in xrange(100) : db[repr(i)] = repr(100*i) d[repr(i)] = repr(100*i) db.close() db = hashopen(self.filename) rp = repr(sorted(db.items())) rd = repr(sorted(d.items())) self.assertEqual(rp, rd) db.close() # http://sourceforge.net/tracker/index.php?func=detail&aid=1708868&group_id=13900&atid=313900 # # See the bug report for details. # # The problem was that make_key_dbt() was not allocating a copy of # string keys but FREE_DBT() was always being told to free it when the # database was opened with DB_THREAD. def test05_double_free_make_key_dbt(self): try: db1 = db.DB() db1.open(self.filename, None, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD) curs = db1.cursor() t = curs.get("/foo", db.DB_SET) # double free happened during exit from DBC_get finally: db1.close() test_support.unlink(self.filename) def test06_key_with_null_bytes(self): try: db1 = db.DB() db1.open(self.filename, None, db.DB_HASH, db.DB_CREATE) db1['a'] = 'eh?' db1['a\x00'] = 'eh zed.' db1['a\x00a'] = 'eh zed eh?' db1['aaa'] = 'eh eh eh!' keys = db1.keys() keys.sort() self.assertEqual(['a', 'a\x00', 'a\x00a', 'aaa'], keys) self.assertEqual(db1['a'], 'eh?') self.assertEqual(db1['a\x00'], 'eh zed.') self.assertEqual(db1['a\x00a'], 'eh zed eh?') self.assertEqual(db1['aaa'], 'eh eh eh!') finally: db1.close() test_support.unlink(self.filename) def test07_DB_set_flags_persists(self): try: db1 = db.DB() db1.set_flags(db.DB_DUPSORT) db1.open(self.filename, db.DB_HASH, db.DB_CREATE) db1['a'] = 'eh' db1['a'] = 'A' self.assertEqual([('a', 'A')], db1.items()) db1.put('a', 'Aa') self.assertEqual([('a', 'A'), ('a', 'Aa')], db1.items()) db1.close() db1 = db.DB() # no set_flags call, we're testing that it reads and obeys # the flags on open. db1.open(self.filename, db.DB_HASH) self.assertEqual([('a', 'A'), ('a', 'Aa')], db1.items()) # if it read the flags right this will replace all values # for key 'a' instead of adding a new one. 
            # (as a dict should)
                db1['a'] = 'new A'
                self.assertEqual([('a', 'new A')], db1.items())
            finally:
                db1.close()
                test_support.unlink(self.filename)

    def test08_ExceptionTypes(self) :
        self.assertTrue(issubclass(db.DBError, Exception))
        for i, j in db.__dict__.items() :
            if i.startswith("DB") and i.endswith("Error") :
                self.assertTrue(issubclass(j, db.DBError), msg=i)
                if i not in ("DBKeyEmptyError", "DBNotFoundError") :
                    self.assertFalse(issubclass(j, KeyError), msg=i)

        # These two exceptions have two base classes
        self.assertTrue(issubclass(db.DBKeyEmptyError, KeyError))
        self.assertTrue(issubclass(db.DBNotFoundError, KeyError))

#----------------------------------------------------------------------

def test_suite():
    return unittest.makeSuite(MiscTestCase)

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')

bsddb3-6.1.0/Lib/bsddb/test/__init__.py0000644000000000000000000000000012247676335017476 0ustar rootroot00000000000000
bsddb3-6.1.0/Lib/bsddb/test/test_recno.py0000644000000000000000000002332012363167637020115 0ustar rootroot00000000000000
"""
Copyright (c) 2008-2014, Jesus Cea Avion. All rights reserved.

This work is licensed under the 3-clause BSD license (the full text
accompanies every test module in this package).
"""
"""
TestCases for exercising a Recno DB.
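
A minimal sketch of the Recno access method (the path is a hypothetical
placeholder): record numbers are integers, starting at 1.

    from bsddb3 import db as bdb
    d = bdb.DB()
    d.open("/tmp/example.db", dbtype=bdb.DB_RECNO, flags=bdb.DB_CREATE)
    recno = d.append("first record")   # recno == 1
    print d[recno]
    d.close()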
""" import os, sys import errno from pprint import pprint import unittest from test_all import db, test_support, verbose, get_new_environment_path, get_new_database_path letters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' #---------------------------------------------------------------------- class SimpleRecnoTestCase(unittest.TestCase): if sys.version_info < (2, 7) : def assertIsInstance(self, obj, datatype, msg=None) : return self.assertEqual(type(obj), datatype, msg=msg) def assertGreaterEqual(self, a, b, msg=None) : return self.assertTrue(a>=b, msg=msg) def setUp(self): self.filename = get_new_database_path() self.homeDir = None def tearDown(self): test_support.unlink(self.filename) if self.homeDir: test_support.rmtree(self.homeDir) def test01_basic(self): d = db.DB() get_returns_none = d.set_get_returns_none(2) d.set_get_returns_none(get_returns_none) d.open(self.filename, db.DB_RECNO, db.DB_CREATE) for x in letters: recno = d.append(x * 60) self.assertIsInstance(recno, int) self.assertGreaterEqual(recno, 1) if verbose: print recno, if verbose: print stat = d.stat() if verbose: pprint(stat) for recno in range(1, len(d)+1): data = d[recno] if verbose: print data self.assertIsInstance(data, str) self.assertEqual(data, d.get(recno)) try: data = d[0] # This should raise a KeyError!?!?! except db.DBInvalidArgError, val: self.assertEqual(val.args[0], db.EINVAL) if verbose: print val else: self.fail("expected exception") # test that has_key raises DB exceptions (fixed in pybsddb 4.3.2) try: d.has_key(0) except db.DBError, val: pass else: self.fail("has_key did not raise a proper exception") try: data = d[100] except KeyError: pass else: self.fail("expected exception") try: data = d.get(100) except db.DBNotFoundError, val: if get_returns_none: self.fail("unexpected exception") else: self.assertEqual(data, None) keys = d.keys() if verbose: print keys self.assertIsInstance(keys, list) self.assertIsInstance(keys[0], int) self.assertEqual(len(keys), len(d)) items = d.items() if verbose: pprint(items) self.assertIsInstance(items, list) self.assertIsInstance(items[0], tuple) self.assertEqual(len(items[0]), 2) self.assertIsInstance(items[0][0], int) self.assertIsInstance(items[0][1], str) self.assertEqual(len(items), len(d)) self.assertTrue(d.has_key(25)) del d[25] self.assertFalse(d.has_key(25)) d.delete(13) self.assertFalse(d.has_key(13)) data = d.get_both(26, "z" * 60) self.assertEqual(data, "z" * 60, 'was %r' % data) if verbose: print data fd = d.fd() if verbose: print fd c = d.cursor() rec = c.first() while rec: if verbose: print rec rec = c.next() c.set(50) rec = c.current() if verbose: print rec c.put(-1, "a replacement record", db.DB_CURRENT) c.set(50) rec = c.current() self.assertEqual(rec, (50, "a replacement record")) if verbose: print rec rec = c.set_range(30) if verbose: print rec # test that non-existent key lookups work (and that # DBC_set_range doesn't have a memleak under valgrind) rec = c.set_range(999999) self.assertEqual(rec, None) if verbose: print rec c.close() d.close() d = db.DB() d.open(self.filename) c = d.cursor() # put a record beyond the consecutive end of the recno's d[100] = "way out there" self.assertEqual(d[100], "way out there") try: data = d[99] except KeyError: pass else: self.fail("expected exception") try: d.get(99) except db.DBKeyEmptyError, val: if get_returns_none: self.fail("unexpected DBKeyEmptyError exception") else: self.assertEqual(val.args[0], db.DB_KEYEMPTY) if verbose: print val else: if not get_returns_none: self.fail("expected 
exception") rec = c.set(40) while rec: if verbose: print rec rec = c.next() c.close() d.close() def test02_WithSource(self): """ A Recno file that is given a "backing source file" is essentially a simple ASCII file. Normally each record is delimited by \n and so is just a line in the file, but you can set a different record delimiter if needed. """ homeDir = get_new_environment_path() self.homeDir = homeDir source = os.path.join(homeDir, 'test_recno.txt') if not os.path.isdir(homeDir): os.mkdir(homeDir) f = open(source, 'w') # create the file f.close() d = db.DB() # This is the default value, just checking if both int d.set_re_delim(0x0A) d.set_re_delim('\n') # and char can be used... d.set_re_source(source) d.open(self.filename, db.DB_RECNO, db.DB_CREATE) data = "The quick brown fox jumped over the lazy dog".split() for datum in data: d.append(datum) d.sync() d.close() # get the text from the backing source f = open(source, 'r') text = f.read() f.close() text = text.strip() if verbose: print text print data print text.split('\n') self.assertEqual(text.split('\n'), data) # open as a DB again d = db.DB() d.set_re_source(source) d.open(self.filename, db.DB_RECNO) d[3] = 'reddish-brown' d[8] = 'comatose' d.sync() d.close() f = open(source, 'r') text = f.read() f.close() text = text.strip() if verbose: print text print text.split('\n') self.assertEqual(text.split('\n'), "The quick reddish-brown fox jumped over the comatose dog".split()) def test03_FixedLength(self): d = db.DB() d.set_re_len(40) # fixed length records, 40 bytes long d.set_re_pad('-') # sets the pad character... d.set_re_pad(45) # ...test both int and char d.open(self.filename, db.DB_RECNO, db.DB_CREATE) for x in letters: d.append(x * 35) # These will be padded d.append('.' * 40) # this one will be exact try: # this one will fail d.append('bad' * 20) except db.DBInvalidArgError, val: self.assertEqual(val.args[0], db.EINVAL) if verbose: print val else: self.fail("expected exception") c = d.cursor() rec = c.first() while rec: if verbose: print rec rec = c.next() c.close() d.close() def test04_get_size_empty(self) : d = db.DB() d.open(self.filename, dbtype=db.DB_RECNO, flags=db.DB_CREATE) row_id = d.append(' ') self.assertEqual(1, d.get_size(key=row_id)) row_id = d.append('') self.assertEqual(0, d.get_size(key=row_id)) #---------------------------------------------------------------------- def test_suite(): return unittest.makeSuite(SimpleRecnoTestCase) if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib/bsddb/test/test_basics.py0000644000000000000000000010677512363167637020273 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. 
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """ Basic TestCases for BTree and hash DBs, with and without a DBEnv, with various DB flags, etc. """ import os import errno import string from pprint import pprint import unittest import time import sys from test_all import db, test_support, verbose, get_new_environment_path, \ get_new_database_path DASH = '-' #---------------------------------------------------------------------- class VersionTestCase(unittest.TestCase): def test00_version(self): info = db.version() if verbose: print '\n', '-=' * 20 print 'bsddb.db.version(): %s' % (info, ) print db.DB_VERSION_STRING print '-=' * 20 self.assertEqual(info, (db.DB_VERSION_MAJOR, db.DB_VERSION_MINOR, db.DB_VERSION_PATCH)) #---------------------------------------------------------------------- class BasicTestCase(unittest.TestCase): dbtype = db.DB_UNKNOWN # must be set in derived class cachesize = (0, 1024*1024, 1) dbopenflags = 0 dbsetflags = 0 dbmode = 0660 dbname = None useEnv = 0 envflags = 0 envsetflags = 0 _numKeys = 1002 # PRIVATE. NOTE: must be an even value def setUp(self): if self.useEnv: self.homeDir=get_new_environment_path() try: self.env = db.DBEnv() self.env.set_lg_max(1024*1024) self.env.set_tx_max(30) self._t = int(time.time()) self.env.set_tx_timestamp(self._t) self.env.set_flags(self.envsetflags, 1) self.env.open(self.homeDir, self.envflags | db.DB_CREATE) self.filename = "test" # Yes, a bare except is intended, since we're re-raising the exc. except: test_support.rmtree(self.homeDir) raise else: self.env = None self.filename = get_new_database_path() # create and open the DB self.d = db.DB(self.env) if not self.useEnv : self.d.set_cachesize(*self.cachesize) cachesize = self.d.get_cachesize() self.assertEqual(cachesize[0], self.cachesize[0]) self.assertEqual(cachesize[2], self.cachesize[2]) # Berkeley DB expands the cache 25% accounting overhead, # if the cache is small. 
self.assertEqual(125, int(100.0*cachesize[1]/self.cachesize[1])) self.d.set_flags(self.dbsetflags) if self.dbname: self.d.open(self.filename, self.dbname, self.dbtype, self.dbopenflags|db.DB_CREATE, self.dbmode) else: self.d.open(self.filename, # try out keyword args mode = self.dbmode, dbtype = self.dbtype, flags = self.dbopenflags|db.DB_CREATE) if not self.useEnv: self.assertRaises(db.DBInvalidArgError, self.d.set_cachesize, *self.cachesize) self.populateDB() def tearDown(self): self.d.close() if self.env is not None: self.env.close() test_support.rmtree(self.homeDir) else: os.remove(self.filename) def populateDB(self, _txn=None): d = self.d for x in range(self._numKeys//2): key = '%04d' % (self._numKeys - x) # insert keys in reverse order data = self.makeData(key) d.put(key, data, _txn) d.put('empty value', '', _txn) for x in range(self._numKeys//2-1): key = '%04d' % x # and now some in forward order data = self.makeData(key) d.put(key, data, _txn) if _txn: _txn.commit() num = len(d) if verbose: print "created %d records" % num def makeData(self, key): return DASH.join([key] * 5) #---------------------------------------- def test01_GetsAndPuts(self): d = self.d if verbose: print '\n', '-=' * 30 print "Running %s.test01_GetsAndPuts..." % self.__class__.__name__ for key in ['0001', '0100', '0400', '0700', '0999']: data = d.get(key) if verbose: print data self.assertEqual(d.get('0321'), '0321-0321-0321-0321-0321') # By default non-existent keys return None... self.assertEqual(d.get('abcd'), None) # ...but they raise exceptions in other situations. Call # set_get_returns_none() to change it. try: d.delete('abcd') except db.DBNotFoundError, val: self.assertEqual(val.args[0], db.DB_NOTFOUND) if verbose: print val else: self.fail("expected exception") d.put('abcd', 'a new record') self.assertEqual(d.get('abcd'), 'a new record') d.put('abcd', 'same key') if self.dbsetflags & db.DB_DUP: self.assertEqual(d.get('abcd'), 'a new record') else: self.assertEqual(d.get('abcd'), 'same key') try: d.put('abcd', 'this should fail', flags=db.DB_NOOVERWRITE) except db.DBKeyExistError, val: self.assertEqual(val.args[0], db.DB_KEYEXIST) if verbose: print val else: self.fail("expected exception") if self.dbsetflags & db.DB_DUP: self.assertEqual(d.get('abcd'), 'a new record') else: self.assertEqual(d.get('abcd'), 'same key') d.sync() d.close() del d self.d = db.DB(self.env) if self.dbname: self.d.open(self.filename, self.dbname) else: self.d.open(self.filename) d = self.d self.assertEqual(d.get('0321'), '0321-0321-0321-0321-0321') if self.dbsetflags & db.DB_DUP: self.assertEqual(d.get('abcd'), 'a new record') else: self.assertEqual(d.get('abcd'), 'same key') rec = d.get_both('0555', '0555-0555-0555-0555-0555') if verbose: print rec self.assertEqual(d.get_both('0555', 'bad data'), None) # test default value data = d.get('bad key', 'bad data') self.assertEqual(data, 'bad data') # any object can pass through data = d.get('bad key', self) self.assertEqual(data, self) s = d.stat() self.assertEqual(type(s), type({})) if verbose: print 'd.stat() returned this dictionary:' pprint(s) #---------------------------------------- def test02_DictionaryMethods(self): d = self.d if verbose: print '\n', '-=' * 30 print "Running %s.test02_DictionaryMethods..." 
% \ self.__class__.__name__ for key in ['0002', '0101', '0401', '0701', '0998']: data = d[key] self.assertEqual(data, self.makeData(key)) if verbose: print data self.assertEqual(len(d), self._numKeys) keys = d.keys() self.assertEqual(len(keys), self._numKeys) self.assertEqual(type(keys), type([])) d['new record'] = 'a new record' self.assertEqual(len(d), self._numKeys+1) keys = d.keys() self.assertEqual(len(keys), self._numKeys+1) d['new record'] = 'a replacement record' self.assertEqual(len(d), self._numKeys+1) keys = d.keys() self.assertEqual(len(keys), self._numKeys+1) if verbose: print "the first 10 keys are:" pprint(keys[:10]) self.assertEqual(d['new record'], 'a replacement record') # We check also the positional parameter self.assertEqual(d.has_key('0001', None), 1) # We check also the keyword parameter self.assertEqual(d.has_key('spam', txn=None), 0) items = d.items() self.assertEqual(len(items), self._numKeys+1) self.assertEqual(type(items), type([])) self.assertEqual(type(items[0]), type(())) self.assertEqual(len(items[0]), 2) if verbose: print "the first 10 items are:" pprint(items[:10]) values = d.values() self.assertEqual(len(values), self._numKeys+1) self.assertEqual(type(values), type([])) if verbose: print "the first 10 values are:" pprint(values[:10]) #---------------------------------------- def test02b_SequenceMethods(self): d = self.d for key in ['0002', '0101', '0401', '0701', '0998']: data = d[key] self.assertEqual(data, self.makeData(key)) if verbose: print data self.assertTrue(hasattr(d, "__contains__")) self.assertTrue("0401" in d) self.assertFalse("1234" in d) #---------------------------------------- def test03_SimpleCursorStuff(self, get_raises_error=0, set_raises_error=0): if verbose: print '\n', '-=' * 30 print "Running %s.test03_SimpleCursorStuff (get_error %s, set_error %s)..." 
% \ (self.__class__.__name__, get_raises_error, set_raises_error) if self.env and self.dbopenflags & db.DB_AUTO_COMMIT: txn = self.env.txn_begin() else: txn = None c = self.d.cursor(txn=txn) rec = c.first() count = 0 while rec is not None: count = count + 1 if verbose and count % 100 == 0: print rec try: rec = c.next() except db.DBNotFoundError, val: if get_raises_error: self.assertEqual(val.args[0], db.DB_NOTFOUND) if verbose: print val rec = None else: self.fail("unexpected DBNotFoundError") self.assertEqual(c.get_current_size(), len(c.current()[1]), "%s != len(%r)" % (c.get_current_size(), c.current()[1])) self.assertEqual(count, self._numKeys) rec = c.last() count = 0 while rec is not None: count = count + 1 if verbose and count % 100 == 0: print rec try: rec = c.prev() except db.DBNotFoundError, val: if get_raises_error: self.assertEqual(val.args[0], db.DB_NOTFOUND) if verbose: print val rec = None else: self.fail("unexpected DBNotFoundError") self.assertEqual(count, self._numKeys) rec = c.set('0505') rec2 = c.current() self.assertEqual(rec, rec2) self.assertEqual(rec[0], '0505') self.assertEqual(rec[1], self.makeData('0505')) self.assertEqual(c.get_current_size(), len(rec[1])) # make sure we get empty values properly rec = c.set('empty value') self.assertEqual(rec[1], '') self.assertEqual(c.get_current_size(), 0) try: n = c.set('bad key') except db.DBNotFoundError, val: self.assertEqual(val.args[0], db.DB_NOTFOUND) if verbose: print val else: if set_raises_error: self.fail("expected exception") if n is not None: self.fail("expected None: %r" % (n,)) rec = c.get_both('0404', self.makeData('0404')) self.assertEqual(rec, ('0404', self.makeData('0404'))) try: n = c.get_both('0404', 'bad data') except db.DBNotFoundError, val: self.assertEqual(val.args[0], db.DB_NOTFOUND) if verbose: print val else: if get_raises_error: self.fail("expected exception") if n is not None: self.fail("expected None: %r" % (n,)) if self.d.get_type() == db.DB_BTREE: rec = c.set_range('011') if verbose: print "searched for '011', found: ", rec rec = c.set_range('011',dlen=0,doff=0) if verbose: print "searched (partial) for '011', found: ", rec if rec[1] != '': self.fail('expected empty data portion') ev = c.set_range('empty value') if verbose: print "search for 'empty value' returned", ev if ev[1] != '': self.fail('empty value lookup failed') c.set('0499') c.delete() try: rec = c.current() except db.DBKeyEmptyError, val: if get_raises_error: self.assertEqual(val.args[0], db.DB_KEYEMPTY) if verbose: print val else: self.fail("unexpected DBKeyEmptyError") else: if get_raises_error: self.fail('DBKeyEmptyError exception expected') c.next() c2 = c.dup(db.DB_POSITION) self.assertEqual(c.current(), c2.current()) c2.put('', 'a new value', db.DB_CURRENT) self.assertEqual(c.current(), c2.current()) self.assertEqual(c.current()[1], 'a new value') c2.put('', 'er', db.DB_CURRENT, dlen=0, doff=5) self.assertEqual(c2.current()[1], 'a newer value') c.close() c2.close() if txn: txn.commit() # time to abuse the closed cursors and hope we don't crash methods_to_test = { 'current': (), 'delete': (), 'dup': (db.DB_POSITION,), 'first': (), 'get': (0,), 'next': (), 'prev': (), 'last': (), 'put':('', 'spam', db.DB_CURRENT), 'set': ("0505",), } for method, args in methods_to_test.items(): try: if verbose: print "attempting to use a closed cursor's %s method" % \ method # a bug may cause a NULL pointer dereference... 
getattr(c, method)(*args) except db.DBError, val: self.assertEqual(val.args[0], 0) if verbose: print val else: self.fail("no exception raised when using a buggy cursor's" "%s method" % method) # # free cursor referencing a closed database, it should not barf: # oldcursor = self.d.cursor(txn=txn) self.d.close() # this would originally cause a segfault when the cursor for a # closed database was cleaned up. it should not anymore. # SF pybsddb bug id 667343 del oldcursor def test03b_SimpleCursorWithoutGetReturnsNone0(self): # same test but raise exceptions instead of returning None if verbose: print '\n', '-=' * 30 print "Running %s.test03b_SimpleCursorStuffWithoutGetReturnsNone..." % \ self.__class__.__name__ old = self.d.set_get_returns_none(0) self.assertEqual(old, 2) self.test03_SimpleCursorStuff(get_raises_error=1, set_raises_error=1) def test03b_SimpleCursorWithGetReturnsNone1(self): # same test but raise exceptions instead of returning None if verbose: print '\n', '-=' * 30 print "Running %s.test03b_SimpleCursorStuffWithoutGetReturnsNone..." % \ self.__class__.__name__ old = self.d.set_get_returns_none(1) self.test03_SimpleCursorStuff(get_raises_error=0, set_raises_error=1) def test03c_SimpleCursorGetReturnsNone2(self): # same test but raise exceptions instead of returning None if verbose: print '\n', '-=' * 30 print "Running %s.test03c_SimpleCursorStuffWithoutSetReturnsNone..." % \ self.__class__.__name__ old = self.d.set_get_returns_none(1) self.assertEqual(old, 2) old = self.d.set_get_returns_none(2) self.assertEqual(old, 1) self.test03_SimpleCursorStuff(get_raises_error=0, set_raises_error=0) def test03d_SimpleCursorPriority(self) : c = self.d.cursor() c.set_priority(db.DB_PRIORITY_VERY_LOW) # Positional self.assertEqual(db.DB_PRIORITY_VERY_LOW, c.get_priority()) c.set_priority(priority=db.DB_PRIORITY_HIGH) # Keyword self.assertEqual(db.DB_PRIORITY_HIGH, c.get_priority()) c.close() #---------------------------------------- def test04_PartialGetAndPut(self): d = self.d if verbose: print '\n', '-=' * 30 print "Running %s.test04_PartialGetAndPut..." % \ self.__class__.__name__ key = "partialTest" data = "1" * 1000 + "2" * 1000 d.put(key, data) self.assertEqual(d.get(key), data) self.assertEqual(d.get(key, dlen=20, doff=990), ("1" * 10) + ("2" * 10)) d.put("partialtest2", ("1" * 30000) + "robin" ) self.assertEqual(d.get("partialtest2", dlen=5, doff=30000), "robin") # There seems to be a bug in DB here... Commented out the test for # now. ##self.assertEqual(d.get("partialtest2", dlen=5, doff=30010), "") if self.dbsetflags != db.DB_DUP: # Partial put with duplicate records requires a cursor d.put(key, "0000", dlen=2000, doff=0) self.assertEqual(d.get(key), "0000") d.put(key, "1111", dlen=1, doff=2) self.assertEqual(d.get(key), "0011110") #---------------------------------------- def test05_GetSize(self): d = self.d if verbose: print '\n', '-=' * 30 print "Running %s.test05_GetSize..." % self.__class__.__name__ for i in range(1, 50000, 500): key = "size%s" % i #print "before ", i, d.put(key, "1" * i) #print "after", self.assertEqual(d.get_size(key), i) #print "done" #---------------------------------------- def test06_Truncate(self): d = self.d if verbose: print '\n', '-=' * 30 print "Running %s.test06_Truncate..." 
% self.__class__.__name__ d.put("abcde", "ABCDE"); num = d.truncate() self.assertTrue(num >= 1, "truncate returned <= 0 on non-empty database") num = d.truncate() self.assertEqual(num, 0, "truncate on empty DB returned nonzero (%r)" % (num,)) #---------------------------------------- def test07_verify(self): # Verify bug solved in 4.7.3pre8 self.d.close() d = db.DB(self.env) d.verify(self.filename) #---------------------------------------- def test08_exists(self) : self.d.put("abcde", "ABCDE") self.assertTrue(self.d.exists("abcde") == True, "DB->exists() returns wrong value") self.assertTrue(self.d.exists("x") == False, "DB->exists() returns wrong value") #---------------------------------------- def test_compact(self) : d = self.d self.assertEqual(0, d.compact(flags=db.DB_FREELIST_ONLY)) self.assertEqual(0, d.compact(flags=db.DB_FREELIST_ONLY)) d.put("abcde", "ABCDE"); d.put("bcde", "BCDE"); d.put("abc", "ABC"); d.put("monty", "python"); d.delete("abc") d.delete("bcde") d.compact(start='abcde', stop='monty', txn=None, compact_fillpercent=42, compact_pages=1, compact_timeout=50000000, flags=db.DB_FREELIST_ONLY|db.DB_FREE_SPACE) #---------------------------------------- #---------------------------------------------------------------------- class BasicBTreeTestCase(BasicTestCase): dbtype = db.DB_BTREE class BasicHashTestCase(BasicTestCase): dbtype = db.DB_HASH class BasicBTreeWithThreadFlagTestCase(BasicTestCase): dbtype = db.DB_BTREE dbopenflags = db.DB_THREAD class BasicHashWithThreadFlagTestCase(BasicTestCase): dbtype = db.DB_HASH dbopenflags = db.DB_THREAD class BasicWithEnvTestCase(BasicTestCase): dbopenflags = db.DB_THREAD useEnv = 1 envflags = db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK #---------------------------------------- def test09_EnvRemoveAndRename(self): if not self.env: return if verbose: print '\n', '-=' * 30 print "Running %s.test09_EnvRemoveAndRename..." % self.__class__.__name__ # can't rename or remove an open DB self.d.close() newname = self.filename + '.renamed' self.env.dbrename(self.filename, None, newname) self.env.dbremove(newname) #---------------------------------------- class BasicBTreeWithEnvTestCase(BasicWithEnvTestCase): dbtype = db.DB_BTREE class BasicHashWithEnvTestCase(BasicWithEnvTestCase): dbtype = db.DB_HASH #---------------------------------------------------------------------- class BasicTransactionTestCase(BasicTestCase): if sys.version_info < (2, 7) : def assertIn(self, a, b, msg=None) : return self.assertTrue(a in b, msg=msg) dbopenflags = db.DB_THREAD | db.DB_AUTO_COMMIT useEnv = 1 envflags = (db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK | db.DB_INIT_TXN) envsetflags = db.DB_AUTO_COMMIT def tearDown(self): self.txn.commit() BasicTestCase.tearDown(self) def populateDB(self): txn = self.env.txn_begin() BasicTestCase.populateDB(self, _txn=txn) self.txn = self.env.txn_begin() def test06_Transactions(self): d = self.d if verbose: print '\n', '-=' * 30 print "Running %s.test06_Transactions..." 
% self.__class__.__name__ self.assertEqual(d.get('new rec', txn=self.txn), None) d.put('new rec', 'this is a new record', self.txn) self.assertEqual(d.get('new rec', txn=self.txn), 'this is a new record') self.txn.abort() self.assertEqual(d.get('new rec'), None) self.txn = self.env.txn_begin() self.assertEqual(d.get('new rec', txn=self.txn), None) d.put('new rec', 'this is a new record', self.txn) self.assertEqual(d.get('new rec', txn=self.txn), 'this is a new record') self.txn.commit() self.assertEqual(d.get('new rec'), 'this is a new record') self.txn = self.env.txn_begin() c = d.cursor(self.txn) rec = c.first() count = 0 while rec is not None: count = count + 1 if verbose and count % 100 == 0: print rec rec = c.next() self.assertEqual(count, self._numKeys+1) c.close() # Cursors *MUST* be closed before commit! self.txn.commit() # flush pending updates self.env.txn_checkpoint (0, 0, 0) statDict = self.env.log_stat(0); self.assertIn('magic', statDict) self.assertIn('version', statDict) self.assertIn('cur_file', statDict) self.assertIn('region_nowait', statDict) # must have at least one log file present: logs = self.env.log_archive(db.DB_ARCH_ABS | db.DB_ARCH_LOG) self.assertNotEqual(logs, None) for log in logs: if verbose: print 'log file: ' + log logs = self.env.log_archive(db.DB_ARCH_REMOVE) self.assertTrue(not logs) self.txn = self.env.txn_begin() #---------------------------------------- def test08_exists(self) : txn = self.env.txn_begin() self.d.put("abcde", "ABCDE", txn=txn) txn.commit() txn = self.env.txn_begin() self.assertTrue(self.d.exists("abcde", txn=txn) == True, "DB->exists() returns wrong value") self.assertTrue(self.d.exists("x", txn=txn) == False, "DB->exists() returns wrong value") txn.abort() #---------------------------------------- def test09_TxnTruncate(self): d = self.d if verbose: print '\n', '-=' * 30 print "Running %s.test09_TxnTruncate..." 
% self.__class__.__name__ d.put("abcde", "ABCDE"); txn = self.env.txn_begin() num = d.truncate(txn) self.assertTrue(num >= 1, "truncate returned <= 0 on non-empty database") num = d.truncate(txn) self.assertEqual(num, 0, "truncate on empty DB returned nonzero (%r)" % (num,)) txn.commit() #---------------------------------------- def test10_TxnLateUse(self): txn = self.env.txn_begin() txn.abort() try: txn.abort() except db.DBError, e: pass else: raise RuntimeError, "DBTxn.abort() called after DB_TXN no longer valid w/o an exception" txn = self.env.txn_begin() txn.commit() try: txn.commit() except db.DBError, e: pass else: raise RuntimeError, "DBTxn.commit() called after DB_TXN no longer valid w/o an exception" #---------------------------------------- def test_txn_name(self) : txn=self.env.txn_begin() self.assertEqual(txn.get_name(), "") txn.set_name("XXYY") self.assertEqual(txn.get_name(), "XXYY") txn.set_name("") self.assertEqual(txn.get_name(), "") txn.abort() #---------------------------------------- def test_txn_set_timeout(self) : txn=self.env.txn_begin() txn.set_timeout(1234567, db.DB_SET_LOCK_TIMEOUT) txn.set_timeout(2345678, flags=db.DB_SET_TXN_TIMEOUT) txn.abort() #---------------------------------------- def test_get_tx_max(self) : self.assertEqual(self.env.get_tx_max(), 30) def test_get_tx_timestamp(self) : self.assertEqual(self.env.get_tx_timestamp(), self._t) class BTreeTransactionTestCase(BasicTransactionTestCase): dbtype = db.DB_BTREE class HashTransactionTestCase(BasicTransactionTestCase): dbtype = db.DB_HASH #---------------------------------------------------------------------- class BTreeRecnoTestCase(BasicTestCase): dbtype = db.DB_BTREE dbsetflags = db.DB_RECNUM def test09_RecnoInBTree(self): d = self.d if verbose: print '\n', '-=' * 30 print "Running %s.test09_RecnoInBTree..." % self.__class__.__name__ rec = d.get(200) self.assertEqual(type(rec), type(())) self.assertEqual(len(rec), 2) if verbose: print "Record #200 is ", rec c = d.cursor() c.set('0200') num = c.get_recno() self.assertEqual(type(num), type(1)) if verbose: print "recno of d['0200'] is ", num rec = c.current() self.assertEqual(c.set_recno(num), rec) c.close() class BTreeRecnoWithThreadFlagTestCase(BTreeRecnoTestCase): dbopenflags = db.DB_THREAD #---------------------------------------------------------------------- class BasicDUPTestCase(BasicTestCase): dbsetflags = db.DB_DUP def test10_DuplicateKeys(self): d = self.d if verbose: print '\n', '-=' * 30 print "Running %s.test10_DuplicateKeys..." 
% \ self.__class__.__name__ d.put("dup0", "before") for x in "The quick brown fox jumped over the lazy dog.".split(): d.put("dup1", x) d.put("dup2", "after") data = d.get("dup1") self.assertEqual(data, "The") if verbose: print data c = d.cursor() rec = c.set("dup1") self.assertEqual(rec, ('dup1', 'The')) next_reg = c.next() self.assertEqual(next_reg, ('dup1', 'quick')) rec = c.set("dup1") count = c.count() self.assertEqual(count, 9) next_dup = c.next_dup() self.assertEqual(next_dup, ('dup1', 'quick')) rec = c.set('dup1') while rec is not None: if verbose: print rec rec = c.next_dup() c.set('dup1') rec = c.next_nodup() self.assertNotEqual(rec[0], 'dup1') if verbose: print rec c.close() class BTreeDUPTestCase(BasicDUPTestCase): dbtype = db.DB_BTREE class HashDUPTestCase(BasicDUPTestCase): dbtype = db.DB_HASH class BTreeDUPWithThreadTestCase(BasicDUPTestCase): dbtype = db.DB_BTREE dbopenflags = db.DB_THREAD class HashDUPWithThreadTestCase(BasicDUPTestCase): dbtype = db.DB_HASH dbopenflags = db.DB_THREAD #---------------------------------------------------------------------- class BasicMultiDBTestCase(BasicTestCase): dbname = 'first' def otherType(self): if self.dbtype == db.DB_BTREE: return db.DB_HASH else: return db.DB_BTREE def test11_MultiDB(self): d1 = self.d if verbose: print '\n', '-=' * 30 print "Running %s.test11_MultiDB..." % self.__class__.__name__ d2 = db.DB(self.env) d2.open(self.filename, "second", self.dbtype, self.dbopenflags|db.DB_CREATE) d3 = db.DB(self.env) d3.open(self.filename, "third", self.otherType(), self.dbopenflags|db.DB_CREATE) for x in "The quick brown fox jumped over the lazy dog".split(): d2.put(x, self.makeData(x)) for x in string.letters: d3.put(x, x*70) d1.sync() d2.sync() d3.sync() d1.close() d2.close() d3.close() self.d = d1 = d2 = d3 = None self.d = d1 = db.DB(self.env) d1.open(self.filename, self.dbname, flags = self.dbopenflags) d2 = db.DB(self.env) d2.open(self.filename, "second", flags = self.dbopenflags) d3 = db.DB(self.env) d3.open(self.filename, "third", flags = self.dbopenflags) c1 = d1.cursor() c2 = d2.cursor() c3 = d3.cursor() count = 0 rec = c1.first() while rec is not None: count = count + 1 if verbose and (count % 50) == 0: print rec rec = c1.next() self.assertEqual(count, self._numKeys) count = 0 rec = c2.first() while rec is not None: count = count + 1 if verbose: print rec rec = c2.next() self.assertEqual(count, 9) count = 0 rec = c3.first() while rec is not None: count = count + 1 if verbose: print rec rec = c3.next() self.assertEqual(count, len(string.letters)) c1.close() c2.close() c3.close() d2.close() d3.close() # Strange things happen if you try to use Multiple DBs per file without a # DBEnv with MPOOL and LOCKing... 
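#----------------------------------------------------------------------
# Illustrative sketch only -- never invoked by the test suite. It shows the
# pattern the comment above alludes to: several named databases sharing one
# physical file must be opened through a common DBEnv that provides at least
# a memory pool (and locking, if there is any concurrency). The home
# directory, file name and keys below are assumptions made up for the
# example, not fixtures used by these tests.
def _example_multidb_with_env(home='/tmp/multidb-example'):
    import os
    if not os.path.isdir(home):
        os.makedirs(home)
    env = db.DBEnv()
    env.open(home, db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK)
    d1 = db.DB(env)
    d1.open('shared.db', 'first', db.DB_BTREE, db.DB_CREATE)
    d2 = db.DB(env)
    d2.open('shared.db', 'second', db.DB_HASH, db.DB_CREATE)  # same file, different name
    d1.put('key', 'd1')
    d2.put('key', 'd2')  # independent keyspaces despite the shared file
    d1.close()
    d2.close()
    env.close()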
class BTreeMultiDBTestCase(BasicMultiDBTestCase): dbtype = db.DB_BTREE dbopenflags = db.DB_THREAD useEnv = 1 envflags = db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK class HashMultiDBTestCase(BasicMultiDBTestCase): dbtype = db.DB_HASH dbopenflags = db.DB_THREAD useEnv = 1 envflags = db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK class PrivateObject(unittest.TestCase) : def tearDown(self) : del self.obj def test01_DefaultIsNone(self) : self.assertEqual(self.obj.get_private(), None) def test02_assignment(self) : a = "example of private object" self.obj.set_private(a) b = self.obj.get_private() self.assertTrue(a is b) # Object identity def test03_leak_assignment(self) : a = "example of private object" refcount = sys.getrefcount(a) self.obj.set_private(a) self.assertEqual(refcount+1, sys.getrefcount(a)) self.obj.set_private(None) self.assertEqual(refcount, sys.getrefcount(a)) def test04_leak_GC(self) : a = "example of private object" refcount = sys.getrefcount(a) self.obj.set_private(a) self.obj = None self.assertEqual(refcount, sys.getrefcount(a)) class DBEnvPrivateObject(PrivateObject) : def setUp(self) : self.obj = db.DBEnv() class DBPrivateObject(PrivateObject) : def setUp(self) : self.obj = db.DB() class CrashAndBurn(unittest.TestCase) : #def test01_OpenCrash(self) : # # See http://bugs.python.org/issue3307 # self.assertRaises(db.DBInvalidArgError, db.DB, None, 65535) if db.version() < (4, 8) : def test02_DBEnv_dealloc(self): # http://bugs.python.org/issue3885 import gc self.assertRaises(db.DBInvalidArgError, db.DBEnv, ~db.DB_RPCCLIENT) gc.collect() #---------------------------------------------------------------------- #---------------------------------------------------------------------- def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(VersionTestCase)) suite.addTest(unittest.makeSuite(BasicBTreeTestCase)) suite.addTest(unittest.makeSuite(BasicHashTestCase)) suite.addTest(unittest.makeSuite(BasicBTreeWithThreadFlagTestCase)) suite.addTest(unittest.makeSuite(BasicHashWithThreadFlagTestCase)) suite.addTest(unittest.makeSuite(BasicBTreeWithEnvTestCase)) suite.addTest(unittest.makeSuite(BasicHashWithEnvTestCase)) suite.addTest(unittest.makeSuite(BTreeTransactionTestCase)) suite.addTest(unittest.makeSuite(HashTransactionTestCase)) suite.addTest(unittest.makeSuite(BTreeRecnoTestCase)) suite.addTest(unittest.makeSuite(BTreeRecnoWithThreadFlagTestCase)) suite.addTest(unittest.makeSuite(BTreeDUPTestCase)) suite.addTest(unittest.makeSuite(HashDUPTestCase)) suite.addTest(unittest.makeSuite(BTreeDUPWithThreadTestCase)) suite.addTest(unittest.makeSuite(HashDUPWithThreadTestCase)) suite.addTest(unittest.makeSuite(BTreeMultiDBTestCase)) suite.addTest(unittest.makeSuite(HashMultiDBTestCase)) suite.addTest(unittest.makeSuite(DBEnvPrivateObject)) suite.addTest(unittest.makeSuite(DBPrivateObject)) suite.addTest(unittest.makeSuite(CrashAndBurn)) return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib/bsddb/test/test_join.py0000644000000000000000000001126212363167637017750 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. 
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """TestCases for using the DB.join and DBCursor.join_item methods. """ import os import unittest from test_all import db, dbshelve, test_support, verbose, \ get_new_environment_path, get_new_database_path #---------------------------------------------------------------------- ProductIndex = [ ('apple', "Convenience Store"), ('blueberry', "Farmer's Market"), ('shotgun', "S-Mart"), # Aisle 12 ('pear', "Farmer's Market"), ('chainsaw', "S-Mart"), # "Shop smart. Shop S-Mart!" ('strawberry', "Farmer's Market"), ] ColorIndex = [ ('blue', "blueberry"), ('red', "apple"), ('red', "chainsaw"), ('red', "strawberry"), ('yellow', "peach"), ('yellow', "pear"), ('black', "shotgun"), ] class JoinTestCase(unittest.TestCase): keytype = '' def setUp(self): self.filename = self.__class__.__name__ + '.db' self.homeDir = get_new_environment_path() self.env = db.DBEnv() self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK ) def tearDown(self): self.env.close() test_support.rmtree(self.homeDir) def test01_join(self): if verbose: print '\n', '-=' * 30 print "Running %s.test01_join..." % \ self.__class__.__name__ # create and populate primary index priDB = db.DB(self.env) priDB.open(self.filename, "primary", db.DB_BTREE, db.DB_CREATE) map(lambda t, priDB=priDB: priDB.put(*t), ProductIndex) # create and populate secondary index secDB = db.DB(self.env) secDB.set_flags(db.DB_DUP | db.DB_DUPSORT) secDB.open(self.filename, "secondary", db.DB_BTREE, db.DB_CREATE) map(lambda t, secDB=secDB: secDB.put(*t), ColorIndex) sCursor = None jCursor = None try: # lets look up all of the red Products sCursor = secDB.cursor() # Don't do the .set() in an assert, or you can get a bogus failure # when running python -O tmp = sCursor.set('red') self.assertTrue(tmp) # FIXME: jCursor doesn't properly hold a reference to its # cursors, if they are closed before jcursor is used it # can cause a crash. 
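            # In practice that means: keep every cursor handed to DB.join()
            # open (and referenced) for as long as the join cursor is in use,
            # and close the join cursor first -- which is why the "finally"
            # clause below closes jCursor before sCursor.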
jCursor = priDB.join([sCursor]) if jCursor.get(0) != ('apple', "Convenience Store"): self.fail("join cursor positioned wrong") if jCursor.join_item() != 'chainsaw': self.fail("DBCursor.join_item returned wrong item") if jCursor.get(0)[0] != 'strawberry': self.fail("join cursor returned wrong thing") if jCursor.get(0): # there were only three red items to return self.fail("join cursor returned too many items") finally: if jCursor: jCursor.close() if sCursor: sCursor.close() priDB.close() secDB.close() def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(JoinTestCase)) return suite bsddb3-6.1.0/Lib/bsddb/test/test_all.py0000644000000000000000000005023312363167637017562 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """Run all test cases. 
""" import sys import os import unittest import bsddb3 as bsddb if sys.version_info[0] >= 3 : charset = "iso8859-1" # Full 8 bit class logcursor_py3k(object) : def __init__(self, env) : self._logcursor = env.log_cursor() def __getattr__(self, v) : return getattr(self._logcursor, v) def __next__(self) : v = getattr(self._logcursor, "next")() if v is not None : v = (v[0], v[1].decode(charset)) return v next = __next__ def first(self) : v = self._logcursor.first() if v is not None : v = (v[0], v[1].decode(charset)) return v def last(self) : v = self._logcursor.last() if v is not None : v = (v[0], v[1].decode(charset)) return v def prev(self) : v = self._logcursor.prev() if v is not None : v = (v[0], v[1].decode(charset)) return v def current(self) : v = self._logcursor.current() if v is not None : v = (v[0], v[1].decode(charset)) return v def set(self, lsn) : v = self._logcursor.set(lsn) if v is not None : v = (v[0], v[1].decode(charset)) return v class cursor_py3k(object) : def __init__(self, db, *args, **kwargs) : self._dbcursor = db.cursor(*args, **kwargs) def __getattr__(self, v) : return getattr(self._dbcursor, v) def _fix(self, v) : if v is None : return None key, value = v if isinstance(key, bytes) : key = key.decode(charset) return (key, value.decode(charset)) def __next__(self, flags=0, dlen=-1, doff=-1) : v = getattr(self._dbcursor, "next")(flags=flags, dlen=dlen, doff=doff) return self._fix(v) next = __next__ def previous(self) : v = self._dbcursor.previous() return self._fix(v) def last(self) : v = self._dbcursor.last() return self._fix(v) def set(self, k) : if isinstance(k, str) : k = bytes(k, charset) v = self._dbcursor.set(k) return self._fix(v) def set_recno(self, num) : v = self._dbcursor.set_recno(num) return self._fix(v) def set_range(self, k, dlen=-1, doff=-1) : if isinstance(k, str) : k = bytes(k, charset) v = self._dbcursor.set_range(k, dlen=dlen, doff=doff) return self._fix(v) def dup(self, flags=0) : cursor = self._dbcursor.dup(flags) return dup_cursor_py3k(cursor) def next_dup(self) : v = self._dbcursor.next_dup() return self._fix(v) def next_nodup(self) : v = self._dbcursor.next_nodup() return self._fix(v) def put(self, key, data, flags=0, dlen=-1, doff=-1) : if isinstance(key, str) : key = bytes(key, charset) if isinstance(data, str) : value = bytes(data, charset) return self._dbcursor.put(key, data, flags=flags, dlen=dlen, doff=doff) def current(self, flags=0, dlen=-1, doff=-1) : v = self._dbcursor.current(flags=flags, dlen=dlen, doff=doff) return self._fix(v) def first(self, flags=0, dlen=-1, doff=-1) : v = self._dbcursor.first(flags=flags, dlen=dlen, doff=doff) return self._fix(v) def pget(self, key=None, data=None, flags=0) : # Incorrect because key can be a bare number, # but enough to pass testsuite if isinstance(key, int) and (data is None) and (flags == 0) : flags = key key = None if isinstance(key, str) : key = bytes(key, charset) if isinstance(data, int) and (flags==0) : flags = data data = None if isinstance(data, str) : data = bytes(data, charset) v=self._dbcursor.pget(key=key, data=data, flags=flags) if v is not None : v1, v2, v3 = v if isinstance(v1, bytes) : v1 = v1.decode(charset) if isinstance(v2, bytes) : v2 = v2.decode(charset) v = (v1, v2, v3.decode(charset)) return v def join_item(self) : v = self._dbcursor.join_item() if v is not None : v = v.decode(charset) return v def get(self, *args, **kwargs) : l = len(args) if l == 2 : k, f = args if isinstance(k, str) : k = bytes(k, "iso8859-1") args = (k, f) elif l == 3 : k, d, f = args if isinstance(k, 
str) : k = bytes(k, charset) if isinstance(d, str) : d = bytes(d, charset) args =(k, d, f) v = self._dbcursor.get(*args, **kwargs) if v is not None : k, v = v if isinstance(k, bytes) : k = k.decode(charset) v = (k, v.decode(charset)) return v def get_both(self, key, value) : if isinstance(key, str) : key = bytes(key, charset) if isinstance(value, str) : value = bytes(value, charset) v=self._dbcursor.get_both(key, value) return self._fix(v) class dup_cursor_py3k(cursor_py3k) : def __init__(self, dbcursor) : self._dbcursor = dbcursor class DB_py3k(object) : def __init__(self, *args, **kwargs) : args2=[] for i in args : if isinstance(i, DBEnv_py3k) : i = i._dbenv args2.append(i) args = tuple(args2) for k, v in kwargs.items() : if isinstance(v, DBEnv_py3k) : kwargs[k] = v._dbenv self._db = bsddb._db.DB_orig(*args, **kwargs) def __contains__(self, k) : if isinstance(k, str) : k = bytes(k, charset) return getattr(self._db, "has_key")(k) def __getitem__(self, k) : if isinstance(k, str) : k = bytes(k, charset) v = self._db[k] if v is not None : v = v.decode(charset) return v def __setitem__(self, k, v) : if isinstance(k, str) : k = bytes(k, charset) if isinstance(v, str) : v = bytes(v, charset) self._db[k] = v def __delitem__(self, k) : if isinstance(k, str) : k = bytes(k, charset) del self._db[k] def __getattr__(self, v) : return getattr(self._db, v) def __len__(self) : return len(self._db) def has_key(self, k, txn=None) : if isinstance(k, str) : k = bytes(k, charset) return self._db.has_key(k, txn=txn) def set_re_delim(self, c) : if isinstance(c, str) : # We can use a numeric value byte too c = bytes(c, charset) return self._db.set_re_delim(c) def set_re_pad(self, c) : if isinstance(c, str) : # We can use a numeric value byte too c = bytes(c, charset) return self._db.set_re_pad(c) def get_re_source(self) : source = self._db.get_re_source() return source.decode(charset) def put(self, key, data, txn=None, flags=0, dlen=-1, doff=-1) : if isinstance(key, str) : key = bytes(key, charset) if isinstance(data, str) : value = bytes(data, charset) return self._db.put(key, data, flags=flags, txn=txn, dlen=dlen, doff=doff) def append(self, value, txn=None) : if isinstance(value, str) : value = bytes(value, charset) return self._db.append(value, txn=txn) def get_size(self, key) : if isinstance(key, str) : key = bytes(key, charset) return self._db.get_size(key) def exists(self, key, *args, **kwargs) : if isinstance(key, str) : key = bytes(key, charset) return self._db.exists(key, *args, **kwargs) def get(self, key, default="MagicCookie", txn=None, flags=0, dlen=-1, doff=-1) : if isinstance(key, str) : key = bytes(key, charset) if default != "MagicCookie" : # Magic for 'test_get_none.py' v=self._db.get(key, default=default, txn=txn, flags=flags, dlen=dlen, doff=doff) else : v=self._db.get(key, txn=txn, flags=flags, dlen=dlen, doff=doff) if (v is not None) and isinstance(v, bytes) : v = v.decode(charset) return v def pget(self, key, txn=None) : if isinstance(key, str) : key = bytes(key, charset) v=self._db.pget(key, txn=txn) if v is not None : v1, v2 = v if isinstance(v1, bytes) : v1 = v1.decode(charset) v = (v1, v2.decode(charset)) return v def get_both(self, key, value, txn=None, flags=0) : if isinstance(key, str) : key = bytes(key, charset) if isinstance(value, str) : value = bytes(value, charset) v=self._db.get_both(key, value, txn=txn, flags=flags) if v is not None : v = v.decode(charset) return v def delete(self, key, txn=None) : if isinstance(key, str) : key = bytes(key, charset) return 
self._db.delete(key, txn=txn) def keys(self) : k = self._db.keys() if len(k) and isinstance(k[0], bytes) : return [i.decode(charset) for i in self._db.keys()] else : return k def items(self) : data = self._db.items() if not len(data) : return data data2 = [] for k, v in data : if isinstance(k, bytes) : k = k.decode(charset) data2.append((k, v.decode(charset))) return data2 def associate(self, secondarydb, callback, flags=0, txn=None) : class associate_callback(object) : def __init__(self, callback) : self._callback = callback def callback(self, key, data) : if isinstance(key, str) : key = key.decode(charset) data = data.decode(charset) key = self._callback(key, data) if (key != bsddb._db.DB_DONOTINDEX) : if isinstance(key, str) : key = bytes(key, charset) elif isinstance(key, list) : key2 = [] for i in key : if isinstance(i, str) : i = bytes(i, charset) key2.append(i) key = key2 return key return self._db.associate(secondarydb._db, associate_callback(callback).callback, flags=flags, txn=txn) def cursor(self, txn=None, flags=0) : return cursor_py3k(self._db, txn=txn, flags=flags) def join(self, cursor_list) : cursor_list = [i._dbcursor for i in cursor_list] return dup_cursor_py3k(self._db.join(cursor_list)) class DBEnv_py3k(object) : def __init__(self, *args, **kwargs) : self._dbenv = bsddb._db.DBEnv_orig(*args, **kwargs) def __getattr__(self, v) : return getattr(self._dbenv, v) def log_cursor(self, flags=0) : return logcursor_py3k(self._dbenv) def get_lg_dir(self) : return self._dbenv.get_lg_dir().decode(charset) def get_tmp_dir(self) : return self._dbenv.get_tmp_dir().decode(charset) def get_data_dirs(self) : return tuple( (i.decode(charset) for i in self._dbenv.get_data_dirs())) class DBSequence_py3k(object) : def __init__(self, db, *args, **kwargs) : self._db=db self._dbsequence = bsddb._db.DBSequence_orig(db._db, *args, **kwargs) def __getattr__(self, v) : return getattr(self._dbsequence, v) def open(self, key, *args, **kwargs) : return self._dbsequence.open(bytes(key, charset), *args, **kwargs) def get_key(self) : return self._dbsequence.get_key().decode(charset) def get_dbp(self) : return self._db import string string.letters=[chr(i) for i in xrange(65,91)] bsddb._db.DBEnv_orig = bsddb._db.DBEnv bsddb._db.DB_orig = bsddb._db.DB bsddb._db.DBSequence_orig = bsddb._db.DBSequence def do_proxy_db_py3k(flag) : flag2 = do_proxy_db_py3k.flag do_proxy_db_py3k.flag = flag if flag : bsddb.DBEnv = bsddb.db.DBEnv = bsddb._db.DBEnv = DBEnv_py3k bsddb.DB = bsddb.db.DB = bsddb._db.DB = DB_py3k bsddb._db.DBSequence = DBSequence_py3k else : bsddb.DBEnv = bsddb.db.DBEnv = bsddb._db.DBEnv = bsddb._db.DBEnv_orig bsddb.DB = bsddb.db.DB = bsddb._db.DB = bsddb._db.DB_orig bsddb._db.DBSequence = bsddb._db.DBSequence_orig return flag2 do_proxy_db_py3k.flag = False do_proxy_db_py3k(True) from bsddb3 import db, dbtables, dbutils, dbshelve, \ hashopen, btopen, rnopen, dbobj if sys.version_info[0] < 3 : from test import test_support else : from test import support as test_support try: if sys.version_info[0] < 3 : from threading import Thread, currentThread del Thread, currentThread else : from threading import Thread, current_thread del Thread, current_thread have_threads = True except ImportError: have_threads = False verbose = 0 if 'verbose' in sys.argv: verbose = 1 sys.argv.remove('verbose') if 'silent' in sys.argv: # take care of old flag, just in case verbose = 0 sys.argv.remove('silent') def print_versions(): print print '-=' * 38 print db.DB_VERSION_STRING print 'bsddb.db.version(): %s' % (db.version(), ) 
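    # Note: db.version() returns a (major, minor, patch) tuple, so it can be
    # compared directly against tuples such as (5, 0) in the check below.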
if db.version() >= (5, 0) : print 'bsddb.db.full_version(): %s' %repr(db.full_version()) print 'bsddb.db.__version__: %s' % db.__version__ # Workaround for allowing generating an EGGs as a ZIP files. suffix="__" print 'py module: %s' % getattr(bsddb, "__file"+suffix) print 'extension module: %s' % getattr(bsddb, "__file"+suffix) print 'Test working dir: %s' % get_test_path_prefix() import platform print 'python version: %s %s' % \ (sys.version.replace("\r", "").replace("\n", ""), \ platform.architecture()[0]) print 'My pid: %s' % os.getpid() print '-=' * 38 def get_new_path(name) : get_new_path.mutex.acquire() try : import os path=os.path.join(get_new_path.prefix, name+"_"+str(os.getpid())+"_"+str(get_new_path.num)) get_new_path.num+=1 finally : get_new_path.mutex.release() return path def get_new_environment_path() : path=get_new_path("environment") import os try: os.makedirs(path,mode=0700) except os.error: test_support.rmtree(path) os.makedirs(path) return path def get_new_database_path() : path=get_new_path("database") import os if os.path.exists(path) : os.remove(path) return path # This path can be overriden via "set_test_path_prefix()". import os, os.path get_new_path.prefix=os.path.join(os.environ.get("TMPDIR", os.path.join(os.sep,"tmp")), "z-Berkeley_DB") get_new_path.num=0 def get_test_path_prefix() : return get_new_path.prefix def set_test_path_prefix(path) : get_new_path.prefix=path def remove_test_path_directory() : test_support.rmtree(get_new_path.prefix) if have_threads : import threading get_new_path.mutex=threading.Lock() del threading else : class Lock(object) : def acquire(self) : pass def release(self) : pass get_new_path.mutex=Lock() del Lock class PrintInfoFakeTest(unittest.TestCase): def testPrintVersions(self): print_versions() # This little hack is for when this module is run as main and all the # other modules import it so they will still be able to get the right # verbose setting. It's confusing but it works. if sys.version_info[0] < 3 : import test_all test_all.verbose = verbose else : import sys print >>sys.stderr, "Work to do!" def suite(module_prefix='', timing_check=None): test_modules = [ 'test_associate', 'test_basics', 'test_dbenv', 'test_db', 'test_compare', 'test_compat', 'test_cursor_pget_bug', 'test_dbobj', 'test_dbshelve', 'test_dbtables', 'test_distributed_transactions', 'test_early_close', 'test_fileid', 'test_get_none', 'test_join', 'test_lock', 'test_misc', 'test_pickle', 'test_queue', 'test_recno', 'test_replication', 'test_sequence', 'test_thread', ] alltests = unittest.TestSuite() for name in test_modules: #module = __import__(name) # Do it this way so that suite may be called externally via # python's Lib/test/test_bsddb3. module = __import__(module_prefix+name, globals(), locals(), name) alltests.addTest(module.test_suite()) if timing_check: alltests.addTest(unittest.makeSuite(timing_check)) return alltests def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(PrintInfoFakeTest)) return suite if __name__ == '__main__': print_versions() unittest.main(defaultTest='suite') bsddb3-6.1.0/Lib/bsddb/test/test_dbobj.py0000644000000000000000000000767112363167637020102 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. 
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ import os, string import unittest from test_all import db, dbobj, test_support, get_new_environment_path, \ get_new_database_path #---------------------------------------------------------------------- class dbobjTestCase(unittest.TestCase): """Verify that dbobj.DB and dbobj.DBEnv work properly""" db_name = 'test-dbobj.db' def setUp(self): self.homeDir = get_new_environment_path() def tearDown(self): if hasattr(self, 'db'): del self.db if hasattr(self, 'env'): del self.env test_support.rmtree(self.homeDir) def test01_both(self): class TestDBEnv(dbobj.DBEnv): pass class TestDB(dbobj.DB): def put(self, key, *args, **kwargs): key = key.upper() # call our parent classes put method with an upper case key return dbobj.DB.put(self, key, *args, **kwargs) self.env = TestDBEnv() self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL) self.db = TestDB(self.env) self.db.open(self.db_name, db.DB_HASH, db.DB_CREATE) self.db.put('spam', 'eggs') self.assertEqual(self.db.get('spam'), None, "overridden dbobj.DB.put() method failed [1]") self.assertEqual(self.db.get('SPAM'), 'eggs', "overridden dbobj.DB.put() method failed [2]") self.db.close() self.env.close() def test02_dbobj_dict_interface(self): self.env = dbobj.DBEnv() self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL) self.db = dbobj.DB(self.env) self.db.open(self.db_name+'02', db.DB_HASH, db.DB_CREATE) # __setitem__ self.db['spam'] = 'eggs' # __len__ self.assertEqual(len(self.db), 1) # __getitem__ self.assertEqual(self.db['spam'], 'eggs') # __del__ del self.db['spam'] self.assertEqual(self.db.get('spam'), None, "dbobj __del__ failed") self.db.close() self.env.close() def test03_dbobj_type_before_open(self): # Ensure this doesn't cause a segfault. self.assertRaises(db.DBInvalidArgError, db.DB().type) #---------------------------------------------------------------------- def test_suite(): return unittest.makeSuite(dbobjTestCase) if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib/bsddb/test/test_associate.py0000644000000000000000000004033412363167637020766 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. 
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """ TestCases for DB.associate. """ import sys, os, string import time from pprint import pprint import unittest from test_all import db, dbshelve, test_support, verbose, have_threads, \ get_new_environment_path #---------------------------------------------------------------------- musicdata = { 1 : ("Bad English", "The Price Of Love", "Rock"), 2 : ("DNA featuring Suzanne Vega", "Tom's Diner", "Rock"), 3 : ("George Michael", "Praying For Time", "Rock"), 4 : ("Gloria Estefan", "Here We Are", "Rock"), 5 : ("Linda Ronstadt", "Don't Know Much", "Rock"), 6 : ("Michael Bolton", "How Am I Supposed To Live Without You", "Blues"), 7 : ("Paul Young", "Oh Girl", "Rock"), 8 : ("Paula Abdul", "Opposites Attract", "Rock"), 9 : ("Richard Marx", "Should've Known Better", "Rock"), 10: ("Rod Stewart", "Forever Young", "Rock"), 11: ("Roxette", "Dangerous", "Rock"), 12: ("Sheena Easton", "The Lover In Me", "Rock"), 13: ("Sinead O'Connor", "Nothing Compares 2 U", "Rock"), 14: ("Stevie B.", "Because I Love You", "Rock"), 15: ("Taylor Dayne", "Love Will Lead You Back", "Rock"), 16: ("The Bangles", "Eternal Flame", "Rock"), 17: ("Wilson Phillips", "Release Me", "Rock"), 18: ("Billy Joel", "Blonde Over Blue", "Rock"), 19: ("Billy Joel", "Famous Last Words", "Rock"), 20: ("Billy Joel", "Lullabye (Goodnight, My Angel)", "Rock"), 21: ("Billy Joel", "The River Of Dreams", "Rock"), 22: ("Billy Joel", "Two Thousand Years", "Rock"), 23: ("Janet Jackson", "Alright", "Rock"), 24: ("Janet Jackson", "Black Cat", "Rock"), 25: ("Janet Jackson", "Come Back To Me", "Rock"), 26: ("Janet Jackson", "Escapade", "Rock"), 27: ("Janet Jackson", "Love Will Never Do (Without You)", "Rock"), 28: ("Janet Jackson", "Miss You Much", "Rock"), 29: ("Janet Jackson", "Rhythm Nation", "Rock"), 30: ("Janet Jackson", "State Of The World", "Rock"), 31: ("Janet Jackson", "The Knowledge", "Rock"), 32: ("Spyro Gyra", "End of Romanticism", "Jazz"), 33: ("Spyro Gyra", "Heliopolis", "Jazz"), 34: ("Spyro Gyra", "Jubilee", "Jazz"), 35: ("Spyro Gyra", "Little Linda", "Jazz"), 36: ("Spyro Gyra", "Morning Dance", "Jazz"), 37: ("Spyro Gyra", "Song for Lorraine", 
"Jazz"), 38: ("Yes", "Owner Of A Lonely Heart", "Rock"), 39: ("Yes", "Rhythm Of Love", "Rock"), 40: ("Cusco", "Dream Catcher", "New Age"), 41: ("Cusco", "Geronimos Laughter", "New Age"), 42: ("Cusco", "Ghost Dance", "New Age"), 43: ("Blue Man Group", "Drumbone", "New Age"), 44: ("Blue Man Group", "Endless Column", "New Age"), 45: ("Blue Man Group", "Klein Mandelbrot", "New Age"), 46: ("Kenny G", "Silhouette", "Jazz"), 47: ("Sade", "Smooth Operator", "Jazz"), 48: ("David Arkenstone", "Papillon (On The Wings Of The Butterfly)", "New Age"), 49: ("David Arkenstone", "Stepping Stars", "New Age"), 50: ("David Arkenstone", "Carnation Lily Lily Rose", "New Age"), 51: ("David Lanz", "Behind The Waterfall", "New Age"), 52: ("David Lanz", "Cristofori's Dream", "New Age"), 53: ("David Lanz", "Heartsounds", "New Age"), 54: ("David Lanz", "Leaves on the Seine", "New Age"), 99: ("unknown artist", "Unnamed song", "Unknown"), } #---------------------------------------------------------------------- class AssociateErrorTestCase(unittest.TestCase): def setUp(self): self.filename = self.__class__.__name__ + '.db' self.homeDir = get_new_environment_path() self.env = db.DBEnv() self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL) def tearDown(self): self.env.close() self.env = None test_support.rmtree(self.homeDir) def test00_associateDBError(self): if verbose: print '\n', '-=' * 30 print "Running %s.test00_associateDBError..." % \ self.__class__.__name__ dupDB = db.DB(self.env) dupDB.set_flags(db.DB_DUP) dupDB.open(self.filename, "primary", db.DB_BTREE, db.DB_CREATE) secDB = db.DB(self.env) secDB.open(self.filename, "secondary", db.DB_BTREE, db.DB_CREATE) # dupDB has been configured to allow duplicates, it can't # associate with a secondary. Berkeley DB will return an error. try: def f(a,b): return a+b dupDB.associate(secDB, f) except db.DBError: # good secDB.close() dupDB.close() else: secDB.close() dupDB.close() self.fail("DBError exception was expected") #---------------------------------------------------------------------- class AssociateTestCase(unittest.TestCase): keytype = '' envFlags = 0 dbFlags = 0 def setUp(self): self.filename = self.__class__.__name__ + '.db' self.homeDir = get_new_environment_path() self.env = db.DBEnv() self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK | db.DB_THREAD | self.envFlags) def tearDown(self): self.closeDB() self.env.close() self.env = None test_support.rmtree(self.homeDir) def addDataToDB(self, d, txn=None): for key, value in musicdata.items(): if type(self.keytype) == type(''): key = "%02d" % key d.put(key, '|'.join(value), txn=txn) def createDB(self, txn=None): self.cur = None self.secDB = None self.primary = db.DB(self.env) self.primary.set_get_returns_none(2) self.primary.open(self.filename, "primary", self.dbtype, db.DB_CREATE | db.DB_THREAD | self.dbFlags, txn=txn) def closeDB(self): if self.cur: self.cur.close() self.cur = None if self.secDB: self.secDB.close() self.secDB = None self.primary.close() self.primary = None def getDB(self): return self.primary def _associateWithDB(self, getGenre): self.createDB() self.secDB = db.DB(self.env) self.secDB.set_flags(db.DB_DUP) self.secDB.set_get_returns_none(2) self.secDB.open(self.filename, "secondary", db.DB_BTREE, db.DB_CREATE | db.DB_THREAD | self.dbFlags) self.getDB().associate(self.secDB, getGenre) self.addDataToDB(self.getDB()) self.finish_test(self.secDB) def test01_associateWithDB(self): if verbose: print '\n', '-=' * 30 print "Running %s.test01_associateWithDB..." 
% \ self.__class__.__name__ return self._associateWithDB(self.getGenre) def _associateAfterDB(self, getGenre) : self.createDB() self.addDataToDB(self.getDB()) self.secDB = db.DB(self.env) self.secDB.set_flags(db.DB_DUP) self.secDB.open(self.filename, "secondary", db.DB_BTREE, db.DB_CREATE | db.DB_THREAD | self.dbFlags) # adding the DB_CREATE flag will cause it to index existing records self.getDB().associate(self.secDB, getGenre, db.DB_CREATE) self.finish_test(self.secDB) def test02_associateAfterDB(self): if verbose: print '\n', '-=' * 30 print "Running %s.test02_associateAfterDB..." % \ self.__class__.__name__ return self._associateAfterDB(self.getGenre) def test03_associateWithDB(self): if verbose: print '\n', '-=' * 30 print "Running %s.test03_associateWithDB..." % \ self.__class__.__name__ return self._associateWithDB(self.getGenreList) def test04_associateAfterDB(self): if verbose: print '\n', '-=' * 30 print "Running %s.test04_associateAfterDB..." % \ self.__class__.__name__ return self._associateAfterDB(self.getGenreList) def finish_test(self, secDB, txn=None): # 'Blues' should not be in the secondary database vals = secDB.pget('Blues', txn=txn) self.assertEqual(vals, None, vals) vals = secDB.pget('Unknown', txn=txn) self.assertTrue(vals[0] == 99 or vals[0] == '99', vals) vals[1].index('Unknown') vals[1].index('Unnamed') vals[1].index('unknown') if verbose: print "Primary key traversal:" self.cur = self.getDB().cursor(txn) count = 0 rec = self.cur.first() while rec is not None: if type(self.keytype) == type(''): self.assertTrue(int(rec[0])) # for primary db, key is a number else: self.assertTrue(rec[0] and type(rec[0]) == type(0)) count = count + 1 if verbose: print rec rec = getattr(self.cur, "next")() self.assertEqual(count, len(musicdata)) # all items accounted for if verbose: print "Secondary key traversal:" self.cur = secDB.cursor(txn) count = 0 # test cursor pget vals = self.cur.pget('Unknown', flags=db.DB_LAST) self.assertTrue(vals[1] == 99 or vals[1] == '99', vals) self.assertEqual(vals[0], 'Unknown') vals[2].index('Unknown') vals[2].index('Unnamed') vals[2].index('unknown') vals = self.cur.pget('Unknown', data='wrong value', flags=db.DB_GET_BOTH) self.assertEqual(vals, None, vals) rec = self.cur.first() self.assertEqual(rec[0], "Jazz") while rec is not None: count = count + 1 if verbose: print rec rec = getattr(self.cur, "next")() # all items accounted for EXCEPT for 1 with "Blues" genre self.assertEqual(count, len(musicdata)-1) self.cur = None def getGenre(self, priKey, priData): self.assertEqual(type(priData), type("")) genre = priData.split('|')[2] if verbose: print 'getGenre key: %r data: %r' % (priKey, priData) if genre == 'Blues': return db.DB_DONOTINDEX else: return genre def getGenreList(self, priKey, PriData) : v = self.getGenre(priKey, PriData) if type(v) == type("") : v = [v] return v #---------------------------------------------------------------------- class AssociateHashTestCase(AssociateTestCase): dbtype = db.DB_HASH class AssociateBTreeTestCase(AssociateTestCase): dbtype = db.DB_BTREE class AssociateRecnoTestCase(AssociateTestCase): dbtype = db.DB_RECNO keytype = 0 #---------------------------------------------------------------------- class AssociateBTreeTxnTestCase(AssociateBTreeTestCase): envFlags = db.DB_INIT_TXN dbFlags = 0 def txn_finish_test(self, sDB, txn): try: self.finish_test(sDB, txn=txn) finally: if self.cur: self.cur.close() self.cur = None if txn: txn.commit() def test13_associate_in_transaction(self): if verbose: print '\n', '-=' * 30 print 
"Running %s.test13_associateAutoCommit..." % \ self.__class__.__name__ txn = self.env.txn_begin() try: self.createDB(txn=txn) self.secDB = db.DB(self.env) self.secDB.set_flags(db.DB_DUP) self.secDB.set_get_returns_none(2) self.secDB.open(self.filename, "secondary", db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, txn=txn) self.getDB().associate(self.secDB, self.getGenre, txn=txn) self.addDataToDB(self.getDB(), txn=txn) except: txn.abort() raise self.txn_finish_test(self.secDB, txn=txn) #---------------------------------------------------------------------- class ShelveAssociateTestCase(AssociateTestCase): def createDB(self): self.primary = dbshelve.open(self.filename, dbname="primary", dbenv=self.env, filetype=self.dbtype) def addDataToDB(self, d): for key, value in musicdata.items(): if type(self.keytype) == type(''): key = "%02d" % key d.put(key, value) # save the value as is this time def getGenre(self, priKey, priData): self.assertEqual(type(priData), type(())) if verbose: print 'getGenre key: %r data: %r' % (priKey, priData) genre = priData[2] if genre == 'Blues': return db.DB_DONOTINDEX else: return genre class ShelveAssociateHashTestCase(ShelveAssociateTestCase): dbtype = db.DB_HASH class ShelveAssociateBTreeTestCase(ShelveAssociateTestCase): dbtype = db.DB_BTREE class ShelveAssociateRecnoTestCase(ShelveAssociateTestCase): dbtype = db.DB_RECNO keytype = 0 #---------------------------------------------------------------------- class ThreadedAssociateTestCase(AssociateTestCase): def addDataToDB(self, d): t1 = Thread(target = self.writer1, args = (d, )) t2 = Thread(target = self.writer2, args = (d, )) t1.setDaemon(True) t2.setDaemon(True) t1.start() t2.start() t1.join() t2.join() def writer1(self, d): for key, value in musicdata.items(): if type(self.keytype) == type(''): key = "%02d" % key d.put(key, '|'.join(value)) def writer2(self, d): for x in range(100, 600): key = 'z%2d' % x value = [key] * 4 d.put(key, '|'.join(value)) class ThreadedAssociateHashTestCase(ShelveAssociateTestCase): dbtype = db.DB_HASH class ThreadedAssociateBTreeTestCase(ShelveAssociateTestCase): dbtype = db.DB_BTREE class ThreadedAssociateRecnoTestCase(ShelveAssociateTestCase): dbtype = db.DB_RECNO keytype = 0 #---------------------------------------------------------------------- def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(AssociateErrorTestCase)) suite.addTest(unittest.makeSuite(AssociateHashTestCase)) suite.addTest(unittest.makeSuite(AssociateBTreeTestCase)) suite.addTest(unittest.makeSuite(AssociateRecnoTestCase)) suite.addTest(unittest.makeSuite(AssociateBTreeTxnTestCase)) suite.addTest(unittest.makeSuite(ShelveAssociateHashTestCase)) suite.addTest(unittest.makeSuite(ShelveAssociateBTreeTestCase)) suite.addTest(unittest.makeSuite(ShelveAssociateRecnoTestCase)) if have_threads: suite.addTest(unittest.makeSuite(ThreadedAssociateHashTestCase)) suite.addTest(unittest.makeSuite(ThreadedAssociateBTreeTestCase)) suite.addTest(unittest.makeSuite(ThreadedAssociateRecnoTestCase)) return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib/bsddb/test/test_dbtables.py0000644000000000000000000003610112247674231020562 0ustar rootroot00000000000000#!/usr/bin/env python # #----------------------------------------------------------------------- # A test suite for the table interface built on bsddb.db #----------------------------------------------------------------------- # # Copyright (C) 2000, 2001 by Autonomous Zone Industries # Copyright (C) 2002 Gregory 
P. Smith # # March 20, 2000 # # License: This is free software. You may use this software for any # purpose including modification/redistribution, so long as # this header remains intact and that you do not claim any # rights of ownership or authorship of this software. This # software has been tested, but no warranty is expressed or # implied. # # -- Gregory P. Smith # # $Id: test_dbtables.py,v 53e5f052c511 2012/01/16 18:18:15 jcea $ import os, re, sys if sys.version_info[0] < 3 : try: import cPickle pickle = cPickle except ImportError: import pickle else : import pickle import unittest from test_all import db, dbtables, test_support, verbose, \ get_new_environment_path, get_new_database_path #---------------------------------------------------------------------- class TableDBTestCase(unittest.TestCase): db_name = 'test-table.db' def setUp(self): import sys if sys.version_info[0] >= 3 : from test_all import do_proxy_db_py3k self._flag_proxy_db_py3k = do_proxy_db_py3k(False) self.testHomeDir = get_new_environment_path() self.tdb = dbtables.bsdTableDB( filename='tabletest.db', dbhome=self.testHomeDir, create=1) def tearDown(self): self.tdb.close() import sys if sys.version_info[0] >= 3 : from test_all import do_proxy_db_py3k do_proxy_db_py3k(self._flag_proxy_db_py3k) test_support.rmtree(self.testHomeDir) def test01(self): tabname = "test01" colname = 'cool numbers' try: self.tdb.Drop(tabname) except dbtables.TableDBError: pass self.tdb.CreateTable(tabname, [colname]) import sys if sys.version_info[0] < 3 : self.tdb.Insert(tabname, {colname: pickle.dumps(3.14159, 1)}) else : self.tdb.Insert(tabname, {colname: pickle.dumps(3.14159, 1).decode("iso8859-1")}) # 8 bits if verbose: self.tdb._db_print() values = self.tdb.Select( tabname, [colname], conditions={colname: None}) import sys if sys.version_info[0] < 3 : colval = pickle.loads(values[0][colname]) else : colval = pickle.loads(bytes(values[0][colname], "iso8859-1")) self.assertTrue(colval > 3.141) self.assertTrue(colval < 3.142) def test02(self): tabname = "test02" col0 = 'coolness factor' col1 = 'but can it fly?' col2 = 'Species' import sys if sys.version_info[0] < 3 : testinfo = [ {col0: pickle.dumps(8, 1), col1: 'no', col2: 'Penguin'}, {col0: pickle.dumps(-1, 1), col1: 'no', col2: 'Turkey'}, {col0: pickle.dumps(9, 1), col1: 'yes', col2: 'SR-71A Blackbird'} ] else : testinfo = [ {col0: pickle.dumps(8, 1).decode("iso8859-1"), col1: 'no', col2: 'Penguin'}, {col0: pickle.dumps(-1, 1).decode("iso8859-1"), col1: 'no', col2: 'Turkey'}, {col0: pickle.dumps(9, 1).decode("iso8859-1"), col1: 'yes', col2: 'SR-71A Blackbird'} ] try: self.tdb.Drop(tabname) except dbtables.TableDBError: pass self.tdb.CreateTable(tabname, [col0, col1, col2]) for row in testinfo : self.tdb.Insert(tabname, row) import sys if sys.version_info[0] < 3 : values = self.tdb.Select(tabname, [col2], conditions={col0: lambda x: pickle.loads(x) >= 8}) else : values = self.tdb.Select(tabname, [col2], conditions={col0: lambda x: pickle.loads(bytes(x, "iso8859-1")) >= 8}) self.assertEqual(len(values), 2) if values[0]['Species'] == 'Penguin' : self.assertEqual(values[1]['Species'], 'SR-71A Blackbird') elif values[0]['Species'] == 'SR-71A Blackbird' : self.assertEqual(values[1]['Species'], 'Penguin') else : if verbose: print "values= %r" % (values,) raise RuntimeError("Wrong values returned!") def test03(self): tabname = "test03" try: self.tdb.Drop(tabname) except dbtables.TableDBError: pass if verbose: print '...before CreateTable...' 
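            # _db_print() is a dbtables debugging helper that dumps the raw
            # contents of the underlying database; it is only reached when
            # the suite runs in verbose mode.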
self.tdb._db_print() self.tdb.CreateTable(tabname, ['a', 'b', 'c', 'd', 'e']) if verbose: print '...after CreateTable...' self.tdb._db_print() self.tdb.Drop(tabname) if verbose: print '...after Drop...' self.tdb._db_print() self.tdb.CreateTable(tabname, ['a', 'b', 'c', 'd', 'e']) try: self.tdb.Insert(tabname, {'a': "", 'e': pickle.dumps([{4:5, 6:7}, 'foo'], 1), 'f': "Zero"}) self.fail('Expected an exception') except dbtables.TableDBError: pass try: self.tdb.Select(tabname, [], conditions={'foo': '123'}) self.fail('Expected an exception') except dbtables.TableDBError: pass self.tdb.Insert(tabname, {'a': '42', 'b': "bad", 'c': "meep", 'e': 'Fuzzy wuzzy was a bear'}) self.tdb.Insert(tabname, {'a': '581750', 'b': "good", 'd': "bla", 'c': "black", 'e': 'fuzzy was here'}) self.tdb.Insert(tabname, {'a': '800000', 'b': "good", 'd': "bla", 'c': "black", 'e': 'Fuzzy wuzzy is a bear'}) if verbose: self.tdb._db_print() # this should return two rows values = self.tdb.Select(tabname, ['b', 'a', 'd'], conditions={'e': re.compile('wuzzy').search, 'a': re.compile('^[0-9]+$').match}) self.assertEqual(len(values), 2) # now lets delete one of them and try again self.tdb.Delete(tabname, conditions={'b': dbtables.ExactCond('good')}) values = self.tdb.Select( tabname, ['a', 'd', 'b'], conditions={'e': dbtables.PrefixCond('Fuzzy')}) self.assertEqual(len(values), 1) self.assertEqual(values[0]['d'], None) values = self.tdb.Select(tabname, ['b'], conditions={'c': lambda c: c == 'meep'}) self.assertEqual(len(values), 1) self.assertEqual(values[0]['b'], "bad") def test04_MultiCondSelect(self): tabname = "test04_MultiCondSelect" try: self.tdb.Drop(tabname) except dbtables.TableDBError: pass self.tdb.CreateTable(tabname, ['a', 'b', 'c', 'd', 'e']) try: self.tdb.Insert(tabname, {'a': "", 'e': pickle.dumps([{4:5, 6:7}, 'foo'], 1), 'f': "Zero"}) self.fail('Expected an exception') except dbtables.TableDBError: pass self.tdb.Insert(tabname, {'a': "A", 'b': "B", 'c': "C", 'd': "D", 'e': "E"}) self.tdb.Insert(tabname, {'a': "-A", 'b': "-B", 'c': "-C", 'd': "-D", 'e': "-E"}) self.tdb.Insert(tabname, {'a': "A-", 'b': "B-", 'c': "C-", 'd': "D-", 'e': "E-"}) if verbose: self.tdb._db_print() # This select should return 0 rows. it is designed to test # the bug identified and fixed in sourceforge bug # 590449 # (Big Thanks to "Rob Tillotson (n9mtb)" for tracking this down # and supplying a fix!! This one caused many headaches to say # the least...) values = self.tdb.Select(tabname, ['b', 'a', 'd'], conditions={'e': dbtables.ExactCond('E'), 'a': dbtables.ExactCond('A'), 'd': dbtables.PrefixCond('-') } ) self.assertEqual(len(values), 0, values) def test_CreateOrExtend(self): tabname = "test_CreateOrExtend" self.tdb.CreateOrExtendTable( tabname, ['name', 'taste', 'filling', 'alcohol content', 'price']) try: self.tdb.Insert(tabname, {'taste': 'crap', 'filling': 'no', 'is it Guinness?': 'no'}) self.fail("Insert should've failed due to bad column name") except: pass self.tdb.CreateOrExtendTable(tabname, ['name', 'taste', 'is it Guinness?']) # these should both succeed as the table should contain the union of both sets of columns. 
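        # (CreateOrExtendTable() merges the new column list into the existing
        # table definition instead of failing, so at this point the table has
        # all six columns named across the two calls above.)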
        self.tdb.Insert(tabname, {'taste': 'crap', 'filling': 'no',
                                  'is it Guinness?': 'no'})
        self.tdb.Insert(tabname, {'taste': 'great', 'filling': 'yes',
                                  'is it Guinness?': 'yes',
                                  'name': 'Guinness'})

    def test_CondObjs(self):
        tabname = "test_CondObjs"

        self.tdb.CreateTable(tabname, ['a', 'b', 'c', 'd', 'e', 'p'])

        self.tdb.Insert(tabname, {'a': "the letter A",
                                  'b': "the letter B",
                                  'c': "is for cookie"})
        self.tdb.Insert(tabname, {'a': "is for aardvark",
                                  'e': "the letter E",
                                  'c': "is for cookie",
                                  'd': "is for dog"})
        self.tdb.Insert(tabname, {'a': "the letter A",
                                  'e': "the letter E",
                                  'c': "is for cookie",
                                  'p': "is for Python"})

        values = self.tdb.Select(
            tabname, ['p', 'e'],
            conditions={'e': dbtables.PrefixCond('the l')})
        self.assertEqual(len(values), 2, values)
        self.assertEqual(values[0]['e'], values[1]['e'], values)
        self.assertNotEqual(values[0]['p'], values[1]['p'], values)

        values = self.tdb.Select(
            tabname, ['d', 'a'],
            conditions={'a': dbtables.LikeCond('%aardvark%')})
        self.assertEqual(len(values), 1, values)
        self.assertEqual(values[0]['d'], "is for dog", values)
        self.assertEqual(values[0]['a'], "is for aardvark", values)

        values = self.tdb.Select(tabname, None,
                                 {'b': dbtables.Cond(),
                                  'e': dbtables.LikeCond('%letter%'),
                                  'a': dbtables.PrefixCond('is'),
                                  'd': dbtables.ExactCond('is for dog'),
                                  'c': dbtables.PrefixCond('is for'),
                                  'p': lambda s: not s})
        self.assertEqual(len(values), 1, values)
        self.assertEqual(values[0]['d'], "is for dog", values)
        self.assertEqual(values[0]['a'], "is for aardvark", values)

    def test_Delete(self):
        tabname = "test_Delete"
        self.tdb.CreateTable(tabname, ['x', 'y', 'z'])

        # prior to 2001-05-09 there was a bug where Delete() would
        # fail if it encountered any rows that did not have values in
        # every column.
        # Hunted and Squashed by (Jukka Santala - donwulff@nic.fi)
        self.tdb.Insert(tabname, {'x': 'X1', 'y': 'Y1'})
        self.tdb.Insert(tabname, {'x': 'X2', 'y': 'Y2', 'z': 'Z2'})

        self.tdb.Delete(tabname, conditions={'x': dbtables.PrefixCond('X')})
        values = self.tdb.Select(tabname, ['y'],
                                 conditions={'x': dbtables.PrefixCond('X')})
        self.assertEqual(len(values), 0)

    def test_Modify(self):
        tabname = "test_Modify"
        self.tdb.CreateTable(tabname, ['Name', 'Type', 'Access'])

        self.tdb.Insert(tabname, {'Name': 'Index to MP3 files.doc',
                                  'Type': 'Word', 'Access': '8'})
        self.tdb.Insert(tabname, {'Name': 'Nifty.MP3', 'Access': '1'})
        self.tdb.Insert(tabname, {'Type': 'Unknown', 'Access': '0'})

        def set_type(type):
            if type is None:
                return 'MP3'
            return type

        def increment_access(count):
            return str(int(count)+1)

        def remove_value(value):
            return None

        self.tdb.Modify(tabname,
                        conditions={'Access': dbtables.ExactCond('0')},
                        mappings={'Access': remove_value})
        self.tdb.Modify(tabname,
                        conditions={'Name': dbtables.LikeCond('%MP3%')},
                        mappings={'Type': set_type})
        self.tdb.Modify(tabname,
                        conditions={'Name': dbtables.LikeCond('%')},
                        mappings={'Access': increment_access})

        try:
            self.tdb.Modify(tabname,
                            conditions={'Name': dbtables.LikeCond('%')},
                            mappings={'Access': 'What is your quest?'})
        except TypeError:
            # success, the string value in mappings isn't callable
            pass
        else:
            raise RuntimeError, "why was TypeError not raised for bad callable?"
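        # Modify() applies each mapping callable to the current value of the
        # named column in every row matching the conditions; a callable that
        # returns None removes the value, which the checks below verify.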
# Delete key in select conditions values = self.tdb.Select( tabname, None, conditions={'Type': dbtables.ExactCond('Unknown')}) self.assertEqual(len(values), 1, values) self.assertEqual(values[0]['Name'], None, values) self.assertEqual(values[0]['Access'], None, values) # Modify value by select conditions values = self.tdb.Select( tabname, None, conditions={'Name': dbtables.ExactCond('Nifty.MP3')}) self.assertEqual(len(values), 1, values) self.assertEqual(values[0]['Type'], "MP3", values) self.assertEqual(values[0]['Access'], "2", values) # Make sure change applied only to select conditions values = self.tdb.Select( tabname, None, conditions={'Name': dbtables.LikeCond('%doc%')}) self.assertEqual(len(values), 1, values) self.assertEqual(values[0]['Type'], "Word", values) self.assertEqual(values[0]['Access'], "9", values) def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(TableDBTestCase)) return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib/bsddb/test/test_thread.py0000644000000000000000000004145712363167637020271 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """TestCases for multi-threaded access to a DB. 
""" import os import sys import time import errno from random import random DASH = '-' try: WindowsError except NameError: class WindowsError(Exception): pass import unittest from test_all import db, dbutils, test_support, verbose, have_threads, \ get_new_environment_path, get_new_database_path if have_threads : from threading import Thread if sys.version_info[0] < 3 : from threading import currentThread else : from threading import current_thread as currentThread #---------------------------------------------------------------------- class BaseThreadedTestCase(unittest.TestCase): dbtype = db.DB_UNKNOWN # must be set in derived class dbopenflags = 0 dbsetflags = 0 envflags = 0 def setUp(self): if verbose: dbutils._deadlock_VerboseFile = sys.stdout self.homeDir = get_new_environment_path() self.env = db.DBEnv() self.setEnvOpts() self.env.open(self.homeDir, self.envflags | db.DB_CREATE) self.filename = self.__class__.__name__ + '.db' self.d = db.DB(self.env) if self.dbsetflags: self.d.set_flags(self.dbsetflags) self.d.open(self.filename, self.dbtype, self.dbopenflags|db.DB_CREATE) def tearDown(self): self.d.close() self.env.close() test_support.rmtree(self.homeDir) def setEnvOpts(self): pass def makeData(self, key): return DASH.join([key] * 5) #---------------------------------------------------------------------- class ConcurrentDataStoreBase(BaseThreadedTestCase): dbopenflags = db.DB_THREAD envflags = db.DB_THREAD | db.DB_INIT_CDB | db.DB_INIT_MPOOL readers = 0 # derived class should set writers = 0 records = 1000 def test01_1WriterMultiReaders(self): if verbose: print '\n', '-=' * 30 print "Running %s.test01_1WriterMultiReaders..." % \ self.__class__.__name__ keys=range(self.records) import random random.shuffle(keys) records_per_writer=self.records//self.writers readers_per_writer=self.readers//self.writers self.assertEqual(self.records,self.writers*records_per_writer) self.assertEqual(self.readers,self.writers*readers_per_writer) self.assertTrue((records_per_writer%readers_per_writer)==0) readers = [] for x in xrange(self.readers): rt = Thread(target = self.readerThread, args = (self.d, x), name = 'reader %d' % x, )#verbose = verbose) if sys.version_info[0] < 3 : rt.setDaemon(True) else : rt.daemon = True readers.append(rt) writers=[] for x in xrange(self.writers): a=keys[records_per_writer*x:records_per_writer*(x+1)] a.sort() # Generate conflicts b=readers[readers_per_writer*x:readers_per_writer*(x+1)] wt = Thread(target = self.writerThread, args = (self.d, a, b), name = 'writer %d' % x, )#verbose = verbose) writers.append(wt) for t in writers: if sys.version_info[0] < 3 : t.setDaemon(True) else : t.daemon = True t.start() for t in writers: t.join() for t in readers: t.join() def writerThread(self, d, keys, readers): if sys.version_info[0] < 3 : name = currentThread().getName() else : name = currentThread().name if verbose: print "%s: creating records %d - %d" % (name, start, stop) count=len(keys)//len(readers) count2=count for x in keys : key = '%04d' % x dbutils.DeadlockWrap(d.put, key, self.makeData(key), max_retries=12) if verbose and x % 100 == 0: print "%s: records %d - %d finished" % (name, start, x) count2-=1 if not count2 : readers.pop().start() count2=count if verbose: print "%s: finished creating records" % name if verbose: print "%s: thread finished" % name def readerThread(self, d, readerNum): if sys.version_info[0] < 3 : name = currentThread().getName() else : name = currentThread().name for i in xrange(5) : c = d.cursor() count = 0 rec = c.first() while rec: count += 1 
key, data = rec self.assertEqual(self.makeData(key), data) rec = c.next() if verbose: print "%s: found %d records" % (name, count) c.close() if verbose: print "%s: thread finished" % name class BTreeConcurrentDataStore(ConcurrentDataStoreBase): dbtype = db.DB_BTREE writers = 2 readers = 10 records = 1000 class HashConcurrentDataStore(ConcurrentDataStoreBase): dbtype = db.DB_HASH writers = 2 readers = 10 records = 1000 #---------------------------------------------------------------------- class SimpleThreadedBase(BaseThreadedTestCase): dbopenflags = db.DB_THREAD envflags = db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK readers = 10 writers = 2 records = 1000 def setEnvOpts(self): self.env.set_lk_detect(db.DB_LOCK_DEFAULT) def test02_SimpleLocks(self): if verbose: print '\n', '-=' * 30 print "Running %s.test02_SimpleLocks..." % self.__class__.__name__ keys=range(self.records) import random random.shuffle(keys) records_per_writer=self.records//self.writers readers_per_writer=self.readers//self.writers self.assertEqual(self.records,self.writers*records_per_writer) self.assertEqual(self.readers,self.writers*readers_per_writer) self.assertTrue((records_per_writer%readers_per_writer)==0) readers = [] for x in xrange(self.readers): rt = Thread(target = self.readerThread, args = (self.d, x), name = 'reader %d' % x, )#verbose = verbose) if sys.version_info[0] < 3 : rt.setDaemon(True) else : rt.daemon = True readers.append(rt) writers = [] for x in xrange(self.writers): a=keys[records_per_writer*x:records_per_writer*(x+1)] a.sort() # Generate conflicts b=readers[readers_per_writer*x:readers_per_writer*(x+1)] wt = Thread(target = self.writerThread, args = (self.d, a, b), name = 'writer %d' % x, )#verbose = verbose) writers.append(wt) for t in writers: if sys.version_info[0] < 3 : t.setDaemon(True) else : t.daemon = True t.start() for t in writers: t.join() for t in readers: t.join() def writerThread(self, d, keys, readers): if sys.version_info[0] < 3 : name = currentThread().getName() else : name = currentThread().name if verbose: print "%s: creating records %d - %d" % (name, start, stop) count=len(keys)//len(readers) count2=count for x in keys : key = '%04d' % x dbutils.DeadlockWrap(d.put, key, self.makeData(key), max_retries=12) if verbose and x % 100 == 0: print "%s: records %d - %d finished" % (name, start, x) count2-=1 if not count2 : readers.pop().start() count2=count if verbose: print "%s: thread finished" % name def readerThread(self, d, readerNum): if sys.version_info[0] < 3 : name = currentThread().getName() else : name = currentThread().name c = d.cursor() count = 0 rec = dbutils.DeadlockWrap(c.first, max_retries=10) while rec: count += 1 key, data = rec self.assertEqual(self.makeData(key), data) rec = dbutils.DeadlockWrap(c.next, max_retries=10) if verbose: print "%s: found %d records" % (name, count) c.close() if verbose: print "%s: thread finished" % name class BTreeSimpleThreaded(SimpleThreadedBase): dbtype = db.DB_BTREE class HashSimpleThreaded(SimpleThreadedBase): dbtype = db.DB_HASH #---------------------------------------------------------------------- class ThreadedTransactionsBase(BaseThreadedTestCase): dbopenflags = db.DB_THREAD | db.DB_AUTO_COMMIT envflags = (db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK | db.DB_INIT_LOG | db.DB_INIT_TXN ) readers = 0 writers = 0 records = 2000 txnFlag = 0 def setEnvOpts(self): #self.env.set_lk_detect(db.DB_LOCK_DEFAULT) pass def test03_ThreadedTransactions(self): if verbose: print '\n', '-=' * 30 print "Running 
%s.test03_ThreadedTransactions..." % \ self.__class__.__name__ keys=range(self.records) import random random.shuffle(keys) records_per_writer=self.records//self.writers readers_per_writer=self.readers//self.writers self.assertEqual(self.records,self.writers*records_per_writer) self.assertEqual(self.readers,self.writers*readers_per_writer) self.assertTrue((records_per_writer%readers_per_writer)==0) readers=[] for x in xrange(self.readers): rt = Thread(target = self.readerThread, args = (self.d, x), name = 'reader %d' % x, )#verbose = verbose) if sys.version_info[0] < 3 : rt.setDaemon(True) else : rt.daemon = True readers.append(rt) writers = [] for x in xrange(self.writers): a=keys[records_per_writer*x:records_per_writer*(x+1)] b=readers[readers_per_writer*x:readers_per_writer*(x+1)] wt = Thread(target = self.writerThread, args = (self.d, a, b), name = 'writer %d' % x, )#verbose = verbose) writers.append(wt) dt = Thread(target = self.deadlockThread) if sys.version_info[0] < 3 : dt.setDaemon(True) else : dt.daemon = True dt.start() for t in writers: if sys.version_info[0] < 3 : t.setDaemon(True) else : t.daemon = True t.start() for t in writers: t.join() for t in readers: t.join() self.doLockDetect = False dt.join() def writerThread(self, d, keys, readers): if sys.version_info[0] < 3 : name = currentThread().getName() else : name = currentThread().name count=len(keys)//len(readers) while len(keys): try: txn = self.env.txn_begin(None, self.txnFlag) keys2=keys[:count] for x in keys2 : key = '%04d' % x d.put(key, self.makeData(key), txn) if verbose and x % 100 == 0: print "%s: records %d - %d finished" % (name, start, x) txn.commit() keys=keys[count:] readers.pop().start() except (db.DBLockDeadlockError, db.DBLockNotGrantedError), val: if verbose: print "%s: Aborting transaction (%s)" % (name, val.args[1]) txn.abort() if verbose: print "%s: thread finished" % name def readerThread(self, d, readerNum): if sys.version_info[0] < 3 : name = currentThread().getName() else : name = currentThread().name finished = False while not finished: try: txn = self.env.txn_begin(None, self.txnFlag) c = d.cursor(txn) count = 0 rec = c.first() while rec: count += 1 key, data = rec self.assertEqual(self.makeData(key), data) rec = c.next() if verbose: print "%s: found %d records" % (name, count) c.close() txn.commit() finished = True except (db.DBLockDeadlockError, db.DBLockNotGrantedError), val: if verbose: print "%s: Aborting transaction (%s)" % (name, val.args[1]) c.close() txn.abort() if verbose: print "%s: thread finished" % name def deadlockThread(self): self.doLockDetect = True while self.doLockDetect: time.sleep(0.05) try: aborted = self.env.lock_detect( db.DB_LOCK_RANDOM, db.DB_LOCK_CONFLICT) if verbose and aborted: print "deadlock: Aborted %d deadlocked transaction(s)" \ % aborted except db.DBError: pass class BTreeThreadedTransactions(ThreadedTransactionsBase): dbtype = db.DB_BTREE writers = 2 readers = 10 records = 1000 class HashThreadedTransactions(ThreadedTransactionsBase): dbtype = db.DB_HASH writers = 2 readers = 10 records = 1000 class BTreeThreadedNoWaitTransactions(ThreadedTransactionsBase): dbtype = db.DB_BTREE writers = 2 readers = 10 records = 1000 txnFlag = db.DB_TXN_NOWAIT class HashThreadedNoWaitTransactions(ThreadedTransactionsBase): dbtype = db.DB_HASH writers = 2 readers = 10 records = 1000 txnFlag = db.DB_TXN_NOWAIT #---------------------------------------------------------------------- def test_suite(): suite = unittest.TestSuite() if have_threads: 
suite.addTest(unittest.makeSuite(BTreeConcurrentDataStore)) suite.addTest(unittest.makeSuite(HashConcurrentDataStore)) suite.addTest(unittest.makeSuite(BTreeSimpleThreaded)) suite.addTest(unittest.makeSuite(HashSimpleThreaded)) suite.addTest(unittest.makeSuite(BTreeThreadedTransactions)) suite.addTest(unittest.makeSuite(HashThreadedTransactions)) suite.addTest(unittest.makeSuite(BTreeThreadedNoWaitTransactions)) suite.addTest(unittest.makeSuite(HashThreadedNoWaitTransactions)) else: print "Threads not available, skipping thread tests." return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib/bsddb/test/test_early_close.py0000644000000000000000000001767412363167637021327 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """TestCases for checking that it does not segfault when a DBEnv object is closed before its DB objects. """ import os, sys import unittest from test_all import db, test_support, verbose, get_new_environment_path, get_new_database_path # We're going to get warnings in this module about trying to close the db when # its env is already closed. Let's just ignore those. 
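# Background sketch (added for illustration; not part of the original
# suite): Berkeley DB handles form a parent/child hierarchy
# (DBEnv -> DB -> DBCursor), and these bindings close any live child
# handle when its parent is closed. The tests below therefore expect a
# clean db.DBError, never a segfault, from patterns such as:
#
#     env = db.DBEnv()
#     env.open(home, db.DB_CREATE | db.DB_INIT_MPOOL)  # 'home' is a stand-in
#     d = db.DB(env)
#     d.open("file", db.DB_BTREE, db.DB_CREATE)
#     env.close()    # implicitly closes the child DB handle as well
#     d.get("key")   # must raise db.DBError, not crash the interpreter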
try: import warnings except ImportError: pass else: warnings.filterwarnings('ignore', message='DB could not be closed in', category=RuntimeWarning) #---------------------------------------------------------------------- class DBEnvClosedEarlyCrash(unittest.TestCase): def setUp(self): self.homeDir = get_new_environment_path() self.filename = "test" def tearDown(self): test_support.rmtree(self.homeDir) def test01_close_dbenv_before_db(self): dbenv = db.DBEnv() dbenv.open(self.homeDir, db.DB_INIT_CDB| db.DB_CREATE |db.DB_THREAD|db.DB_INIT_MPOOL, 0666) d = db.DB(dbenv) d2 = db.DB(dbenv) d.open(self.filename, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0666) self.assertRaises(db.DBNoSuchFileError, d2.open, self.filename+"2", db.DB_BTREE, db.DB_THREAD, 0666) d.put("test","this is a test") self.assertEqual(d.get("test"), "this is a test", "put!=get") dbenv.close() # This "close" should close the child db handle also self.assertRaises(db.DBError, d.get, "test") def test02_close_dbenv_before_dbcursor(self): dbenv = db.DBEnv() dbenv.open(self.homeDir, db.DB_INIT_CDB| db.DB_CREATE |db.DB_THREAD|db.DB_INIT_MPOOL, 0666) d = db.DB(dbenv) d.open(self.filename, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0666) d.put("test","this is a test") d.put("test2","another test") d.put("test3","another one") self.assertEqual(d.get("test"), "this is a test", "put!=get") c=d.cursor() c.first() c.next() d.close() # This "close" should close the child db handle also # db.close should close the child cursor self.assertRaises(db.DBError,c.next) d = db.DB(dbenv) d.open(self.filename, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0666) c=d.cursor() c.first() c.next() dbenv.close() # The "close" should close the child db handle also, with cursors self.assertRaises(db.DBError, c.next) def test03_close_db_before_dbcursor_without_env(self): import os.path path=os.path.join(self.homeDir,self.filename) d = db.DB() d.open(path, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0666) d.put("test","this is a test") d.put("test2","another test") d.put("test3","another one") self.assertEqual(d.get("test"), "this is a test", "put!=get") c=d.cursor() c.first() c.next() d.close() # The "close" should close the child db handle also self.assertRaises(db.DBError, c.next) def test04_close_massive(self): dbenv = db.DBEnv() dbenv.open(self.homeDir, db.DB_INIT_CDB| db.DB_CREATE |db.DB_THREAD|db.DB_INIT_MPOOL, 0666) dbs=[db.DB(dbenv) for i in xrange(16)] cursors=[] for i in dbs : i.open(self.filename, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0666) dbs[10].put("test","this is a test") dbs[10].put("test2","another test") dbs[10].put("test3","another one") self.assertEqual(dbs[4].get("test"), "this is a test", "put!=get") for i in dbs : cursors.extend([i.cursor() for j in xrange(32)]) for i in dbs[::3] : i.close() for i in cursors[::3] : i.close() # Check for missing exception in DB! (after DB close) self.assertRaises(db.DBError, dbs[9].get, "test") # Check for missing exception in DBCursor! (after DB close) self.assertRaises(db.DBError, cursors[101].first) cursors[80].first() cursors[80].next() dbenv.close() # This "close" should close the child db handle also # Check for missing exception! 
(after DBEnv close) self.assertRaises(db.DBError, cursors[80].next) def test05_close_dbenv_delete_db_success(self): dbenv = db.DBEnv() dbenv.open(self.homeDir, db.DB_INIT_CDB| db.DB_CREATE |db.DB_THREAD|db.DB_INIT_MPOOL, 0666) d = db.DB(dbenv) d.open(self.filename, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0666) dbenv.close() # This "close" should close the child db handle also del d try: import gc except ImportError: gc = None if gc: # force d.__del__ [DB_dealloc] to be called gc.collect() def test06_close_txn_before_dup_cursor(self) : dbenv = db.DBEnv() dbenv.open(self.homeDir,db.DB_INIT_TXN | db.DB_INIT_MPOOL | db.DB_INIT_LOG | db.DB_CREATE) d = db.DB(dbenv) txn = dbenv.txn_begin() d.open(self.filename, dbtype = db.DB_HASH, flags = db.DB_CREATE, txn=txn) d.put("XXX", "yyy", txn=txn) txn.commit() txn = dbenv.txn_begin() c1 = d.cursor(txn) c2 = c1.dup() self.assertEqual(("XXX", "yyy"), c1.first()) # Not interested in warnings about implicit close. import warnings with warnings.catch_warnings() : warnings.filterwarnings("ignore") txn.commit() self.assertRaises(db.DBCursorClosedError, c2.first) def test07_close_db_before_sequence(self): import os.path path=os.path.join(self.homeDir,self.filename) d = db.DB() d.open(path, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0666) dbs=db.DBSequence(d) d.close() # This "close" should close the child DBSequence also dbs.close() # If not closed, core dump (in Berkeley DB 4.6.*) #---------------------------------------------------------------------- def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(DBEnvClosedEarlyCrash)) return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib/bsddb/test/test_cursor_pget_bug.py0000644000000000000000000000664712363167637022215 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
""" import unittest import os, glob from test_all import db, test_support, get_new_environment_path, \ get_new_database_path #---------------------------------------------------------------------- class pget_bugTestCase(unittest.TestCase): """Verify that cursor.pget works properly""" db_name = 'test-cursor_pget.db' def setUp(self): self.homeDir = get_new_environment_path() self.env = db.DBEnv() self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL) self.primary_db = db.DB(self.env) self.primary_db.open(self.db_name, 'primary', db.DB_BTREE, db.DB_CREATE) self.secondary_db = db.DB(self.env) self.secondary_db.set_flags(db.DB_DUP) self.secondary_db.open(self.db_name, 'secondary', db.DB_BTREE, db.DB_CREATE) self.primary_db.associate(self.secondary_db, lambda key, data: data) self.primary_db.put('salad', 'eggs') self.primary_db.put('spam', 'ham') self.primary_db.put('omelet', 'eggs') def tearDown(self): self.secondary_db.close() self.primary_db.close() self.env.close() del self.secondary_db del self.primary_db del self.env test_support.rmtree(self.homeDir) def test_pget(self): cursor = self.secondary_db.cursor() self.assertEqual(('eggs', 'salad', 'eggs'), cursor.pget(key='eggs', flags=db.DB_SET)) self.assertEqual(('eggs', 'omelet', 'eggs'), cursor.pget(db.DB_NEXT_DUP)) self.assertEqual(None, cursor.pget(db.DB_NEXT_DUP)) self.assertEqual(('ham', 'spam', 'ham'), cursor.pget('ham', 'spam', flags=db.DB_SET)) self.assertEqual(None, cursor.pget(db.DB_NEXT_DUP)) cursor.close() def test_suite(): return unittest.makeSuite(pget_bugTestCase) if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib/bsddb/test/test_sequence.py0000644000000000000000000001525112363167637020623 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
""" import unittest import os from test_all import db, test_support, get_new_environment_path, get_new_database_path class DBSequenceTest(unittest.TestCase): def setUp(self): self.int_32_max = 0x100000000 self.homeDir = get_new_environment_path() self.filename = "test" self.dbenv = db.DBEnv() self.dbenv.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL, 0666) self.d = db.DB(self.dbenv) self.d.open(self.filename, db.DB_BTREE, db.DB_CREATE, 0666) def tearDown(self): if hasattr(self, 'seq'): self.seq.close() del self.seq if hasattr(self, 'd'): self.d.close() del self.d if hasattr(self, 'dbenv'): self.dbenv.close() del self.dbenv test_support.rmtree(self.homeDir) def test_get(self): self.seq = db.DBSequence(self.d, flags=0) start_value = 10 * self.int_32_max self.assertEqual(0xA00000000, start_value) self.assertEqual(None, self.seq.initial_value(start_value)) self.assertEqual(None, self.seq.open(key='id', txn=None, flags=db.DB_CREATE)) self.assertEqual(start_value, self.seq.get(5)) self.assertEqual(start_value + 5, self.seq.get()) def test_remove(self): self.seq = db.DBSequence(self.d, flags=0) self.assertEqual(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) self.assertEqual(None, self.seq.remove(txn=None, flags=0)) del self.seq def test_get_key(self): self.seq = db.DBSequence(self.d, flags=0) key = 'foo' self.assertEqual(None, self.seq.open(key=key, txn=None, flags=db.DB_CREATE)) self.assertEqual(key, self.seq.get_key()) def test_get_dbp(self): self.seq = db.DBSequence(self.d, flags=0) self.assertEqual(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) self.assertEqual(self.d, self.seq.get_dbp()) def test_cachesize(self): self.seq = db.DBSequence(self.d, flags=0) cashe_size = 10 self.assertEqual(None, self.seq.set_cachesize(cashe_size)) self.assertEqual(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) self.assertEqual(cashe_size, self.seq.get_cachesize()) def test_flags(self): self.seq = db.DBSequence(self.d, flags=0) flag = db.DB_SEQ_WRAP; self.assertEqual(None, self.seq.set_flags(flag)) self.assertEqual(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) self.assertEqual(flag, self.seq.get_flags() & flag) def test_range(self): self.seq = db.DBSequence(self.d, flags=0) seq_range = (10 * self.int_32_max, 11 * self.int_32_max - 1) self.assertEqual(None, self.seq.set_range(seq_range)) self.seq.initial_value(seq_range[0]) self.assertEqual(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) self.assertEqual(seq_range, self.seq.get_range()) def test_stat(self): self.seq = db.DBSequence(self.d, flags=0) self.assertEqual(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) stat = self.seq.stat() for param in ('nowait', 'min', 'max', 'value', 'current', 'flags', 'cache_size', 'last_value', 'wait'): self.assertTrue(param in stat, "parameter %s isn't in stat info" % param) # This code checks a crash solved in Berkeley DB 4.7 def test_stat_crash(self) : d=db.DB() d.open(None,dbtype=db.DB_HASH,flags=db.DB_CREATE) # In RAM seq = db.DBSequence(d, flags=0) self.assertRaises(db.DBNotFoundError, seq.open, key='id', txn=None, flags=0) self.assertRaises(db.DBInvalidArgError, seq.stat) d.close() def test_64bits(self) : # We don't use both extremes because they are problematic value_plus=(1L<<63)-2 self.assertEqual(9223372036854775806L,value_plus) value_minus=(-1L<<63)+1 # Two complement self.assertEqual(-9223372036854775807L,value_minus) self.seq = db.DBSequence(self.d, flags=0) self.assertEqual(None, self.seq.initial_value(value_plus-1)) 
self.assertEqual(None, self.seq.open(key='id', txn=None, flags=db.DB_CREATE)) self.assertEqual(value_plus-1, self.seq.get(1)) self.assertEqual(value_plus, self.seq.get(1)) self.seq.remove(txn=None, flags=0) self.seq = db.DBSequence(self.d, flags=0) self.assertEqual(None, self.seq.initial_value(value_minus)) self.assertEqual(None, self.seq.open(key='id', txn=None, flags=db.DB_CREATE)) self.assertEqual(value_minus, self.seq.get(1)) self.assertEqual(value_minus+1, self.seq.get(1)) def test_multiple_close(self): self.seq = db.DBSequence(self.d) self.seq.close() # You can close a Sequence multiple times self.seq.close() self.seq.close() def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(DBSequenceTest)) return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib/bsddb/test/test_pickle.py0000644000000000000000000000701312363167637020257 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
""" import os import pickle import sys if sys.version_info[0] < 3 : try: import cPickle except ImportError: cPickle = None else : cPickle = None import unittest from test_all import db, test_support, get_new_environment_path, get_new_database_path #---------------------------------------------------------------------- class pickleTestCase(unittest.TestCase): """Verify that DBError can be pickled and unpickled""" db_name = 'test-dbobj.db' def setUp(self): self.homeDir = get_new_environment_path() def tearDown(self): if hasattr(self, 'db'): del self.db if hasattr(self, 'env'): del self.env test_support.rmtree(self.homeDir) def _base_test_pickle_DBError(self, pickle): self.env = db.DBEnv() self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL) self.db = db.DB(self.env) self.db.open(self.db_name, db.DB_HASH, db.DB_CREATE) self.db.put('spam', 'eggs') self.assertEqual(self.db['spam'], 'eggs') try: self.db.put('spam', 'ham', flags=db.DB_NOOVERWRITE) except db.DBError, egg: pickledEgg = pickle.dumps(egg) #print repr(pickledEgg) rottenEgg = pickle.loads(pickledEgg) if rottenEgg.args != egg.args or type(rottenEgg) != type(egg): raise Exception, (rottenEgg, '!=', egg) else: raise Exception, "where's my DBError exception?!?" self.db.close() self.env.close() def test01_pickle_DBError(self): self._base_test_pickle_DBError(pickle=pickle) if cPickle: def test02_cPickle_DBError(self): self._base_test_pickle_DBError(pickle=cPickle) #---------------------------------------------------------------------- def test_suite(): return unittest.makeSuite(pickleTestCase) if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib/bsddb/test/test_lock.py0000644000000000000000000001720612363167637017745 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """ TestCases for testing the locking sub-system. 
""" import time import unittest from test_all import db, test_support, verbose, have_threads, \ get_new_environment_path, get_new_database_path if have_threads : from threading import Thread import sys if sys.version_info[0] < 3 : from threading import currentThread else : from threading import current_thread as currentThread #---------------------------------------------------------------------- class LockingTestCase(unittest.TestCase): def setUp(self): self.homeDir = get_new_environment_path() self.env = db.DBEnv() self.env.open(self.homeDir, db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK | db.DB_CREATE) def tearDown(self): self.env.close() test_support.rmtree(self.homeDir) def test01_simple(self): if verbose: print '\n', '-=' * 30 print "Running %s.test01_simple..." % self.__class__.__name__ anID = self.env.lock_id() if verbose: print "locker ID: %s" % anID lock = self.env.lock_get(anID, "some locked thing", db.DB_LOCK_WRITE) if verbose: print "Aquired lock: %s" % lock self.env.lock_put(lock) if verbose: print "Released lock: %s" % lock self.env.lock_id_free(anID) def test02_threaded(self): if verbose: print '\n', '-=' * 30 print "Running %s.test02_threaded..." % self.__class__.__name__ threads = [] threads.append(Thread(target = self.theThread, args=(db.DB_LOCK_WRITE,))) threads.append(Thread(target = self.theThread, args=(db.DB_LOCK_READ,))) threads.append(Thread(target = self.theThread, args=(db.DB_LOCK_READ,))) threads.append(Thread(target = self.theThread, args=(db.DB_LOCK_WRITE,))) threads.append(Thread(target = self.theThread, args=(db.DB_LOCK_READ,))) threads.append(Thread(target = self.theThread, args=(db.DB_LOCK_READ,))) threads.append(Thread(target = self.theThread, args=(db.DB_LOCK_WRITE,))) threads.append(Thread(target = self.theThread, args=(db.DB_LOCK_WRITE,))) threads.append(Thread(target = self.theThread, args=(db.DB_LOCK_WRITE,))) for t in threads: import sys if sys.version_info[0] < 3 : t.setDaemon(True) else : t.daemon = True t.start() for t in threads: t.join() def test03_lock_timeout(self): self.env.set_timeout(0, db.DB_SET_LOCK_TIMEOUT) self.assertEqual(self.env.get_timeout(db.DB_SET_LOCK_TIMEOUT), 0) self.env.set_timeout(0, db.DB_SET_TXN_TIMEOUT) self.assertEqual(self.env.get_timeout(db.DB_SET_TXN_TIMEOUT), 0) self.env.set_timeout(123456, db.DB_SET_LOCK_TIMEOUT) self.assertEqual(self.env.get_timeout(db.DB_SET_LOCK_TIMEOUT), 123456) self.env.set_timeout(7890123, db.DB_SET_TXN_TIMEOUT) self.assertEqual(self.env.get_timeout(db.DB_SET_TXN_TIMEOUT), 7890123) def test04_lock_timeout2(self): self.env.set_timeout(0, db.DB_SET_LOCK_TIMEOUT) self.env.set_timeout(0, db.DB_SET_TXN_TIMEOUT) self.env.set_timeout(123456, db.DB_SET_LOCK_TIMEOUT) self.env.set_timeout(7890123, db.DB_SET_TXN_TIMEOUT) def deadlock_detection() : while not deadlock_detection.end : deadlock_detection.count = \ self.env.lock_detect(db.DB_LOCK_EXPIRE) if deadlock_detection.count : while not deadlock_detection.end : pass break time.sleep(0.01) deadlock_detection.end=False deadlock_detection.count=0 t=Thread(target=deadlock_detection) import sys if sys.version_info[0] < 3 : t.setDaemon(True) else : t.daemon = True t.start() self.env.set_timeout(100000, db.DB_SET_LOCK_TIMEOUT) anID = self.env.lock_id() anID2 = self.env.lock_id() self.assertNotEqual(anID, anID2) lock = self.env.lock_get(anID, "shared lock", db.DB_LOCK_WRITE) start_time=time.time() self.assertRaises(db.DBLockNotGrantedError, self.env.lock_get,anID2, "shared lock", db.DB_LOCK_READ) end_time=time.time() deadlock_detection.end=True # Floating 
point rounding self.assertTrue((end_time-start_time) >= 0.0999) self.env.lock_put(lock) t.join() self.env.lock_id_free(anID) self.env.lock_id_free(anID2) self.assertTrue(deadlock_detection.count>0) def theThread(self, lockType): import sys if sys.version_info[0] < 3 : name = currentThread().getName() else : name = currentThread().name if lockType == db.DB_LOCK_WRITE: lt = "write" else: lt = "read" anID = self.env.lock_id() if verbose: print "%s: locker ID: %s" % (name, anID) for i in xrange(1000) : lock = self.env.lock_get(anID, "some locked thing", lockType) if verbose: print "%s: Aquired %s lock: %s" % (name, lt, lock) self.env.lock_put(lock) if verbose: print "%s: Released %s lock: %s" % (name, lt, lock) self.env.lock_id_free(anID) #---------------------------------------------------------------------- def test_suite(): suite = unittest.TestSuite() if have_threads: suite.addTest(unittest.makeSuite(LockingTestCase)) else: suite.addTest(unittest.makeSuite(LockingTestCase, 'test01')) return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib/bsddb/test/test_dbshelve.py0000644000000000000000000003142412363167637020607 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """ TestCases for checking dbShelve objects. """ import os, string, sys import random import unittest from test_all import db, dbshelve, test_support, verbose, \ get_new_environment_path, get_new_database_path #---------------------------------------------------------------------- # We want the objects to be comparable so we can test dbshelve.values # later on. 
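# Added illustration (hypothetical path, not part of the original tests):
# dbshelve behaves like the stdlib 'shelve' module, pickling arbitrary
# Python values into a Berkeley DB file, which is why instances of the
# DataClass defined below can be stored directly:
#
#     d = dbshelve.open('/tmp/example.db')
#     d['key'] = DataClass()     # pickled transparently on store
#     obj = d['key']             # unpickled again on access
#     d.close()
#
# Sorting and comparing d.values() then requires value objects with usable
# comparison hooks, hence the __cmp__ (Python 2) and __repr__ (used for the
# Python 3 comparison) methods below.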
class DataClass: def __init__(self): self.value = random.random() def __repr__(self) : # For Python 3.0 comparison return "DataClass %f" %self.value def __cmp__(self, other): # For Python 2.x comparison return cmp(self.value, other) class DBShelveTestCase(unittest.TestCase): if sys.version_info < (2, 7) : def assertIn(self, a, b, msg=None) : return self.assertTrue(a in b, msg=msg) def setUp(self): if sys.version_info[0] >= 3 : from test_all import do_proxy_db_py3k self._flag_proxy_db_py3k = do_proxy_db_py3k(False) self.filename = get_new_database_path() self.do_open() def tearDown(self): if sys.version_info[0] >= 3 : from test_all import do_proxy_db_py3k do_proxy_db_py3k(self._flag_proxy_db_py3k) self.do_close() test_support.unlink(self.filename) def mk(self, key): """Turn key into an appropriate key type for this db""" # override in child class for RECNO if sys.version_info[0] < 3 : return key else : return bytes(key, "iso8859-1") # 8 bits def populateDB(self, d): for x in string.letters: d[self.mk('S' + x)] = 10 * x # add a string d[self.mk('I' + x)] = ord(x) # add an integer d[self.mk('L' + x)] = [x] * 10 # add a list inst = DataClass() # add an instance inst.S = 10 * x inst.I = ord(x) inst.L = [x] * 10 d[self.mk('O' + x)] = inst # overridable in derived classes to affect how the shelf is created/opened def do_open(self): self.d = dbshelve.open(self.filename) # and closed... def do_close(self): self.d.close() def test01_basics(self): if verbose: print '\n', '-=' * 30 print "Running %s.test01_basics..." % self.__class__.__name__ self.populateDB(self.d) self.d.sync() self.do_close() self.do_open() d = self.d l = len(d) k = d.keys() s = d.stat() f = d.fd() if verbose: print "length:", l print "keys:", k print "stats:", s self.assertEqual(0, d.has_key(self.mk('bad key'))) self.assertEqual(1, d.has_key(self.mk('IA'))) self.assertEqual(1, d.has_key(self.mk('OA'))) d.delete(self.mk('IA')) del d[self.mk('OA')] self.assertEqual(0, d.has_key(self.mk('IA'))) self.assertEqual(0, d.has_key(self.mk('OA'))) self.assertEqual(len(d), l-2) values = [] for key in d.keys(): value = d[key] values.append(value) if verbose: print "%s: %s" % (key, value) self.checkrec(key, value) dbvalues = d.values() self.assertEqual(len(dbvalues), len(d.keys())) # XXX: Convert all to strings. Please, improve values.sort(key=lambda x : str(x)) dbvalues.sort(key=lambda x : str(x)) self.assertEqual(repr(values), repr(dbvalues)) items = d.items() self.assertEqual(len(items), len(values)) for key, value in items: self.checkrec(key, value) self.assertEqual(d.get(self.mk('bad key')), None) self.assertEqual(d.get(self.mk('bad key'), None), None) self.assertEqual(d.get(self.mk('bad key'), 'a string'), 'a string') self.assertEqual(d.get(self.mk('bad key'), [1, 2, 3]), [1, 2, 3]) d.set_get_returns_none(0) self.assertRaises(db.DBNotFoundError, d.get, self.mk('bad key')) d.set_get_returns_none(1) d.put(self.mk('new key'), 'new data') self.assertEqual(d.get(self.mk('new key')), 'new data') self.assertEqual(d[self.mk('new key')], 'new data') def test02_cursors(self): if verbose: print '\n', '-=' * 30 print "Running %s.test02_cursors..." 
% self.__class__.__name__ self.populateDB(self.d) d = self.d count = 0 c = d.cursor() rec = c.first() while rec is not None: count = count + 1 if verbose: print rec key, value = rec self.checkrec(key, value) # Hack to avoid conversion by 2to3 tool rec = getattr(c, "next")() del c self.assertEqual(count, len(d)) count = 0 c = d.cursor() rec = c.last() while rec is not None: count = count + 1 if verbose: print rec key, value = rec self.checkrec(key, value) rec = c.prev() self.assertEqual(count, len(d)) c.set(self.mk('SS')) key, value = c.current() self.checkrec(key, value) del c def test03_append(self): # NOTE: this is overridden in RECNO subclass, don't change its name. if verbose: print '\n', '-=' * 30 print "Running %s.test03_append..." % self.__class__.__name__ self.assertRaises(dbshelve.DBShelveError, self.d.append, 'unit test was here') def test04_iterable(self) : self.populateDB(self.d) d = self.d keys = d.keys() keyset = set(keys) self.assertEqual(len(keyset), len(keys)) for key in d : self.assertIn(key, keyset) keyset.remove(key) self.assertEqual(len(keyset), 0) def checkrec(self, key, value): # override this in a subclass if the key type is different if sys.version_info[0] >= 3 : if isinstance(key, bytes) : key = key.decode("iso8859-1") # 8 bits x = key[1] if key[0] == 'S': self.assertEqual(type(value), str) self.assertEqual(value, 10 * x) elif key[0] == 'I': self.assertEqual(type(value), int) self.assertEqual(value, ord(x)) elif key[0] == 'L': self.assertEqual(type(value), list) self.assertEqual(value, [x] * 10) elif key[0] == 'O': if sys.version_info[0] < 3 : from types import InstanceType self.assertEqual(type(value), InstanceType) else : self.assertEqual(type(value), DataClass) self.assertEqual(value.S, 10 * x) self.assertEqual(value.I, ord(x)) self.assertEqual(value.L, [x] * 10) else: self.assertTrue(0, 'Unknown key type, fix the test') #---------------------------------------------------------------------- class BasicShelveTestCase(DBShelveTestCase): def do_open(self): self.d = dbshelve.DBShelf() self.d.open(self.filename, self.dbtype, self.dbflags) def do_close(self): self.d.close() class BTreeShelveTestCase(BasicShelveTestCase): dbtype = db.DB_BTREE dbflags = db.DB_CREATE class HashShelveTestCase(BasicShelveTestCase): dbtype = db.DB_HASH dbflags = db.DB_CREATE class ThreadBTreeShelveTestCase(BasicShelveTestCase): dbtype = db.DB_BTREE dbflags = db.DB_CREATE | db.DB_THREAD class ThreadHashShelveTestCase(BasicShelveTestCase): dbtype = db.DB_HASH dbflags = db.DB_CREATE | db.DB_THREAD #---------------------------------------------------------------------- class BasicEnvShelveTestCase(DBShelveTestCase): def do_open(self): self.env = db.DBEnv() self.env.open(self.homeDir, self.envflags | db.DB_INIT_MPOOL | db.DB_CREATE) self.filename = os.path.split(self.filename)[1] self.d = dbshelve.DBShelf(self.env) self.d.open(self.filename, self.dbtype, self.dbflags) def do_close(self): self.d.close() self.env.close() def setUp(self) : self.homeDir = get_new_environment_path() DBShelveTestCase.setUp(self) def tearDown(self): if sys.version_info[0] >= 3 : from test_all import do_proxy_db_py3k do_proxy_db_py3k(self._flag_proxy_db_py3k) self.do_close() test_support.rmtree(self.homeDir) class EnvBTreeShelveTestCase(BasicEnvShelveTestCase): envflags = 0 dbtype = db.DB_BTREE dbflags = db.DB_CREATE class EnvHashShelveTestCase(BasicEnvShelveTestCase): envflags = 0 dbtype = db.DB_HASH dbflags = db.DB_CREATE class EnvThreadBTreeShelveTestCase(BasicEnvShelveTestCase): envflags = db.DB_THREAD dbtype = 
db.DB_BTREE dbflags = db.DB_CREATE | db.DB_THREAD class EnvThreadHashShelveTestCase(BasicEnvShelveTestCase): envflags = db.DB_THREAD dbtype = db.DB_HASH dbflags = db.DB_CREATE | db.DB_THREAD #---------------------------------------------------------------------- # test cases for a DBShelf in a RECNO DB. class RecNoShelveTestCase(BasicShelveTestCase): dbtype = db.DB_RECNO dbflags = db.DB_CREATE def setUp(self): BasicShelveTestCase.setUp(self) # pool to assign integer key values out of self.key_pool = list(range(1, 5000)) self.key_map = {} # map string keys to the number we gave them self.intkey_map = {} # reverse map of above def mk(self, key): if key not in self.key_map: self.key_map[key] = self.key_pool.pop(0) self.intkey_map[self.key_map[key]] = key return self.key_map[key] def checkrec(self, intkey, value): key = self.intkey_map[intkey] BasicShelveTestCase.checkrec(self, key, value) def test03_append(self): if verbose: print '\n', '-=' * 30 print "Running %s.test03_append..." % self.__class__.__name__ self.d[1] = 'spam' self.d[5] = 'eggs' self.assertEqual(6, self.d.append('spam')) self.assertEqual(7, self.d.append('baked beans')) self.assertEqual('spam', self.d.get(6)) self.assertEqual('spam', self.d.get(1)) self.assertEqual('baked beans', self.d.get(7)) self.assertEqual('eggs', self.d.get(5)) #---------------------------------------------------------------------- def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(DBShelveTestCase)) suite.addTest(unittest.makeSuite(BTreeShelveTestCase)) suite.addTest(unittest.makeSuite(HashShelveTestCase)) suite.addTest(unittest.makeSuite(ThreadBTreeShelveTestCase)) suite.addTest(unittest.makeSuite(ThreadHashShelveTestCase)) suite.addTest(unittest.makeSuite(EnvBTreeShelveTestCase)) suite.addTest(unittest.makeSuite(EnvHashShelveTestCase)) suite.addTest(unittest.makeSuite(EnvThreadBTreeShelveTestCase)) suite.addTest(unittest.makeSuite(EnvThreadHashShelveTestCase)) suite.addTest(unittest.makeSuite(RecNoShelveTestCase)) return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib/bsddb/test/test_db.py0000644000000000000000000001623212363167637017400 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ import unittest import os, glob from test_all import db, test_support, get_new_environment_path, \ get_new_database_path #---------------------------------------------------------------------- class DB(unittest.TestCase): def setUp(self): self.path = get_new_database_path() self.db = db.DB() def tearDown(self): self.db.close() del self.db test_support.unlink(self.path) class DB_general(DB) : def test_get_open_flags(self) : self.db.open(self.path, dbtype=db.DB_HASH, flags = db.DB_CREATE) self.assertEqual(db.DB_CREATE, self.db.get_open_flags()) def test_get_open_flags2(self) : self.db.open(self.path, dbtype=db.DB_HASH, flags = db.DB_CREATE | db.DB_THREAD) self.assertEqual(db.DB_CREATE | db.DB_THREAD, self.db.get_open_flags()) def test_get_dbname_filename(self) : self.db.open(self.path, dbtype=db.DB_HASH, flags = db.DB_CREATE) self.assertEqual((self.path, None), self.db.get_dbname()) def test_get_dbname_filename_database(self) : name = "jcea-random-name" self.db.open(self.path, dbname=name, dbtype=db.DB_HASH, flags = db.DB_CREATE) self.assertEqual((self.path, name), self.db.get_dbname()) def test_bt_minkey(self) : for i in [17, 108, 1030] : self.db.set_bt_minkey(i) self.assertEqual(i, self.db.get_bt_minkey()) def test_lorder(self) : self.db.set_lorder(1234) self.assertEqual(1234, self.db.get_lorder()) self.db.set_lorder(4321) self.assertEqual(4321, self.db.get_lorder()) self.assertRaises(db.DBInvalidArgError, self.db.set_lorder, 9182) def test_priority(self) : flags = [db.DB_PRIORITY_VERY_LOW, db.DB_PRIORITY_LOW, db.DB_PRIORITY_DEFAULT, db.DB_PRIORITY_HIGH, db.DB_PRIORITY_VERY_HIGH] for flag in flags : self.db.set_priority(flag) self.assertEqual(flag, self.db.get_priority()) def test_get_transactional(self) : self.assertFalse(self.db.get_transactional()) self.db.open(self.path, dbtype=db.DB_HASH, flags = db.DB_CREATE) self.assertFalse(self.db.get_transactional()) class DB_hash(DB) : def test_h_ffactor(self) : for ffactor in [4, 16, 256] : self.db.set_h_ffactor(ffactor) self.assertEqual(ffactor, self.db.get_h_ffactor()) def test_h_nelem(self) : for nelem in [1, 2, 4] : nelem = nelem*1024*1024 # Millions self.db.set_h_nelem(nelem) self.assertEqual(nelem, self.db.get_h_nelem()) def test_pagesize(self) : for i in xrange(9, 17) : # From 512 to 65536 i = 1<`__ -- `Documentation `__ -- `Mailing List `__ -- `Donation `__ Platform: UNKNOWN Classifier: License :: OSI Approved :: BSD License Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: Intended Audience :: Information Technology Classifier: Natural Language :: English Classifier: Natural Language :: Spanish Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Topic :: Database Classifier: Topic :: Software Development Classifier: Topic :: System :: Clustering Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming 
Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.2 Classifier: Programming Language :: Python :: 3.3 Classifier: Programming Language :: Python :: 3.4 bsddb3-6.1.0/bsddb3.egg-info/dependency_links.txt0000644000000000000000000000000112363235111021513 0ustar rootroot00000000000000 bsddb3-6.1.0/bsddb3.egg-info/SOURCES.txt0000644000000000000000000000653312363235112017341 0ustar rootroot00000000000000ChangeLog LICENSE.txt MANIFEST.in README.txt TODO.txt licenses.txt make3.py setup.cfg setup.py setup2.py setup3.py test-full_prerelease.py test.py test2.py test3.py Lib/bsddb/__init__.py Lib/bsddb/db.py Lib/bsddb/dbobj.py Lib/bsddb/dbrecio.py Lib/bsddb/dbshelve.py Lib/bsddb/dbtables.py Lib/bsddb/dbutils.py Lib/bsddb/test/__init__.py Lib/bsddb/test/test_all.py Lib/bsddb/test/test_associate.py Lib/bsddb/test/test_basics.py Lib/bsddb/test/test_compare.py Lib/bsddb/test/test_compat.py Lib/bsddb/test/test_cursor_pget_bug.py Lib/bsddb/test/test_db.py Lib/bsddb/test/test_dbenv.py Lib/bsddb/test/test_dbobj.py Lib/bsddb/test/test_dbshelve.py Lib/bsddb/test/test_dbtables.py Lib/bsddb/test/test_distributed_transactions.py Lib/bsddb/test/test_early_close.py Lib/bsddb/test/test_fileid.py Lib/bsddb/test/test_get_none.py Lib/bsddb/test/test_join.py Lib/bsddb/test/test_lock.py Lib/bsddb/test/test_misc.py Lib/bsddb/test/test_pickle.py Lib/bsddb/test/test_queue.py Lib/bsddb/test/test_recno.py Lib/bsddb/test/test_replication.py Lib/bsddb/test/test_sequence.py Lib/bsddb/test/test_thread.py Lib3/bsddb/__init__.py Lib3/bsddb/db.py Lib3/bsddb/dbobj.py Lib3/bsddb/dbrecio.py Lib3/bsddb/dbshelve.py Lib3/bsddb/dbtables.py Lib3/bsddb/dbutils.py Lib3/bsddb/test/__init__.py Lib3/bsddb/test/test_all.py Lib3/bsddb/test/test_associate.py Lib3/bsddb/test/test_basics.py Lib3/bsddb/test/test_compare.py Lib3/bsddb/test/test_compat.py Lib3/bsddb/test/test_cursor_pget_bug.py Lib3/bsddb/test/test_db.py Lib3/bsddb/test/test_dbenv.py Lib3/bsddb/test/test_dbobj.py Lib3/bsddb/test/test_dbshelve.py Lib3/bsddb/test/test_dbtables.py Lib3/bsddb/test/test_distributed_transactions.py Lib3/bsddb/test/test_early_close.py Lib3/bsddb/test/test_fileid.py Lib3/bsddb/test/test_get_none.py Lib3/bsddb/test/test_join.py Lib3/bsddb/test/test_lock.py Lib3/bsddb/test/test_misc.py Lib3/bsddb/test/test_pickle.py Lib3/bsddb/test/test_queue.py Lib3/bsddb/test/test_recno.py Lib3/bsddb/test/test_replication.py Lib3/bsddb/test/test_sequence.py Lib3/bsddb/test/test_thread.py Modules/_bsddb.c Modules/bsddb.h bsddb3.egg-info/PKG-INFO bsddb3.egg-info/SOURCES.txt bsddb3.egg-info/dependency_links.txt bsddb3.egg-info/top_level.txt docs/README.txt docs/bitcoin.png docs/contents.rst docs/db.rst docs/dbcursor.rst docs/dbenv.rst docs/dblock.rst docs/dblogcursor.rst docs/dbsequence.rst docs/dbsite.rst docs/dbtxn.rst docs/donate.rst docs/history.rst docs/introduction.rst docs/license.rst docs/html/contents.html docs/html/db.html docs/html/dbcursor.html docs/html/dbenv.html docs/html/dblock.html docs/html/dblogcursor.html docs/html/dbsequence.html docs/html/dbsite.html docs/html/dbtxn.html docs/html/donate.html docs/html/genindex.html docs/html/history.html docs/html/introduction.html docs/html/search.html docs/html/searchindex.js docs/html/images/bitcoin.png docs/html/static/ajax-loader.gif docs/html/static/basic.css docs/html/static/comment-bright.png docs/html/static/comment-close.png docs/html/static/comment.png docs/html/static/default.css docs/html/static/doctools.js docs/html/static/down-pressed.png docs/html/static/down.png docs/html/static/file.png 
docs/html/static/jquery.js docs/html/static/minus.png docs/html/static/plus.png docs/html/static/pygments.css docs/html/static/searchtools.js docs/html/static/sidebar.js docs/html/static/underscore.js docs/html/static/up-pressed.png docs/html/static/up.png docs/html/static/websupport.jsbsddb3-6.1.0/make3.py0000644000000000000000000000633312363167637014202 0ustar  rootroot00000000000000#!/usr/bin/env python """ Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ import sys, os refactor_path="/usr/local/lib/python3.4/" def copy2to3(path_from, path_to) : files_to_convert = {} if os.path.isdir(path_from) : if path_from.endswith(".hg") : return {} try : os.mkdir(path_to) except : pass for i in os.listdir(path_from) : files_to_convert.update(copy2to3(path_from+"/"+i,path_to+"/"+i)) return files_to_convert cwd = os.getcwd() if (not path_from.endswith(".py")) or (os.path.exists(path_to) and \ (os.stat(path_from).st_mtime < os.stat(path_to).st_mtime)) : return {} if path_from[0] != "/" : path_from = cwd+"/"+path_from if path_to[0] != "/" : path_to = cwd+"/"+path_to files_to_convert[path_from] = path_to try : open(path_to, "w").write(open(path_from, "r").read()) except : os.remove(path_to) raise return files_to_convert def make2to3(path_from, path_to) : files_to_convert = copy2to3(path_from, path_to) retcode = 0 for path_from, path_to in files_to_convert.iteritems() : print "*** Converting", path_to try : import subprocess process = subprocess.Popen(["2to3", "-w", path_to], cwd=refactor_path) retcode = process.wait() except : os.remove(path_to) raise try : os.remove(path_to+".bak") except : pass if retcode : os.remove(path_to) print "ERROR!" return bool(retcode) return bool(retcode) print "Using '%s' for 2to3 conversion tool" %refactor_path make2to3("setup2.py", "setup3.py") make2to3("test2.py", "test3.py") make2to3("Lib", "Lib3") bsddb3-6.1.0/licenses.txt0000644000000000000000000000326112247676155015174 0ustar  rootroot00000000000000The history of this project is long and complicated, and so are its copyrights and licenses. I (Jesus Cea) write this document in order to clarify the situation.
This code was integrated once in Python 2.x, so I assume somebody did
the paperwork related to the contributor agreement at that time. This
code is derived from that version, so that could serve as a license
checkpoint/blank sheet. I must verify that.

New files will be covered by the 3-clause BSD license.

License of Python files:

 * Except for the following files, the license is the 3-clause BSD
   license. Jesus Cea Avion.
 * test2.py, test3.py: Zope Public License, version 2.0. Zope
   Corporation and contributors.
 * Lib*/bsddb/dbrecio.py: Itamar Shtull-Trauring.
 * Lib*/bsddb/__init__.py: 3-clause BSD license. Digital Creations and
   Andrew Kuchling.
 * Lib*/bsddb/db.py: 3-clause BSD license. Digital Creations and
   Andrew Kuchling.
 * Lib*/bsddb/dbobj.py, dbutils.py: Free software. Autonomous Zone
   Industries, Gregory P. Smith.
 * Lib*/bsddb/dbtables.py: Free software. Autonomous Zone Industries,
   Gregory P. Smith.
 * Lib*/bsddb/test/test_dbtables.py: Free software. Autonomous Zone
   Industries, Gregory P. Smith.
 * Lib*/bsddb/dbshelve.py: Free software. Total Control Software.
 * Lib*/bsddb/test_support.py: PYTHON. (This code will be dropped when
   the minimum supported version is Python 2.6.)

License of C files:

 * Modules/bsddb.h, _bsddb.c: 3-clause BSD license. Digital Creations,
   Andrew Kuchling, Robin Dunn, Gregory P. Smith, Duncan Grisby,
   Jesus Cea Avion.

NOTE: If this file is changed, "docs/license.rst" must be updated too.
bsddb3-6.1.0/Lib3/0000755000000000000000000000000012363235112013404 5ustar rootroot00000000000000bsddb3-6.1.0/Lib3/bsddb/0000755000000000000000000000000012363235112014462 5ustar rootroot00000000000000bsddb3-6.1.0/Lib3/bsddb/dbutils.py0000644000000000000000000000553012363206616016514 0ustar rootroot00000000000000#------------------------------------------------------------------------
#
# Copyright (C) 2000 Autonomous Zone Industries
#
# License:  This is free software. You may use this software for any
#           purpose including modification/redistribution, so long as
#           this header remains intact and that you do not claim any
#           rights of ownership or authorship of this software. This
#           software has been tested, but no warranty is expressed or
#           implied.
#
# Author: Gregory P. Smith
#
# Note: I don't know how useful this is in reality since when a
# DBLockDeadlockError happens the current transaction is supposed to be
# aborted. If it is not, the deadlock will still be there when the
# operation is attempted again...
# --Robin
#
#------------------------------------------------------------------------

#
# import the time.sleep function in a namespace safe way to allow
# "from bsddb.dbutils import *"
#
from time import sleep as _sleep

# The 2to3-converted tree always uses the package-relative import.
from . import db

# always sleep at least N seconds between retries
_deadlock_MinSleepTime = 1.0/128
# never sleep more than N seconds between retries
_deadlock_MaxSleepTime = 3.14159

# Assign a file object to this for a "sleeping" message to be written to it
# each retry
_deadlock_VerboseFile = None


def DeadlockWrap(function, *_args, **_kwargs):
    """DeadlockWrap(function, *_args, **_kwargs) - automatically retries
    function in case of a database deadlock.

    This function is intended to wrap database calls so that they
    perform retries, with exponentially backing-off sleeps in between,
    whenever a DBLockDeadlockError exception is raised.

    A 'max_retries' parameter may optionally be passed to prevent it
    from retrying forever (in which case the exception will be
    reraised).

        d = DB(...)
        d.open(...)
DeadlockWrap(d.put, "foo", data="bar") # set key "foo" to "bar" """ sleeptime = _deadlock_MinSleepTime max_retries = _kwargs.get('max_retries', -1) if 'max_retries' in _kwargs: del _kwargs['max_retries'] while True: try: return function(*_args, **_kwargs) except db.DBLockDeadlockError: if _deadlock_VerboseFile: _deadlock_VerboseFile.write( 'dbutils.DeadlockWrap: sleeping %1.3f\n' % sleeptime) _sleep(sleeptime) # exponential backoff in the sleep time sleeptime *= 2 if sleeptime > _deadlock_MaxSleepTime: sleeptime = _deadlock_MaxSleepTime max_retries -= 1 if max_retries == -1: raise #------------------------------------------------------------------------ bsddb3-6.1.0/Lib3/bsddb/dbtables.py0000644000000000000000000007300012363206600016614 0ustar rootroot00000000000000#----------------------------------------------------------------------- # # Copyright (C) 2000, 2001 by Autonomous Zone Industries # Copyright (C) 2002 Gregory P. Smith # # License: This is free software. You may use this software for any # purpose including modification/redistribution, so long as # this header remains intact and that you do not claim any # rights of ownership or authorship of this software. This # software has been tested, but no warranty is expressed or # implied. # # -- Gregory P. Smith # This provides a simple database table interface built on top of # the Python Berkeley DB 3 interface. # import re import sys import copy import random import struct if sys.version_info[0] >= 3 : import pickle else : import warnings with warnings.catch_warnings() : warnings.filterwarnings("ignore", category=DeprecationWarning) import pickle as pickle from bsddb3 import db class TableDBError(Exception): pass class TableAlreadyExists(TableDBError): pass class Cond: """This condition matches everything""" def __call__(self, s): return 1 class ExactCond(Cond): """Acts as an exact match condition function""" def __init__(self, strtomatch): self.strtomatch = strtomatch def __call__(self, s): return s == self.strtomatch class PrefixCond(Cond): """Acts as a condition function for matching a string prefix""" def __init__(self, prefix): self.prefix = prefix def __call__(self, s): return s[:len(self.prefix)] == self.prefix class PostfixCond(Cond): """Acts as a condition function for matching a string postfix""" def __init__(self, postfix): self.postfix = postfix def __call__(self, s): return s[-len(self.postfix):] == self.postfix class LikeCond(Cond): """ Acts as a function that will match using an SQL 'LIKE' style string. Case insensitive and % signs are wild cards. This isn't perfect but it should work for the simple common cases. """ def __init__(self, likestr, re_flags=re.IGNORECASE): # escape python re characters chars_to_escape = '.*+()[]?' for char in chars_to_escape : likestr = likestr.replace(char, '\\'+char) # convert %s to wildcards self.likestr = likestr.replace('%', '.*') self.re = re.compile('^'+self.likestr+'$', re_flags) def __call__(self, s): return self.re.match(s) # # keys used to store database metadata # _table_names_key = '__TABLE_NAMES__' # list of the tables in this db _columns = '._COLUMNS__' # table_name+this key contains a list of columns def _columns_key(table): return table + _columns # # these keys are found within table sub databases # _data = '._DATA_.' # this+column+this+rowid key contains table data _rowid = '._ROWID_.' # this+rowid+this key contains a unique entry for each # row in the table. 
(no data is stored) _rowid_str_len = 8 # length in bytes of the unique rowid strings def _data_key(table, col, rowid): return table + _data + col + _data + rowid def _search_col_data_key(table, col): return table + _data + col + _data def _search_all_data_key(table): return table + _data def _rowid_key(table, rowid): return table + _rowid + rowid + _rowid def _search_rowid_key(table): return table + _rowid def contains_metastrings(s) : """Verify that the given string does not contain any metadata strings that might interfere with dbtables database operation. """ if (s.find(_table_names_key) >= 0 or s.find(_columns) >= 0 or s.find(_data) >= 0 or s.find(_rowid) >= 0): # Then return 1 else: return 0 class bsdTableDB : def __init__(self, filename, dbhome, create=0, truncate=0, mode=0o600, recover=0, dbflags=0): """bsdTableDB(filename, dbhome, create=0, truncate=0, mode=0600) Open database name in the dbhome Berkeley DB directory. Use keyword arguments when calling this constructor. """ self.db = None myflags = db.DB_THREAD if create: myflags |= db.DB_CREATE flagsforenv = (db.DB_INIT_MPOOL | db.DB_INIT_LOCK | db.DB_INIT_LOG | db.DB_INIT_TXN | dbflags) # DB_AUTO_COMMIT isn't a valid flag for env.open() try: dbflags |= db.DB_AUTO_COMMIT except AttributeError: pass if recover: flagsforenv = flagsforenv | db.DB_RECOVER self.env = db.DBEnv() # enable auto deadlock avoidance self.env.set_lk_detect(db.DB_LOCK_DEFAULT) self.env.open(dbhome, myflags | flagsforenv) if truncate: myflags |= db.DB_TRUNCATE self.db = db.DB(self.env) # this code relies on DBCursor.set* methods to raise exceptions # rather than returning None self.db.set_get_returns_none(1) # allow duplicate entries [warning: be careful w/ metadata] self.db.set_flags(db.DB_DUP) self.db.open(filename, db.DB_BTREE, dbflags | myflags, mode) self.dbfilename = filename if sys.version_info[0] >= 3 : class cursor_py3k(object) : def __init__(self, dbcursor) : self._dbcursor = dbcursor def close(self) : return self._dbcursor.close() def set_range(self, search) : v = self._dbcursor.set_range(bytes(search, "iso8859-1")) if v is not None : v = (v[0].decode("iso8859-1"), v[1].decode("iso8859-1")) return v def __next__(self) : v = getattr(self._dbcursor, "next")() if v is not None : v = (v[0].decode("iso8859-1"), v[1].decode("iso8859-1")) return v class db_py3k(object) : def __init__(self, db) : self._db = db def cursor(self, txn=None) : return cursor_py3k(self._db.cursor(txn=txn)) def has_key(self, key, txn=None) : return getattr(self._db,"has_key")(bytes(key, "iso8859-1"), txn=txn) def put(self, key, value, flags=0, txn=None) : key = bytes(key, "iso8859-1") if value is not None : value = bytes(value, "iso8859-1") return self._db.put(key, value, flags=flags, txn=txn) def put_bytes(self, key, value, txn=None) : key = bytes(key, "iso8859-1") return self._db.put(key, value, txn=txn) def get(self, key, txn=None, flags=0) : key = bytes(key, "iso8859-1") v = self._db.get(key, txn=txn, flags=flags) if v is not None : v = v.decode("iso8859-1") return v def get_bytes(self, key, txn=None, flags=0) : key = bytes(key, "iso8859-1") return self._db.get(key, txn=txn, flags=flags) def delete(self, key, txn=None) : key = bytes(key, "iso8859-1") return self._db.delete(key, txn=txn) def close (self) : return self._db.close() self.db = db_py3k(self.db) else : # Python 2.x pass # Initialize the table names list if this is a new database txn = self.env.txn_begin() try: if not getattr(self.db, "has_key")(_table_names_key, txn): getattr(self.db, "put_bytes", self.db.put) \ 
                    (_table_names_key, pickle.dumps([], 1), txn=txn)
        # Yes, bare except
        except:
            txn.abort()
            raise
        else:
            txn.commit()
        # TODO verify more of the database's metadata?
        self.__tablecolumns = {}

    def __del__(self):
        self.close()

    def close(self):
        if self.db is not None:
            self.db.close()
            self.db = None
        if self.env is not None:
            self.env.close()
            self.env = None

    def checkpoint(self, mins=0):
        self.env.txn_checkpoint(mins)

    def sync(self):
        self.db.sync()

    def _db_print(self) :
        """Print the database to stdout for debugging"""
        print("******** Printing raw database for debugging ********")
        cur = self.db.cursor()
        try:
            key, data = cur.first()
            while 1:
                print(repr({key: data}))
                # use a name other than "next" so the builtin next() is
                # still callable on the following iteration
                rec = next(cur)
                if rec:
                    key, data = rec
                else:
                    cur.close()
                    return
        except db.DBNotFoundError:
            cur.close()

    def CreateTable(self, table, columns):
        """CreateTable(table, columns) - Create a new table in the database.

        raises TableDBError if it already exists or for other DB errors.
        """
        assert isinstance(columns, list)

        txn = None
        try:
            # checking sanity of the table and column names here on
            # table creation will prevent problems elsewhere.
            if contains_metastrings(table):
                raise ValueError(
                    "bad table name: contains reserved metastrings")
            for column in columns :
                if contains_metastrings(column):
                    raise ValueError(
                        "bad column name: contains reserved metastrings")

            columnlist_key = _columns_key(table)
            if getattr(self.db, "has_key")(columnlist_key):
                raise TableAlreadyExists("table already exists")

            txn = self.env.txn_begin()
            # store the table's column info
            getattr(self.db, "put_bytes", self.db.put)(columnlist_key,
                    pickle.dumps(columns, 1), txn=txn)

            # add the table name to the tablelist
            tablelist = pickle.loads(getattr(self.db, "get_bytes",
                self.db.get)(_table_names_key, txn=txn, flags=db.DB_RMW))
            tablelist.append(table)
            # delete 1st, in case we opened with DB_DUP
            self.db.delete(_table_names_key, txn=txn)
            getattr(self.db, "put_bytes", self.db.put)(_table_names_key,
                    pickle.dumps(tablelist, 1), txn=txn)

            txn.commit()
            txn = None
        except db.DBError as dberror:
            if txn:
                txn.abort()
            if sys.version_info < (2, 6) :
                raise TableDBError(dberror[1])
            else :
                raise TableDBError(dberror.args[1])

    def ListTableColumns(self, table):
        """Return a list of columns in the given table.
        [] if the table doesn't exist.
        """
        assert isinstance(table, str)
        if contains_metastrings(table):
            raise ValueError("bad table name: contains reserved metastrings")

        columnlist_key = _columns_key(table)
        if not getattr(self.db, "has_key")(columnlist_key):
            return []
        pickledcolumnlist = getattr(self.db, "get_bytes",
                self.db.get)(columnlist_key)
        if pickledcolumnlist:
            return pickle.loads(pickledcolumnlist)
        else:
            return []

    def ListTables(self):
        """Return a list of tables in this database."""
        pickledtablelist = getattr(self.db, "get_bytes",
                self.db.get)(_table_names_key)
        if pickledtablelist:
            return pickle.loads(pickledtablelist)
        else:
            return []

    def CreateOrExtendTable(self, table, columns):
        """CreateOrExtendTable(table, columns)

        Create a new table in the database.

        If a table of this name already exists, extend it to have any
        additional columns present in the given list as well as all of
        its current columns.
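
        A hedged usage sketch (the database file, home directory, table
        and column names here are made up for illustration):

            t = bsdTableDB('tables.db', dbhome='/tmp/dbhome', create=1)
            t.CreateOrExtendTable('songs', ['artist', 'title'])
            # a second call with an extra column extends, not recreates
            t.CreateOrExtendTable('songs', ['artist', 'title', 'genre'])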
""" assert isinstance(columns, list) try: self.CreateTable(table, columns) except TableAlreadyExists: # the table already existed, add any new columns txn = None try: columnlist_key = _columns_key(table) txn = self.env.txn_begin() # load the current column list oldcolumnlist = pickle.loads( getattr(self.db, "get_bytes", self.db.get)(columnlist_key, txn=txn, flags=db.DB_RMW)) # create a hash table for fast lookups of column names in the # loop below oldcolumnhash = {} for c in oldcolumnlist: oldcolumnhash[c] = c # create a new column list containing both the old and new # column names newcolumnlist = copy.copy(oldcolumnlist) for c in columns: if not c in oldcolumnhash: newcolumnlist.append(c) # store the table's new extended column list if newcolumnlist != oldcolumnlist : # delete the old one first since we opened with DB_DUP self.db.delete(columnlist_key, txn=txn) getattr(self.db, "put_bytes", self.db.put)(columnlist_key, pickle.dumps(newcolumnlist, 1), txn=txn) txn.commit() txn = None self.__load_column_info(table) except db.DBError as dberror: if txn: txn.abort() if sys.version_info < (2, 6) : raise TableDBError(dberror[1]) else : raise TableDBError(dberror.args[1]) def __load_column_info(self, table) : """initialize the self.__tablecolumns dict""" # check the column names try: tcolpickles = getattr(self.db, "get_bytes", self.db.get)(_columns_key(table)) except db.DBNotFoundError: raise TableDBError("unknown table: %r" % (table,)) if not tcolpickles: raise TableDBError("unknown table: %r" % (table,)) self.__tablecolumns[table] = pickle.loads(tcolpickles) def __new_rowid(self, table, txn) : """Create a new unique row identifier""" unique = 0 while not unique: # Generate a random 64-bit row ID string # (note: might have <64 bits of true randomness # but it's plenty for our database id needs!) blist = [] for x in range(_rowid_str_len): blist.append(random.randint(0,255)) newid = struct.pack('B'*_rowid_str_len, *blist) if sys.version_info[0] >= 3 : newid = newid.decode("iso8859-1") # 8 bits # Guarantee uniqueness by adding this key to the database try: self.db.put(_rowid_key(table, newid), None, txn=txn, flags=db.DB_NOOVERWRITE) except db.DBKeyExistError: pass else: unique = 1 return newid def Insert(self, table, rowdict) : """Insert(table, datadict) - Insert a new row into the table using the keys+values from rowdict as the column values. """ txn = None try: if not getattr(self.db, "has_key")(_columns_key(table)): raise TableDBError("unknown table") # check the validity of each column name if not table in self.__tablecolumns: self.__load_column_info(table) for column in list(rowdict.keys()) : if not self.__tablecolumns[table].count(column): raise TableDBError("unknown column: %r" % (column,)) # get a unique row identifier for this row txn = self.env.txn_begin() rowid = self.__new_rowid(table, txn=txn) # insert the row values into the table database for column, dataitem in list(rowdict.items()): # store the value self.db.put(_data_key(table, column, rowid), dataitem, txn=txn) txn.commit() txn = None except db.DBError as dberror: # WIBNI we could just abort the txn and re-raise the exception? # But no, because TableDBError is not related to DBError via # inheritance, so it would be backwards incompatible. Do the next # best thing. 
info = sys.exc_info() if txn: txn.abort() self.db.delete(_rowid_key(table, rowid)) if sys.version_info < (2, 6) : raise TableDBError(dberror[1]).with_traceback(info[2]) else : raise TableDBError(dberror.args[1]).with_traceback(info[2]) def Modify(self, table, conditions={}, mappings={}): """Modify(table, conditions={}, mappings={}) - Modify items in rows matching 'conditions' using mapping functions in 'mappings' * table - the table name * conditions - a dictionary keyed on column names containing a condition callable expecting the data string as an argument and returning a boolean. * mappings - a dictionary keyed on column names containing a condition callable expecting the data string as an argument and returning the new string for that column. """ try: matching_rowids = self.__Select(table, [], conditions) # modify only requested columns columns = list(mappings.keys()) for rowid in list(matching_rowids.keys()): txn = None try: for column in columns: txn = self.env.txn_begin() # modify the requested column try: dataitem = self.db.get( _data_key(table, column, rowid), txn=txn) self.db.delete( _data_key(table, column, rowid), txn=txn) except db.DBNotFoundError: # XXXXXXX row key somehow didn't exist, assume no # error dataitem = None dataitem = mappings[column](dataitem) if dataitem is not None: self.db.put( _data_key(table, column, rowid), dataitem, txn=txn) txn.commit() txn = None # catch all exceptions here since we call unknown callables except: if txn: txn.abort() raise except db.DBError as dberror: if sys.version_info < (2, 6) : raise TableDBError(dberror[1]) else : raise TableDBError(dberror.args[1]) def Delete(self, table, conditions={}): """Delete(table, conditions) - Delete items matching the given conditions from the table. * conditions - a dictionary keyed on column names containing condition functions expecting the data string as an argument and returning a boolean. """ try: matching_rowids = self.__Select(table, [], conditions) # delete row data from all columns columns = self.__tablecolumns[table] for rowid in list(matching_rowids.keys()): txn = None try: txn = self.env.txn_begin() for column in columns: # delete the data key try: self.db.delete(_data_key(table, column, rowid), txn=txn) except db.DBNotFoundError: # XXXXXXX column may not exist, assume no error pass try: self.db.delete(_rowid_key(table, rowid), txn=txn) except db.DBNotFoundError: # XXXXXXX row key somehow didn't exist, assume no error pass txn.commit() txn = None except db.DBError as dberror: if txn: txn.abort() raise except db.DBError as dberror: if sys.version_info < (2, 6) : raise TableDBError(dberror[1]) else : raise TableDBError(dberror.args[1]) def Select(self, table, columns, conditions={}): """Select(table, columns, conditions) - retrieve specific row data Returns a list of row column->value mapping dictionaries. * columns - a list of which column data to return. If columns is None, all columns will be returned. * conditions - a dictionary keyed on column names containing callable conditions expecting the data string as an argument and returning a boolean. 
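
        A hedged sketch of a call (table and column names are
        illustrative; ExactCond is the helper defined earlier in this
        module); the result is a list of column->value dictionaries:

            rows = t.Select('songs', ['artist', 'title'],
                            conditions={'genre': ExactCond('Jazz')})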
""" try: if not table in self.__tablecolumns: self.__load_column_info(table) if columns is None: columns = self.__tablecolumns[table] matching_rowids = self.__Select(table, columns, conditions) except db.DBError as dberror: if sys.version_info < (2, 6) : raise TableDBError(dberror[1]) else : raise TableDBError(dberror.args[1]) # return the matches as a list of dictionaries return list(matching_rowids.values()) def __Select(self, table, columns, conditions): """__Select() - Used to implement Select and Delete (above) Returns a dictionary keyed on rowids containing dicts holding the row data for columns listed in the columns param that match the given conditions. * conditions is a dictionary keyed on column names containing callable conditions expecting the data string as an argument and returning a boolean. """ # check the validity of each column name if not table in self.__tablecolumns: self.__load_column_info(table) if columns is None: columns = self.tablecolumns[table] for column in (columns + list(conditions.keys())): if not self.__tablecolumns[table].count(column): raise TableDBError("unknown column: %r" % (column,)) # keyed on rows that match so far, containings dicts keyed on # column names containing the data for that row and column. matching_rowids = {} # keys are rowids that do not match rejected_rowids = {} # attempt to sort the conditions in such a way as to minimize full # column lookups def cmp_conditions(atuple, btuple): a = atuple[1] b = btuple[1] if type(a) is type(b): # Needed for python 3. "cmp" vanished in 3.0.1 def cmp(a, b) : if a==b : return 0 if a 0: for rowid, rowdata in list(matching_rowids.items()): for column in columns: if column in rowdata: continue try: rowdata[column] = self.db.get( _data_key(table, column, rowid)) except db.DBError as dberror: if sys.version_info < (2, 6) : if dberror[0] != db.DB_NOTFOUND: raise else : if dberror.args[0] != db.DB_NOTFOUND: raise rowdata[column] = None # return the matches return matching_rowids def Drop(self, table): """Remove an entire table from the database""" txn = None try: txn = self.env.txn_begin() # delete the column list self.db.delete(_columns_key(table), txn=txn) cur = self.db.cursor(txn) # delete all keys containing this tables column and row info table_key = _search_all_data_key(table) while 1: try: key, data = cur.set_range(table_key) except db.DBNotFoundError: break # only delete items in this table if key[:len(table_key)] != table_key: break cur.delete() # delete all rowids used by this table table_key = _search_rowid_key(table) while 1: try: key, data = cur.set_range(table_key) except db.DBNotFoundError: break # only delete items in this table if key[:len(table_key)] != table_key: break cur.delete() cur.close() # delete the tablename from the table name list tablelist = pickle.loads( getattr(self.db, "get_bytes", self.db.get)(_table_names_key, txn=txn, flags=db.DB_RMW)) try: tablelist.remove(table) except ValueError: # hmm, it wasn't there, oh well, that's what we want. 
                pass

            # delete 1st, in case we opened with DB_DUP
            self.db.delete(_table_names_key, txn=txn)
            getattr(self.db, "put_bytes", self.db.put)(_table_names_key,
                    pickle.dumps(tablelist, 1), txn=txn)

            txn.commit()
            txn = None

            if table in self.__tablecolumns:
                del self.__tablecolumns[table]
        except db.DBError as dberror:
            if txn:
                txn.abort()
            raise TableDBError(dberror.args[1])
bsddb3-6.1.0/Lib3/bsddb/test/0000755000000000000000000000000012363235112015441 5ustar rootroot00000000000000bsddb3-6.1.0/Lib3/bsddb/test/test_compat.py0000644000000000000000000001402412363206576020352 0ustar rootroot00000000000000"""
Copyright (c) 2008-2014, Jesus Cea Avion
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

    1. Redistributions of source code must retain the above copyright
       notice, this list of conditions and the following disclaimer.

    2. Redistributions in binary form must reproduce the above copyright
       notice, this list of conditions and the following disclaimer in the
       documentation and/or other materials provided with the distribution.

    3. Neither the name of Jesus Cea Avion nor the names of its
       contributors may be used to endorse or promote products derived
       from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""

"""
Test cases adapted from the test_bsddb.py module in Python's
regression test suite.
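
A hedged sketch of the dict-like legacy API exercised below (the file
name is made up for illustration):

    d = hashopen('/tmp/example.db', 'c')   # 'c' creates the file if needed
    d['key'] = 'value'
    print(d['key'])
    d.close()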
""" import os, string import unittest from .test_all import db, hashopen, btopen, rnopen, verbose, \ get_new_database_path class CompatibilityTestCase(unittest.TestCase): def setUp(self): self.filename = get_new_database_path() def tearDown(self): try: os.remove(self.filename) except os.error: pass def test01_btopen(self): self.do_bthash_test(btopen, 'btopen') def test02_hashopen(self): self.do_bthash_test(hashopen, 'hashopen') def test03_rnopen(self): data = "The quick brown fox jumped over the lazy dog.".split() if verbose: print("\nTesting: rnopen") f = rnopen(self.filename, 'c') for x in range(len(data)): f[x+1] = data[x] getTest = (f[1], f[2], f[3]) if verbose: print('%s %s %s' % getTest) self.assertEqual(getTest[1], 'quick', 'data mismatch!') rv = f.set_location(3) if rv != (3, 'brown'): self.fail('recno database set_location failed: '+repr(rv)) f[25] = 'twenty-five' f.close() del f f = rnopen(self.filename, 'w') f[20] = 'twenty' def noRec(f): rec = f[15] self.assertRaises(KeyError, noRec, f) def badKey(f): rec = f['a string'] self.assertRaises(TypeError, badKey, f) del f[3] rec = f.first() while rec: if verbose: print(rec) try: rec = next(f) except KeyError: break f.close() def test04_n_flag(self): f = hashopen(self.filename, 'n') f.close() def do_bthash_test(self, factory, what): if verbose: print('\nTesting: ', what) f = factory(self.filename, 'c') if verbose: print('creation...') # truth test if f: if verbose: print("truth test: true") else: if verbose: print("truth test: false") f['0'] = '' f['a'] = 'Guido' f['b'] = 'van' f['c'] = 'Rossum' f['d'] = 'invented' # 'e' intentionally left out f['f'] = 'Python' if verbose: print('%s %s %s' % (f['a'], f['b'], f['c'])) if verbose: print('key ordering...') start = f.set_location(f.first()[0]) if start != ('0', ''): self.fail("incorrect first() result: "+repr(start)) while 1: try: rec = next(f) except KeyError: self.assertEqual(rec, f.last(), 'Error, last <> last!') f.previous() break if verbose: print(rec) self.assertTrue('f' in f, 'Error, missing key!') # test that set_location() returns the next nearest key, value # on btree databases and raises KeyError on others. if factory == btopen: e = f.set_location('e') if e != ('f', 'Python'): self.fail('wrong key,value returned: '+repr(e)) else: try: e = f.set_location('e') except KeyError: pass else: self.fail("set_location on non-existent key did not raise KeyError") f.sync() f.close() # truth test try: if f: if verbose: print("truth test: true") else: if verbose: print("truth test: false") except db.DBError: pass else: self.fail("Exception expected") del f if verbose: print('modification...') f = factory(self.filename, 'w') f['d'] = 'discovered' if verbose: print('access...') for key in list(f.keys()): word = f[key] if verbose: print(word) def noRec(f): rec = f['no such key'] self.assertRaises(KeyError, noRec, f) def badKey(f): rec = f[15] self.assertRaises(TypeError, badKey, f) f.close() #---------------------------------------------------------------------- def test_suite(): return unittest.makeSuite(CompatibilityTestCase) if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib3/bsddb/test/test_get_none.py0000644000000000000000000000742212363206615020663 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. 
   Redistributions of source code must retain the above copyright
       notice, this list of conditions and the following disclaimer.

    2. Redistributions in binary form must reproduce the above copyright
       notice, this list of conditions and the following disclaimer in the
       documentation and/or other materials provided with the distribution.

    3. Neither the name of Jesus Cea Avion nor the names of its
       contributors may be used to endorse or promote products derived
       from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""

"""
TestCases for checking set_get_returns_none.
"""

import os, string
import unittest

from .test_all import db, verbose, get_new_database_path

#----------------------------------------------------------------------

class GetReturnsNoneTestCase(unittest.TestCase):
    def setUp(self):
        self.filename = get_new_database_path()

    def tearDown(self):
        try:
            os.remove(self.filename)
        except os.error:
            pass

    def test01_get_returns_none(self):
        d = db.DB()
        d.open(self.filename, db.DB_BTREE, db.DB_CREATE)
        d.set_get_returns_none(1)

        # string.ascii_letters exists on both Python 2 and 3;
        # string.letters is Python 2 only.
        for x in string.ascii_letters:
            d.put(x, x * 40)

        data = d.get('bad key')
        self.assertEqual(data, None)

        data = d.get(string.ascii_letters[0])
        self.assertEqual(data, string.ascii_letters[0]*40)

        count = 0
        c = d.cursor()
        rec = c.first()
        while rec:
            count = count + 1
            rec = next(c)

        self.assertEqual(rec, None)
        self.assertEqual(count, len(string.ascii_letters))

        c.close()
        d.close()

    def test02_get_raises_exception(self):
        d = db.DB()
        d.open(self.filename, db.DB_BTREE, db.DB_CREATE)
        d.set_get_returns_none(0)

        for x in string.ascii_letters:
            d.put(x, x * 40)

        self.assertRaises(db.DBNotFoundError, d.get, 'bad key')
        self.assertRaises(KeyError, d.get, 'bad key')

        data = d.get(string.ascii_letters[0])
        self.assertEqual(data, string.ascii_letters[0]*40)

        count = 0
        exceptionHappened = 0
        c = d.cursor()
        rec = c.first()
        while rec:
            count = count + 1
            try:
                rec = next(c)
            except db.DBNotFoundError:  # end of the records
                exceptionHappened = 1
                break

        self.assertNotEqual(rec, None)
        self.assertTrue(exceptionHappened)
        self.assertEqual(count, len(string.ascii_letters))

        c.close()
        d.close()

#----------------------------------------------------------------------

def test_suite():
    return unittest.makeSuite(GetReturnsNoneTestCase)

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')
bsddb3-6.1.0/Lib3/bsddb/test/test_fileid.py0000644000000000000000000000652612363206601020320 0ustar rootroot00000000000000"""
Copyright (c) 2008-2014, Jesus Cea Avion
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

    1. Redistributions of source code must retain the above copyright
       notice, this list of conditions and the following disclaimer.

    2.
   Redistributions in binary form must reproduce the above copyright
       notice, this list of conditions and the following disclaimer in the
       documentation and/or other materials provided with the distribution.

    3. Neither the name of Jesus Cea Avion nor the names of its
       contributors may be used to endorse or promote products derived
       from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""

"""TestCase for resetting the file ID.
"""

import os
import shutil
import unittest

from .test_all import db, test_support, get_new_environment_path, get_new_database_path

class FileidResetTestCase(unittest.TestCase):
    def setUp(self):
        self.db_path_1 = get_new_database_path()
        self.db_path_2 = get_new_database_path()
        self.db_env_path = get_new_environment_path()

    def test_fileid_reset(self):
        # create DB 1
        self.db1 = db.DB()
        self.db1.open(self.db_path_1, dbtype=db.DB_HASH,
                      flags=(db.DB_CREATE|db.DB_EXCL))
        self.db1.put('spam', 'eggs')
        self.db1.close()

        shutil.copy(self.db_path_1, self.db_path_2)

        self.db2 = db.DB()
        self.db2.open(self.db_path_2, dbtype=db.DB_HASH)
        self.db2.put('spam', 'spam')
        self.db2.close()

        self.db_env = db.DBEnv()
        self.db_env.open(self.db_env_path, db.DB_CREATE|db.DB_INIT_MPOOL)

        # use fileid_reset() here
        self.db_env.fileid_reset(self.db_path_2)

        self.db1 = db.DB(self.db_env)
        self.db1.open(self.db_path_1, dbtype=db.DB_HASH, flags=db.DB_RDONLY)
        self.assertEqual(self.db1.get('spam'), 'eggs')

        self.db2 = db.DB(self.db_env)
        self.db2.open(self.db_path_2, dbtype=db.DB_HASH, flags=db.DB_RDONLY)
        self.assertEqual(self.db2.get('spam'), 'spam')

        self.db1.close()
        self.db2.close()

        self.db_env.close()

    def tearDown(self):
        test_support.unlink(self.db_path_1)
        test_support.unlink(self.db_path_2)
        test_support.rmtree(self.db_env_path)

def test_suite():
    suite = unittest.TestSuite()
    suite.addTest(unittest.makeSuite(FileidResetTestCase))
    return suite

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')
bsddb3-6.1.0/Lib3/bsddb/test/test_cursor_pget_bug.py0000644000000000000000000000665012363206612022255 0ustar rootroot00000000000000"""
Copyright (c) 2008-2014, Jesus Cea Avion
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

    1. Redistributions of source code must retain the above copyright
       notice, this list of conditions and the following disclaimer.

    2. Redistributions in binary form must reproduce the above copyright
       notice, this list of conditions and the following disclaimer in the
       documentation and/or other materials provided with the distribution.

    3. Neither the name of Jesus Cea Avion nor the names of its
       contributors may be used to endorse or promote products derived
       from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ import unittest import os, glob from .test_all import db, test_support, get_new_environment_path, \ get_new_database_path #---------------------------------------------------------------------- class pget_bugTestCase(unittest.TestCase): """Verify that cursor.pget works properly""" db_name = 'test-cursor_pget.db' def setUp(self): self.homeDir = get_new_environment_path() self.env = db.DBEnv() self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL) self.primary_db = db.DB(self.env) self.primary_db.open(self.db_name, 'primary', db.DB_BTREE, db.DB_CREATE) self.secondary_db = db.DB(self.env) self.secondary_db.set_flags(db.DB_DUP) self.secondary_db.open(self.db_name, 'secondary', db.DB_BTREE, db.DB_CREATE) self.primary_db.associate(self.secondary_db, lambda key, data: data) self.primary_db.put('salad', 'eggs') self.primary_db.put('spam', 'ham') self.primary_db.put('omelet', 'eggs') def tearDown(self): self.secondary_db.close() self.primary_db.close() self.env.close() del self.secondary_db del self.primary_db del self.env test_support.rmtree(self.homeDir) def test_pget(self): cursor = self.secondary_db.cursor() self.assertEqual(('eggs', 'salad', 'eggs'), cursor.pget(key='eggs', flags=db.DB_SET)) self.assertEqual(('eggs', 'omelet', 'eggs'), cursor.pget(db.DB_NEXT_DUP)) self.assertEqual(None, cursor.pget(db.DB_NEXT_DUP)) self.assertEqual(('ham', 'spam', 'ham'), cursor.pget('ham', 'spam', flags=db.DB_SET)) self.assertEqual(None, cursor.pget(db.DB_NEXT_DUP)) cursor.close() def test_suite(): return unittest.makeSuite(pget_bugTestCase) if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib3/bsddb/test/__init__.py0000644000000000000000000000000012363206572017550 0ustar rootroot00000000000000bsddb3-6.1.0/Lib3/bsddb/test/test_distributed_transactions.py0000644000000000000000000001435112363206575024203 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. 
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """TestCases for distributed transactions. """ import os import unittest from .test_all import db, test_support, get_new_environment_path, \ get_new_database_path from .test_all import verbose #---------------------------------------------------------------------- class DBTxn_distributed(unittest.TestCase): num_txns=1234 nosync=True must_open_db=False def _create_env(self, must_open_db) : self.dbenv = db.DBEnv() self.dbenv.set_tx_max(self.num_txns) self.dbenv.set_lk_max_lockers(self.num_txns*2) self.dbenv.set_lk_max_locks(self.num_txns*2) self.dbenv.set_lk_max_objects(self.num_txns*2) if self.nosync : self.dbenv.set_flags(db.DB_TXN_NOSYNC,True) self.dbenv.open(self.homeDir, db.DB_CREATE | db.DB_THREAD | db.DB_RECOVER | db.DB_INIT_TXN | db.DB_INIT_LOG | db.DB_INIT_MPOOL | db.DB_INIT_LOCK, 0o666) self.db = db.DB(self.dbenv) self.db.set_re_len(db.DB_GID_SIZE) if must_open_db : txn=self.dbenv.txn_begin() self.db.open(self.filename, db.DB_QUEUE, db.DB_CREATE | db.DB_THREAD, 0o666, txn=txn) txn.commit() def setUp(self) : self.homeDir = get_new_environment_path() self.filename = "test" return self._create_env(must_open_db=True) def _destroy_env(self): if self.nosync : self.dbenv.log_flush() self.db.close() self.dbenv.close() def tearDown(self): self._destroy_env() test_support.rmtree(self.homeDir) def _recreate_env(self,must_open_db) : self._destroy_env() self._create_env(must_open_db) def test01_distributed_transactions(self) : txns=set() adapt = lambda x : x import sys if sys.version_info[0] >= 3 : adapt = lambda x : bytes(x, "ascii") # Create transactions, "prepare" them, and # let them be garbage collected. for i in range(self.num_txns) : txn = self.dbenv.txn_begin() gid = "%%%dd" %db.DB_GID_SIZE gid = adapt(gid %i) self.db.put(i, gid, txn=txn, flags=db.DB_APPEND) txns.add(gid) txn.prepare(gid) del txn self._recreate_env(self.must_open_db) # Get "to be recovered" transactions but # let them be garbage collected. recovered_txns=self.dbenv.txn_recover() self.assertEqual(self.num_txns,len(recovered_txns)) for gid,txn in recovered_txns : self.assertTrue(gid in txns) del txn del recovered_txns self._recreate_env(self.must_open_db) # Get "to be recovered" transactions. Commit, abort and # discard them. recovered_txns=self.dbenv.txn_recover() self.assertEqual(self.num_txns,len(recovered_txns)) discard_txns=set() committed_txns=set() state=0 for gid,txn in recovered_txns : if state==0 or state==1: committed_txns.add(gid) txn.commit() elif state==2 : txn.abort() elif state==3 : txn.discard() discard_txns.add(gid) state=-1 state+=1 del txn del recovered_txns self._recreate_env(self.must_open_db) # Verify the discarded transactions are still # around, and dispose them. 
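        # txn_recover() returns a list of (gid, txn) pairs, one for each
        # transaction that was prepared but never resolved; everything we
        # discarded above must show up again here so it can be aborted.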
recovered_txns=self.dbenv.txn_recover() self.assertEqual(len(discard_txns),len(recovered_txns)) for gid,txn in recovered_txns : txn.abort() del txn del recovered_txns self._recreate_env(must_open_db=True) # Be sure there are not pending transactions. # Check also database size. recovered_txns=self.dbenv.txn_recover() self.assertTrue(len(recovered_txns)==0) self.assertEqual(len(committed_txns),self.db.stat()["nkeys"]) class DBTxn_distributedSYNC(DBTxn_distributed): nosync=False class DBTxn_distributed_must_open_db(DBTxn_distributed): must_open_db=True class DBTxn_distributedSYNC_must_open_db(DBTxn_distributed): nosync=False must_open_db=True #---------------------------------------------------------------------- def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(DBTxn_distributed)) suite.addTest(unittest.makeSuite(DBTxn_distributedSYNC)) suite.addTest(unittest.makeSuite(DBTxn_distributed_must_open_db)) suite.addTest(unittest.makeSuite(DBTxn_distributedSYNC_must_open_db)) return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib3/bsddb/test/test_misc.py0000644000000000000000000001455112363206605020020 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
""" """Miscellaneous bsddb module test cases """ import os, sys import unittest from .test_all import db, dbshelve, hashopen, test_support, get_new_environment_path, get_new_database_path #---------------------------------------------------------------------- class MiscTestCase(unittest.TestCase): def setUp(self): self.filename = get_new_database_path() self.homeDir = get_new_environment_path() def tearDown(self): test_support.unlink(self.filename) test_support.rmtree(self.homeDir) def test01_badpointer(self): dbs = dbshelve.open(self.filename) dbs.close() self.assertRaises(db.DBError, dbs.get, "foo") def test02_db_home(self): env = db.DBEnv() # check for crash fixed when db_home is used before open() self.assertTrue(env.db_home is None) env.open(self.homeDir, db.DB_CREATE) if sys.version_info[0] < 3 : self.assertEqual(self.homeDir, env.db_home) else : self.assertEqual(bytes(self.homeDir, "ascii"), env.db_home) def test03_repr_closed_db(self): db = hashopen(self.filename) db.close() rp = repr(db) self.assertEqual(rp, "{}") def test04_repr_db(self) : db = hashopen(self.filename) d = {} for i in range(100) : db[repr(i)] = repr(100*i) d[repr(i)] = repr(100*i) db.close() db = hashopen(self.filename) rp = repr(sorted(db.items())) rd = repr(sorted(d.items())) self.assertEqual(rp, rd) db.close() # http://sourceforge.net/tracker/index.php?func=detail&aid=1708868&group_id=13900&atid=313900 # # See the bug report for details. # # The problem was that make_key_dbt() was not allocating a copy of # string keys but FREE_DBT() was always being told to free it when the # database was opened with DB_THREAD. def test05_double_free_make_key_dbt(self): try: db1 = db.DB() db1.open(self.filename, None, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD) curs = db1.cursor() t = curs.get("/foo", db.DB_SET) # double free happened during exit from DBC_get finally: db1.close() test_support.unlink(self.filename) def test06_key_with_null_bytes(self): try: db1 = db.DB() db1.open(self.filename, None, db.DB_HASH, db.DB_CREATE) db1['a'] = 'eh?' db1['a\x00'] = 'eh zed.' db1['a\x00a'] = 'eh zed eh?' db1['aaa'] = 'eh eh eh!' keys = list(db1.keys()) keys.sort() self.assertEqual(['a', 'a\x00', 'a\x00a', 'aaa'], keys) self.assertEqual(db1['a'], 'eh?') self.assertEqual(db1['a\x00'], 'eh zed.') self.assertEqual(db1['a\x00a'], 'eh zed eh?') self.assertEqual(db1['aaa'], 'eh eh eh!') finally: db1.close() test_support.unlink(self.filename) def test07_DB_set_flags_persists(self): try: db1 = db.DB() db1.set_flags(db.DB_DUPSORT) db1.open(self.filename, db.DB_HASH, db.DB_CREATE) db1['a'] = 'eh' db1['a'] = 'A' self.assertEqual([('a', 'A')], list(db1.items())) db1.put('a', 'Aa') self.assertEqual([('a', 'A'), ('a', 'Aa')], list(db1.items())) db1.close() db1 = db.DB() # no set_flags call, we're testing that it reads and obeys # the flags on open. db1.open(self.filename, db.DB_HASH) self.assertEqual([('a', 'A'), ('a', 'Aa')], list(db1.items())) # if it read the flags right this will replace all values # for key 'a' instead of adding a new one. 
            # (as a dict should)
            db1['a'] = 'new A'
            self.assertEqual([('a', 'new A')], list(db1.items()))
        finally:
            db1.close()
            test_support.unlink(self.filename)

    def test08_ExceptionTypes(self) :
        self.assertTrue(issubclass(db.DBError, Exception))
        for i, j in list(db.__dict__.items()) :
            if i.startswith("DB") and i.endswith("Error") :
                self.assertTrue(issubclass(j, db.DBError), msg=i)
                if i not in ("DBKeyEmptyError", "DBNotFoundError") :
                    self.assertFalse(issubclass(j, KeyError), msg=i)

        # These two exceptions have two bases
        self.assertTrue(issubclass(db.DBKeyEmptyError, KeyError))
        self.assertTrue(issubclass(db.DBNotFoundError, KeyError))


#----------------------------------------------------------------------

def test_suite():
    return unittest.makeSuite(MiscTestCase)

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')
bsddb3-6.1.0/Lib3/bsddb/test/test_associate.py0000644000000000000000000004040112363206621021027 0ustar rootroot00000000000000"""
Copyright (c) 2008-2014, Jesus Cea Avion
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

    1. Redistributions of source code must retain the above copyright
       notice, this list of conditions and the following disclaimer.

    2. Redistributions in binary form must reproduce the above copyright
       notice, this list of conditions and the following disclaimer in the
       documentation and/or other materials provided with the distribution.

    3. Neither the name of Jesus Cea Avion nor the names of its
       contributors may be used to endorse or promote products derived
       from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""

"""
TestCases for DB.associate.
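
A hedged sketch of the pattern under test (the method is real, the
callback is illustrative):

    primary.associate(secondary, callback)

where callback(primary_key, primary_data) returns the secondary key
derived from that record, or db.DB_DONOTINDEX to keep the record out of
the secondary index; see getGenre() below for the concrete callback
used here.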
""" import sys, os, string import time from pprint import pprint import unittest from .test_all import db, dbshelve, test_support, verbose, have_threads, \ get_new_environment_path #---------------------------------------------------------------------- musicdata = { 1 : ("Bad English", "The Price Of Love", "Rock"), 2 : ("DNA featuring Suzanne Vega", "Tom's Diner", "Rock"), 3 : ("George Michael", "Praying For Time", "Rock"), 4 : ("Gloria Estefan", "Here We Are", "Rock"), 5 : ("Linda Ronstadt", "Don't Know Much", "Rock"), 6 : ("Michael Bolton", "How Am I Supposed To Live Without You", "Blues"), 7 : ("Paul Young", "Oh Girl", "Rock"), 8 : ("Paula Abdul", "Opposites Attract", "Rock"), 9 : ("Richard Marx", "Should've Known Better", "Rock"), 10: ("Rod Stewart", "Forever Young", "Rock"), 11: ("Roxette", "Dangerous", "Rock"), 12: ("Sheena Easton", "The Lover In Me", "Rock"), 13: ("Sinead O'Connor", "Nothing Compares 2 U", "Rock"), 14: ("Stevie B.", "Because I Love You", "Rock"), 15: ("Taylor Dayne", "Love Will Lead You Back", "Rock"), 16: ("The Bangles", "Eternal Flame", "Rock"), 17: ("Wilson Phillips", "Release Me", "Rock"), 18: ("Billy Joel", "Blonde Over Blue", "Rock"), 19: ("Billy Joel", "Famous Last Words", "Rock"), 20: ("Billy Joel", "Lullabye (Goodnight, My Angel)", "Rock"), 21: ("Billy Joel", "The River Of Dreams", "Rock"), 22: ("Billy Joel", "Two Thousand Years", "Rock"), 23: ("Janet Jackson", "Alright", "Rock"), 24: ("Janet Jackson", "Black Cat", "Rock"), 25: ("Janet Jackson", "Come Back To Me", "Rock"), 26: ("Janet Jackson", "Escapade", "Rock"), 27: ("Janet Jackson", "Love Will Never Do (Without You)", "Rock"), 28: ("Janet Jackson", "Miss You Much", "Rock"), 29: ("Janet Jackson", "Rhythm Nation", "Rock"), 30: ("Janet Jackson", "State Of The World", "Rock"), 31: ("Janet Jackson", "The Knowledge", "Rock"), 32: ("Spyro Gyra", "End of Romanticism", "Jazz"), 33: ("Spyro Gyra", "Heliopolis", "Jazz"), 34: ("Spyro Gyra", "Jubilee", "Jazz"), 35: ("Spyro Gyra", "Little Linda", "Jazz"), 36: ("Spyro Gyra", "Morning Dance", "Jazz"), 37: ("Spyro Gyra", "Song for Lorraine", "Jazz"), 38: ("Yes", "Owner Of A Lonely Heart", "Rock"), 39: ("Yes", "Rhythm Of Love", "Rock"), 40: ("Cusco", "Dream Catcher", "New Age"), 41: ("Cusco", "Geronimos Laughter", "New Age"), 42: ("Cusco", "Ghost Dance", "New Age"), 43: ("Blue Man Group", "Drumbone", "New Age"), 44: ("Blue Man Group", "Endless Column", "New Age"), 45: ("Blue Man Group", "Klein Mandelbrot", "New Age"), 46: ("Kenny G", "Silhouette", "Jazz"), 47: ("Sade", "Smooth Operator", "Jazz"), 48: ("David Arkenstone", "Papillon (On The Wings Of The Butterfly)", "New Age"), 49: ("David Arkenstone", "Stepping Stars", "New Age"), 50: ("David Arkenstone", "Carnation Lily Lily Rose", "New Age"), 51: ("David Lanz", "Behind The Waterfall", "New Age"), 52: ("David Lanz", "Cristofori's Dream", "New Age"), 53: ("David Lanz", "Heartsounds", "New Age"), 54: ("David Lanz", "Leaves on the Seine", "New Age"), 99: ("unknown artist", "Unnamed song", "Unknown"), } #---------------------------------------------------------------------- class AssociateErrorTestCase(unittest.TestCase): def setUp(self): self.filename = self.__class__.__name__ + '.db' self.homeDir = get_new_environment_path() self.env = db.DBEnv() self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL) def tearDown(self): self.env.close() self.env = None test_support.rmtree(self.homeDir) def test00_associateDBError(self): if verbose: print('\n', '-=' * 30) print("Running %s.test00_associateDBError..." 
% \ self.__class__.__name__) dupDB = db.DB(self.env) dupDB.set_flags(db.DB_DUP) dupDB.open(self.filename, "primary", db.DB_BTREE, db.DB_CREATE) secDB = db.DB(self.env) secDB.open(self.filename, "secondary", db.DB_BTREE, db.DB_CREATE) # dupDB has been configured to allow duplicates, it can't # associate with a secondary. Berkeley DB will return an error. try: def f(a,b): return a+b dupDB.associate(secDB, f) except db.DBError: # good secDB.close() dupDB.close() else: secDB.close() dupDB.close() self.fail("DBError exception was expected") #---------------------------------------------------------------------- class AssociateTestCase(unittest.TestCase): keytype = '' envFlags = 0 dbFlags = 0 def setUp(self): self.filename = self.__class__.__name__ + '.db' self.homeDir = get_new_environment_path() self.env = db.DBEnv() self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK | db.DB_THREAD | self.envFlags) def tearDown(self): self.closeDB() self.env.close() self.env = None test_support.rmtree(self.homeDir) def addDataToDB(self, d, txn=None): for key, value in list(musicdata.items()): if type(self.keytype) == type(''): key = "%02d" % key d.put(key, '|'.join(value), txn=txn) def createDB(self, txn=None): self.cur = None self.secDB = None self.primary = db.DB(self.env) self.primary.set_get_returns_none(2) self.primary.open(self.filename, "primary", self.dbtype, db.DB_CREATE | db.DB_THREAD | self.dbFlags, txn=txn) def closeDB(self): if self.cur: self.cur.close() self.cur = None if self.secDB: self.secDB.close() self.secDB = None self.primary.close() self.primary = None def getDB(self): return self.primary def _associateWithDB(self, getGenre): self.createDB() self.secDB = db.DB(self.env) self.secDB.set_flags(db.DB_DUP) self.secDB.set_get_returns_none(2) self.secDB.open(self.filename, "secondary", db.DB_BTREE, db.DB_CREATE | db.DB_THREAD | self.dbFlags) self.getDB().associate(self.secDB, getGenre) self.addDataToDB(self.getDB()) self.finish_test(self.secDB) def test01_associateWithDB(self): if verbose: print('\n', '-=' * 30) print("Running %s.test01_associateWithDB..." % \ self.__class__.__name__) return self._associateWithDB(self.getGenre) def _associateAfterDB(self, getGenre) : self.createDB() self.addDataToDB(self.getDB()) self.secDB = db.DB(self.env) self.secDB.set_flags(db.DB_DUP) self.secDB.open(self.filename, "secondary", db.DB_BTREE, db.DB_CREATE | db.DB_THREAD | self.dbFlags) # adding the DB_CREATE flag will cause it to index existing records self.getDB().associate(self.secDB, getGenre, db.DB_CREATE) self.finish_test(self.secDB) def test02_associateAfterDB(self): if verbose: print('\n', '-=' * 30) print("Running %s.test02_associateAfterDB..." % \ self.__class__.__name__) return self._associateAfterDB(self.getGenre) def test03_associateWithDB(self): if verbose: print('\n', '-=' * 30) print("Running %s.test03_associateWithDB..." % \ self.__class__.__name__) return self._associateWithDB(self.getGenreList) def test04_associateAfterDB(self): if verbose: print('\n', '-=' * 30) print("Running %s.test04_associateAfterDB..." 
    def finish_test(self, secDB, txn=None):
        # 'Blues' should not be in the secondary database
        vals = secDB.pget('Blues', txn=txn)
        self.assertEqual(vals, None, vals)

        vals = secDB.pget('Unknown', txn=txn)
        self.assertTrue(vals[0] == 99 or vals[0] == '99', vals)
        vals[1].index('Unknown')
        vals[1].index('Unnamed')
        vals[1].index('unknown')

        if verbose:
            print("Primary key traversal:")
        self.cur = self.getDB().cursor(txn)
        count = 0
        rec = self.cur.first()
        while rec is not None:
            if type(self.keytype) == type(''):
                self.assertTrue(int(rec[0]))  # for primary db, key is a number
            else:
                self.assertTrue(rec[0] and type(rec[0]) == type(0))
            count = count + 1
            if verbose:
                print(rec)
            rec = getattr(self.cur, "next")()
        self.assertEqual(count, len(musicdata))  # all items accounted for

        if verbose:
            print("Secondary key traversal:")
        self.cur = secDB.cursor(txn)
        count = 0

        # test cursor pget
        vals = self.cur.pget('Unknown', flags=db.DB_LAST)
        self.assertTrue(vals[1] == 99 or vals[1] == '99', vals)
        self.assertEqual(vals[0], 'Unknown')
        vals[2].index('Unknown')
        vals[2].index('Unnamed')
        vals[2].index('unknown')

        vals = self.cur.pget('Unknown', data='wrong value',
                             flags=db.DB_GET_BOTH)
        self.assertEqual(vals, None, vals)

        rec = self.cur.first()
        self.assertEqual(rec[0], "Jazz")
        while rec is not None:
            count = count + 1
            if verbose:
                print(rec)
            rec = getattr(self.cur, "next")()
        # all items accounted for EXCEPT for 1 with "Blues" genre
        self.assertEqual(count, len(musicdata) - 1)

        self.cur = None

    def getGenre(self, priKey, priData):
        self.assertEqual(type(priData), type(""))
        genre = priData.split('|')[2]

        if verbose:
            print('getGenre key: %r data: %r' % (priKey, priData))

        if genre == 'Blues':
            return db.DB_DONOTINDEX
        else:
            return genre

    def getGenreList(self, priKey, PriData):
        v = self.getGenre(priKey, PriData)
        if type(v) == type(""):
            v = [v]
        return v

#----------------------------------------------------------------------

class AssociateHashTestCase(AssociateTestCase):
    dbtype = db.DB_HASH

class AssociateBTreeTestCase(AssociateTestCase):
    dbtype = db.DB_BTREE

class AssociateRecnoTestCase(AssociateTestCase):
    dbtype = db.DB_RECNO
    keytype = 0

#----------------------------------------------------------------------
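# Editor's note: two callback behaviours worth calling out, shown as a
# hedged, stand-alone sketch (the function names are hypothetical and the
# functions are not used by the suite).  Returning db.DB_DONOTINDEX keeps a
# record out of the secondary entirely; returning a list, as getGenreList
# above does, indexes the record under every listed key (this needs a
# Berkeley DB version with multi-key secondary support).
def _demo_callbacks():
    def genre_or_skip(key, data):
        genre = data.split('|')[2]
        # 'Blues' records are simply never indexed.
        if genre == 'Blues':
            return db.DB_DONOTINDEX
        return genre

    def genre_as_list(key, data):
        v = genre_or_skip(key, data)
        if isinstance(v, str):
            v = [v]  # one secondary entry per list element
        return v

    return genre_or_skip, genre_as_list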
class AssociateBTreeTxnTestCase(AssociateBTreeTestCase):
    envFlags = db.DB_INIT_TXN
    dbFlags = 0

    def txn_finish_test(self, sDB, txn):
        try:
            self.finish_test(sDB, txn=txn)
        finally:
            if self.cur:
                self.cur.close()
                self.cur = None
            if txn:
                txn.commit()

    def test13_associate_in_transaction(self):
        if verbose:
            print('\n', '-=' * 30)
            print("Running %s.test13_associate_in_transaction..." % \
                  self.__class__.__name__)

        txn = self.env.txn_begin()
        try:
            self.createDB(txn=txn)

            self.secDB = db.DB(self.env)
            self.secDB.set_flags(db.DB_DUP)
            self.secDB.set_get_returns_none(2)
            self.secDB.open(self.filename, "secondary", db.DB_BTREE,
                            db.DB_CREATE | db.DB_THREAD, txn=txn)
            self.getDB().associate(self.secDB, self.getGenre, txn=txn)

            self.addDataToDB(self.getDB(), txn=txn)
        except:
            txn.abort()
            raise

        self.txn_finish_test(self.secDB, txn=txn)

#----------------------------------------------------------------------

class ShelveAssociateTestCase(AssociateTestCase):
    def createDB(self):
        self.primary = dbshelve.open(self.filename,
                                     dbname="primary",
                                     dbenv=self.env,
                                     filetype=self.dbtype)

    def addDataToDB(self, d):
        for key, value in list(musicdata.items()):
            if type(self.keytype) == type(''):
                key = "%02d" % key
            d.put(key, value)  # save the value as is this time

    def getGenre(self, priKey, priData):
        self.assertEqual(type(priData), type(()))
        if verbose:
            print('getGenre key: %r data: %r' % (priKey, priData))
        genre = priData[2]
        if genre == 'Blues':
            return db.DB_DONOTINDEX
        else:
            return genre

class ShelveAssociateHashTestCase(ShelveAssociateTestCase):
    dbtype = db.DB_HASH

class ShelveAssociateBTreeTestCase(ShelveAssociateTestCase):
    dbtype = db.DB_BTREE

class ShelveAssociateRecnoTestCase(ShelveAssociateTestCase):
    dbtype = db.DB_RECNO
    keytype = 0

#----------------------------------------------------------------------

class ThreadedAssociateTestCase(AssociateTestCase):
    def addDataToDB(self, d):
        t1 = Thread(target=self.writer1, args=(d,))
        t2 = Thread(target=self.writer2, args=(d,))
        t1.setDaemon(True)
        t2.setDaemon(True)
        t1.start()
        t2.start()
        t1.join()
        t2.join()

    def writer1(self, d):
        for key, value in list(musicdata.items()):
            if type(self.keytype) == type(''):
                key = "%02d" % key
            d.put(key, '|'.join(value))

    def writer2(self, d):
        for x in range(100, 600):
            key = 'z%2d' % x
            value = [key] * 4
            d.put(key, '|'.join(value))

# These three cases previously derived from ShelveAssociateTestCase, which
# left ThreadedAssociateTestCase (and its two-writer addDataToDB) unused;
# they now derive from ThreadedAssociateTestCase, matching their names.
class ThreadedAssociateHashTestCase(ThreadedAssociateTestCase):
    dbtype = db.DB_HASH

class ThreadedAssociateBTreeTestCase(ThreadedAssociateTestCase):
    dbtype = db.DB_BTREE

class ThreadedAssociateRecnoTestCase(ThreadedAssociateTestCase):
    dbtype = db.DB_RECNO
    keytype = 0

#----------------------------------------------------------------------

def test_suite():
    suite = unittest.TestSuite()
    suite.addTest(unittest.makeSuite(AssociateErrorTestCase))

    suite.addTest(unittest.makeSuite(AssociateHashTestCase))
    suite.addTest(unittest.makeSuite(AssociateBTreeTestCase))
    suite.addTest(unittest.makeSuite(AssociateRecnoTestCase))

    suite.addTest(unittest.makeSuite(AssociateBTreeTxnTestCase))

    suite.addTest(unittest.makeSuite(ShelveAssociateHashTestCase))
    suite.addTest(unittest.makeSuite(ShelveAssociateBTreeTestCase))
    suite.addTest(unittest.makeSuite(ShelveAssociateRecnoTestCase))

    if have_threads:
        suite.addTest(unittest.makeSuite(ThreadedAssociateHashTestCase))
        suite.addTest(unittest.makeSuite(ThreadedAssociateBTreeTestCase))
        suite.addTest(unittest.makeSuite(ThreadedAssociateRecnoTestCase))

    return suite

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')
bsddb3-6.1.0/Lib3/bsddb/test/test_early_close.py0000644000000000000000000001772312363206604021367 0ustar rootroot00000000000000"""
Copyright (c) 2008-2014, Jesus Cea Avion
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

    1. Redistributions of source code must retain the above copyright
       notice, this list of conditions and the following disclaimer.

    2.
       Redistributions in binary form must reproduce the above copyright
       notice, this list of conditions and the following disclaimer in the
       documentation and/or other materials provided with the distribution.

    3. Neither the name of Jesus Cea Avion nor the names of its
       contributors may be used to endorse or promote products derived
       from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""

"""TestCases for checking that it does not segfault when a DBEnv object
is closed before its DB objects.
"""

import os, sys
import unittest

from .test_all import db, test_support, verbose, \
        get_new_environment_path, get_new_database_path

# We're going to get warnings in this module about trying to close the db
# when its env is already closed.  Let's just ignore those.
try:
    import warnings
except ImportError:
    pass
else:
    warnings.filterwarnings('ignore',
                            message='DB could not be closed in',
                            category=RuntimeWarning)

#----------------------------------------------------------------------

class DBEnvClosedEarlyCrash(unittest.TestCase):
    def setUp(self):
        self.homeDir = get_new_environment_path()
        self.filename = "test"

    def tearDown(self):
        test_support.rmtree(self.homeDir)

    def test01_close_dbenv_before_db(self):
        dbenv = db.DBEnv()
        dbenv.open(self.homeDir,
                   db.DB_INIT_CDB | db.DB_CREATE |
                   db.DB_THREAD | db.DB_INIT_MPOOL,
                   0o666)

        d = db.DB(dbenv)
        d2 = db.DB(dbenv)
        d.open(self.filename, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD,
               0o666)

        self.assertRaises(db.DBNoSuchFileError, d2.open,
                          self.filename + "2", db.DB_BTREE, db.DB_THREAD,
                          0o666)

        d.put("test", "this is a test")
        self.assertEqual(d.get("test"), "this is a test", "put!=get")
        dbenv.close()  # This "close" should close the child db handle also
        self.assertRaises(db.DBError, d.get, "test")

    def test02_close_dbenv_before_dbcursor(self):
        dbenv = db.DBEnv()
        dbenv.open(self.homeDir,
                   db.DB_INIT_CDB | db.DB_CREATE |
                   db.DB_THREAD | db.DB_INIT_MPOOL,
                   0o666)

        d = db.DB(dbenv)
        d.open(self.filename, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD,
               0o666)

        d.put("test", "this is a test")
        d.put("test2", "another test")
        d.put("test3", "another one")
        self.assertEqual(d.get("test"), "this is a test", "put!=get")
        c = d.cursor()
        c.first()
        next(c)
        d.close()  # This "close" should close the child db handle also
        # db.close should close the child cursor
        self.assertRaises(db.DBError, c.__next__)

        d = db.DB(dbenv)
        d.open(self.filename, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD,
               0o666)
        c = d.cursor()
        c.first()
        next(c)
        dbenv.close()
        # The "close" should close the child db handle also, with cursors
        self.assertRaises(db.DBError, c.__next__)
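    # Editor's note: a hedged sketch of the close-ordering rule these tests
    # pin down (the method name is hypothetical and it is never called by
    # the suite): closing a parent handle implicitly invalidates its
    # children, so the safe order is always cursor, then DB, then DBEnv.
    def _example_close_order(self):
        dbenv = db.DBEnv()
        dbenv.open(self.homeDir, db.DB_INIT_MPOOL | db.DB_CREATE, 0o666)
        d = db.DB(dbenv)
        d.open(self.filename, db.DB_BTREE, db.DB_CREATE, 0o666)
        c = d.cursor()
        # Close children before parents; the reverse order is what the
        # surrounding tests deliberately provoke.
        c.close()
        d.close()
        dbenv.close()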
d.put("test3","another one") self.assertEqual(d.get("test"), "this is a test", "put!=get") c=d.cursor() c.first() next(c) d.close() # The "close" should close the child db handle also self.assertRaises(db.DBError, c.__next__) def test04_close_massive(self): dbenv = db.DBEnv() dbenv.open(self.homeDir, db.DB_INIT_CDB| db.DB_CREATE |db.DB_THREAD|db.DB_INIT_MPOOL, 0o666) dbs=[db.DB(dbenv) for i in range(16)] cursors=[] for i in dbs : i.open(self.filename, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0o666) dbs[10].put("test","this is a test") dbs[10].put("test2","another test") dbs[10].put("test3","another one") self.assertEqual(dbs[4].get("test"), "this is a test", "put!=get") for i in dbs : cursors.extend([i.cursor() for j in range(32)]) for i in dbs[::3] : i.close() for i in cursors[::3] : i.close() # Check for missing exception in DB! (after DB close) self.assertRaises(db.DBError, dbs[9].get, "test") # Check for missing exception in DBCursor! (after DB close) self.assertRaises(db.DBError, cursors[101].first) cursors[80].first() next(cursors[80]) dbenv.close() # This "close" should close the child db handle also # Check for missing exception! (after DBEnv close) self.assertRaises(db.DBError, cursors[80].__next__) def test05_close_dbenv_delete_db_success(self): dbenv = db.DBEnv() dbenv.open(self.homeDir, db.DB_INIT_CDB| db.DB_CREATE |db.DB_THREAD|db.DB_INIT_MPOOL, 0o666) d = db.DB(dbenv) d.open(self.filename, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0o666) dbenv.close() # This "close" should close the child db handle also del d try: import gc except ImportError: gc = None if gc: # force d.__del__ [DB_dealloc] to be called gc.collect() def test06_close_txn_before_dup_cursor(self) : dbenv = db.DBEnv() dbenv.open(self.homeDir,db.DB_INIT_TXN | db.DB_INIT_MPOOL | db.DB_INIT_LOG | db.DB_CREATE) d = db.DB(dbenv) txn = dbenv.txn_begin() d.open(self.filename, dbtype = db.DB_HASH, flags = db.DB_CREATE, txn=txn) d.put("XXX", "yyy", txn=txn) txn.commit() txn = dbenv.txn_begin() c1 = d.cursor(txn) c2 = c1.dup() self.assertEqual(("XXX", "yyy"), c1.first()) # Not interested in warnings about implicit close. import warnings with warnings.catch_warnings() : warnings.filterwarnings("ignore") txn.commit() self.assertRaises(db.DBCursorClosedError, c2.first) def test07_close_db_before_sequence(self): import os.path path=os.path.join(self.homeDir,self.filename) d = db.DB() d.open(path, db.DB_BTREE, db.DB_CREATE | db.DB_THREAD, 0o666) dbs=db.DBSequence(d) d.close() # This "close" should close the child DBSequence also dbs.close() # If not closed, core dump (in Berkeley DB 4.6.*) #---------------------------------------------------------------------- def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(DBEnvClosedEarlyCrash)) return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib3/bsddb/test/test_dbobj.py0000644000000000000000000000767212363206611020150 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. 
Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ import os, string import unittest from .test_all import db, dbobj, test_support, get_new_environment_path, \ get_new_database_path #---------------------------------------------------------------------- class dbobjTestCase(unittest.TestCase): """Verify that dbobj.DB and dbobj.DBEnv work properly""" db_name = 'test-dbobj.db' def setUp(self): self.homeDir = get_new_environment_path() def tearDown(self): if hasattr(self, 'db'): del self.db if hasattr(self, 'env'): del self.env test_support.rmtree(self.homeDir) def test01_both(self): class TestDBEnv(dbobj.DBEnv): pass class TestDB(dbobj.DB): def put(self, key, *args, **kwargs): key = key.upper() # call our parent classes put method with an upper case key return dbobj.DB.put(self, key, *args, **kwargs) self.env = TestDBEnv() self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL) self.db = TestDB(self.env) self.db.open(self.db_name, db.DB_HASH, db.DB_CREATE) self.db.put('spam', 'eggs') self.assertEqual(self.db.get('spam'), None, "overridden dbobj.DB.put() method failed [1]") self.assertEqual(self.db.get('SPAM'), 'eggs', "overridden dbobj.DB.put() method failed [2]") self.db.close() self.env.close() def test02_dbobj_dict_interface(self): self.env = dbobj.DBEnv() self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL) self.db = dbobj.DB(self.env) self.db.open(self.db_name+'02', db.DB_HASH, db.DB_CREATE) # __setitem__ self.db['spam'] = 'eggs' # __len__ self.assertEqual(len(self.db), 1) # __getitem__ self.assertEqual(self.db['spam'], 'eggs') # __del__ del self.db['spam'] self.assertEqual(self.db.get('spam'), None, "dbobj __del__ failed") self.db.close() self.env.close() def test03_dbobj_type_before_open(self): # Ensure this doesn't cause a segfault. self.assertRaises(db.DBInvalidArgError, db.DB().type) #---------------------------------------------------------------------- def test_suite(): return unittest.makeSuite(dbobjTestCase) if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib3/bsddb/test/test_thread.py0000644000000000000000000004153212363206574020340 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. 
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """TestCases for multi-threaded access to a DB. """ import os import sys import time import errno from random import random DASH = '-' try: WindowsError except NameError: class WindowsError(Exception): pass import unittest from .test_all import db, dbutils, test_support, verbose, have_threads, \ get_new_environment_path, get_new_database_path if have_threads : from threading import Thread if sys.version_info[0] < 3 : from threading import currentThread else : from threading import current_thread as currentThread #---------------------------------------------------------------------- class BaseThreadedTestCase(unittest.TestCase): dbtype = db.DB_UNKNOWN # must be set in derived class dbopenflags = 0 dbsetflags = 0 envflags = 0 def setUp(self): if verbose: dbutils._deadlock_VerboseFile = sys.stdout self.homeDir = get_new_environment_path() self.env = db.DBEnv() self.setEnvOpts() self.env.open(self.homeDir, self.envflags | db.DB_CREATE) self.filename = self.__class__.__name__ + '.db' self.d = db.DB(self.env) if self.dbsetflags: self.d.set_flags(self.dbsetflags) self.d.open(self.filename, self.dbtype, self.dbopenflags|db.DB_CREATE) def tearDown(self): self.d.close() self.env.close() test_support.rmtree(self.homeDir) def setEnvOpts(self): pass def makeData(self, key): return DASH.join([key] * 5) #---------------------------------------------------------------------- class ConcurrentDataStoreBase(BaseThreadedTestCase): dbopenflags = db.DB_THREAD envflags = db.DB_THREAD | db.DB_INIT_CDB | db.DB_INIT_MPOOL readers = 0 # derived class should set writers = 0 records = 1000 def test01_1WriterMultiReaders(self): if verbose: print('\n', '-=' * 30) print("Running %s.test01_1WriterMultiReaders..." 
                  % self.__class__.__name__)

        keys = list(range(self.records))
        import random
        random.shuffle(keys)
        records_per_writer = self.records // self.writers
        readers_per_writer = self.readers // self.writers
        self.assertEqual(self.records, self.writers * records_per_writer)
        self.assertEqual(self.readers, self.writers * readers_per_writer)
        self.assertTrue((records_per_writer % readers_per_writer) == 0)
        readers = []
        for x in range(self.readers):
            rt = Thread(target=self.readerThread,
                        args=(self.d, x),
                        name='reader %d' % x,
                        )  #verbose = verbose
            if sys.version_info[0] < 3:
                rt.setDaemon(True)
            else:
                rt.daemon = True
            readers.append(rt)

        writers = []
        for x in range(self.writers):
            a = keys[records_per_writer * x:records_per_writer * (x + 1)]
            a.sort()  # Generate conflicts
            b = readers[readers_per_writer * x:readers_per_writer * (x + 1)]
            wt = Thread(target=self.writerThread,
                        args=(self.d, a, b),
                        name='writer %d' % x,
                        )  #verbose = verbose
            writers.append(wt)

        for t in writers:
            if sys.version_info[0] < 3:
                t.setDaemon(True)
            else:
                t.daemon = True
            t.start()

        for t in writers:
            t.join()
        for t in readers:
            t.join()

    def writerThread(self, d, keys, readers):
        if sys.version_info[0] < 3:
            name = currentThread().getName()
        else:
            name = currentThread().name

        if verbose:
            # "start" and "stop" were undefined here; report the actual
            # key range handed to this writer instead.
            print("%s: creating records %d - %d" % (name, keys[0], keys[-1]))

        count = len(keys) // len(readers)
        count2 = count
        for x in keys:
            key = '%04d' % x
            dbutils.DeadlockWrap(d.put, key, self.makeData(key),
                                 max_retries=12)
            if verbose and x % 100 == 0:
                print("%s: records %d - %d finished" % (name, keys[0], x))

            count2 -= 1
            if not count2:
                readers.pop().start()
                count2 = count

        if verbose:
            print("%s: finished creating records" % name)
        if verbose:
            print("%s: thread finished" % name)

    def readerThread(self, d, readerNum):
        if sys.version_info[0] < 3:
            name = currentThread().getName()
        else:
            name = currentThread().name

        for i in range(5):
            c = d.cursor()
            count = 0
            rec = c.first()
            while rec:
                count += 1
                key, data = rec
                self.assertEqual(self.makeData(key), data)
                rec = next(c)
            if verbose:
                print("%s: found %d records" % (name, count))
            c.close()

        if verbose:
            print("%s: thread finished" % name)

class BTreeConcurrentDataStore(ConcurrentDataStoreBase):
    dbtype = db.DB_BTREE
    writers = 2
    readers = 10
    records = 1000

class HashConcurrentDataStore(ConcurrentDataStoreBase):
    dbtype = db.DB_HASH
    writers = 2
    readers = 10
    records = 1000

#----------------------------------------------------------------------
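# Editor's note: the writer threads above funnel every put() through
# dbutils.DeadlockWrap, which retries an operation when Berkeley DB reports
# a deadlock.  A hedged sketch of that retry pattern follows (a simplified,
# hypothetical stand-in, not the actual dbutils implementation):
def _demo_deadlock_retry(function, *args, max_retries=12, **kwargs):
    import time as _time
    sleeptime = 0.01
    for attempt in range(max_retries + 1):
        try:
            return function(*args, **kwargs)
        except db.DBLockDeadlockError:
            if attempt == max_retries:
                raise                  # give up after the allowed retries
            _time.sleep(sleeptime)     # back off and try again
            sleeptime *= 2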
class SimpleThreadedBase(BaseThreadedTestCase):
    dbopenflags = db.DB_THREAD
    envflags = db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK
    readers = 10
    writers = 2
    records = 1000

    def setEnvOpts(self):
        self.env.set_lk_detect(db.DB_LOCK_DEFAULT)

    def test02_SimpleLocks(self):
        if verbose:
            print('\n', '-=' * 30)
            print("Running %s.test02_SimpleLocks..."
                  % self.__class__.__name__)

        keys = list(range(self.records))
        import random
        random.shuffle(keys)
        records_per_writer = self.records // self.writers
        readers_per_writer = self.readers // self.writers
        self.assertEqual(self.records, self.writers * records_per_writer)
        self.assertEqual(self.readers, self.writers * readers_per_writer)
        self.assertTrue((records_per_writer % readers_per_writer) == 0)
        readers = []
        for x in range(self.readers):
            rt = Thread(target=self.readerThread,
                        args=(self.d, x),
                        name='reader %d' % x,
                        )  #verbose = verbose
            if sys.version_info[0] < 3:
                rt.setDaemon(True)
            else:
                rt.daemon = True
            readers.append(rt)

        writers = []
        for x in range(self.writers):
            a = keys[records_per_writer * x:records_per_writer * (x + 1)]
            a.sort()  # Generate conflicts
            b = readers[readers_per_writer * x:readers_per_writer * (x + 1)]
            wt = Thread(target=self.writerThread,
                        args=(self.d, a, b),
                        name='writer %d' % x,
                        )  #verbose = verbose
            writers.append(wt)

        for t in writers:
            if sys.version_info[0] < 3:
                t.setDaemon(True)
            else:
                t.daemon = True
            t.start()

        for t in writers:
            t.join()
        for t in readers:
            t.join()

    def writerThread(self, d, keys, readers):
        if sys.version_info[0] < 3:
            name = currentThread().getName()
        else:
            name = currentThread().name

        if verbose:
            # "start" and "stop" were undefined here; report the actual
            # key range handed to this writer instead.
            print("%s: creating records %d - %d" % (name, keys[0], keys[-1]))

        count = len(keys) // len(readers)
        count2 = count
        for x in keys:
            key = '%04d' % x
            dbutils.DeadlockWrap(d.put, key, self.makeData(key),
                                 max_retries=12)
            if verbose and x % 100 == 0:
                print("%s: records %d - %d finished" % (name, keys[0], x))

            count2 -= 1
            if not count2:
                readers.pop().start()
                count2 = count

        if verbose:
            print("%s: thread finished" % name)

    def readerThread(self, d, readerNum):
        if sys.version_info[0] < 3:
            name = currentThread().getName()
        else:
            name = currentThread().name

        c = d.cursor()
        count = 0
        rec = dbutils.DeadlockWrap(c.first, max_retries=10)
        while rec:
            count += 1
            key, data = rec
            self.assertEqual(self.makeData(key), data)
            rec = dbutils.DeadlockWrap(c.__next__, max_retries=10)
        if verbose:
            print("%s: found %d records" % (name, count))
        c.close()

        if verbose:
            print("%s: thread finished" % name)

class BTreeSimpleThreaded(SimpleThreadedBase):
    dbtype = db.DB_BTREE

class HashSimpleThreaded(SimpleThreadedBase):
    dbtype = db.DB_HASH

#----------------------------------------------------------------------

class ThreadedTransactionsBase(BaseThreadedTestCase):
    dbopenflags = db.DB_THREAD | db.DB_AUTO_COMMIT
    envflags = (db.DB_THREAD |
                db.DB_INIT_MPOOL |
                db.DB_INIT_LOCK |
                db.DB_INIT_LOG |
                db.DB_INIT_TXN
                )
    readers = 0
    writers = 0
    records = 2000
    txnFlag = 0

    def setEnvOpts(self):
        #self.env.set_lk_detect(db.DB_LOCK_DEFAULT)
        pass

    def test03_ThreadedTransactions(self):
        if verbose:
            print('\n', '-=' * 30)
            print("Running %s.test03_ThreadedTransactions..." % \
                  self.__class__.__name__)
        keys = list(range(self.records))
        import random
        random.shuffle(keys)
        records_per_writer = self.records // self.writers
        readers_per_writer = self.readers // self.writers
        self.assertEqual(self.records, self.writers * records_per_writer)
        self.assertEqual(self.readers, self.writers * readers_per_writer)
        self.assertTrue((records_per_writer % readers_per_writer) == 0)

        readers = []
        for x in range(self.readers):
            rt = Thread(target=self.readerThread,
                        args=(self.d, x),
                        name='reader %d' % x,
                        )  #verbose = verbose
            if sys.version_info[0] < 3:
                rt.setDaemon(True)
            else:
                rt.daemon = True
            readers.append(rt)

        writers = []
        for x in range(self.writers):
            a = keys[records_per_writer * x:records_per_writer * (x + 1)]
            b = readers[readers_per_writer * x:readers_per_writer * (x + 1)]
            wt = Thread(target=self.writerThread,
                        args=(self.d, a, b),
                        name='writer %d' % x,
                        )  #verbose = verbose
            writers.append(wt)

        dt = Thread(target=self.deadlockThread)
        if sys.version_info[0] < 3:
            dt.setDaemon(True)
        else:
            dt.daemon = True
        dt.start()

        for t in writers:
            if sys.version_info[0] < 3:
                t.setDaemon(True)
            else:
                t.daemon = True
            t.start()

        for t in writers:
            t.join()
        for t in readers:
            t.join()

        self.doLockDetect = False
        dt.join()

    def writerThread(self, d, keys, readers):
        if sys.version_info[0] < 3:
            name = currentThread().getName()
        else:
            name = currentThread().name

        count = len(keys) // len(readers)
        while len(keys):
            try:
                txn = self.env.txn_begin(None, self.txnFlag)
                keys2 = keys[:count]
                for x in keys2:
                    key = '%04d' % x
                    d.put(key, self.makeData(key), txn)
                    if verbose and x % 100 == 0:
                        # "start" was undefined here; report the first key
                        # of the current batch instead.
                        print("%s: records %d - %d finished" %
                              (name, keys2[0], x))
                txn.commit()
                keys = keys[count:]
                readers.pop().start()
            except (db.DBLockDeadlockError, db.DBLockNotGrantedError) as val:
                if verbose:
                    print("%s: Aborting transaction (%s)" %
                          (name, val.args[1]))
                txn.abort()

        if verbose:
            print("%s: thread finished" % name)

    def readerThread(self, d, readerNum):
        if sys.version_info[0] < 3:
            name = currentThread().getName()
        else:
            name = currentThread().name

        finished = False
        while not finished:
            try:
                txn = self.env.txn_begin(None, self.txnFlag)
                c = d.cursor(txn)
                count = 0
                rec = c.first()
                while rec:
                    count += 1
                    key, data = rec
                    self.assertEqual(self.makeData(key), data)
                    rec = next(c)
                if verbose:
                    print("%s: found %d records" % (name, count))
                c.close()
                txn.commit()
                finished = True
            except (db.DBLockDeadlockError, db.DBLockNotGrantedError) as val:
                if verbose:
                    print("%s: Aborting transaction (%s)" %
                          (name, val.args[1]))
                c.close()
                txn.abort()

        if verbose:
            print("%s: thread finished" % name)

    def deadlockThread(self):
        self.doLockDetect = True
        while self.doLockDetect:
            time.sleep(0.05)
            try:
                aborted = self.env.lock_detect(
                    db.DB_LOCK_RANDOM, db.DB_LOCK_CONFLICT)
                if verbose and aborted:
                    print("deadlock: Aborted %d deadlocked transaction(s)"
                          % aborted)
            except db.DBError:
                pass

class BTreeThreadedTransactions(ThreadedTransactionsBase):
    dbtype = db.DB_BTREE
    writers = 2
    readers = 10
    records = 1000

class HashThreadedTransactions(ThreadedTransactionsBase):
    dbtype = db.DB_HASH
    writers = 2
    readers = 10
    records = 1000

class BTreeThreadedNoWaitTransactions(ThreadedTransactionsBase):
    dbtype = db.DB_BTREE
    writers = 2
    readers = 10
    records = 1000
    txnFlag = db.DB_TXN_NOWAIT

class HashThreadedNoWaitTransactions(ThreadedTransactionsBase):
    dbtype = db.DB_HASH
    writers = 2
    readers = 10
    records = 1000
    txnFlag = db.DB_TXN_NOWAIT

#----------------------------------------------------------------------

def test_suite():
    suite = unittest.TestSuite()

    if have_threads:
suite.addTest(unittest.makeSuite(BTreeConcurrentDataStore)) suite.addTest(unittest.makeSuite(HashConcurrentDataStore)) suite.addTest(unittest.makeSuite(BTreeSimpleThreaded)) suite.addTest(unittest.makeSuite(HashSimpleThreaded)) suite.addTest(unittest.makeSuite(BTreeThreadedTransactions)) suite.addTest(unittest.makeSuite(HashThreadedTransactions)) suite.addTest(unittest.makeSuite(BTreeThreadedNoWaitTransactions)) suite.addTest(unittest.makeSuite(HashThreadedNoWaitTransactions)) else: print("Threads not available, skipping thread tests.") return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib3/bsddb/test/test_recno.py0000644000000000000000000002335212363206622020171 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """TestCases for exercising a Recno DB. 
""" import os, sys import errno from pprint import pprint import unittest from .test_all import db, test_support, verbose, get_new_environment_path, get_new_database_path letters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' #---------------------------------------------------------------------- class SimpleRecnoTestCase(unittest.TestCase): if sys.version_info < (2, 7) : def assertIsInstance(self, obj, datatype, msg=None) : return self.assertEqual(type(obj), datatype, msg=msg) def assertGreaterEqual(self, a, b, msg=None) : return self.assertTrue(a>=b, msg=msg) def setUp(self): self.filename = get_new_database_path() self.homeDir = None def tearDown(self): test_support.unlink(self.filename) if self.homeDir: test_support.rmtree(self.homeDir) def test01_basic(self): d = db.DB() get_returns_none = d.set_get_returns_none(2) d.set_get_returns_none(get_returns_none) d.open(self.filename, db.DB_RECNO, db.DB_CREATE) for x in letters: recno = d.append(x * 60) self.assertIsInstance(recno, int) self.assertGreaterEqual(recno, 1) if verbose: print(recno, end=' ') if verbose: print() stat = d.stat() if verbose: pprint(stat) for recno in range(1, len(d)+1): data = d[recno] if verbose: print(data) self.assertIsInstance(data, str) self.assertEqual(data, d.get(recno)) try: data = d[0] # This should raise a KeyError!?!?! except db.DBInvalidArgError as val: self.assertEqual(val.args[0], db.EINVAL) if verbose: print(val) else: self.fail("expected exception") # test that has_key raises DB exceptions (fixed in pybsddb 4.3.2) try: 0 in d except db.DBError as val: pass else: self.fail("has_key did not raise a proper exception") try: data = d[100] except KeyError: pass else: self.fail("expected exception") try: data = d.get(100) except db.DBNotFoundError as val: if get_returns_none: self.fail("unexpected exception") else: self.assertEqual(data, None) keys = list(d.keys()) if verbose: print(keys) self.assertIsInstance(keys, list) self.assertIsInstance(keys[0], int) self.assertEqual(len(keys), len(d)) items = list(d.items()) if verbose: pprint(items) self.assertIsInstance(items, list) self.assertIsInstance(items[0], tuple) self.assertEqual(len(items[0]), 2) self.assertIsInstance(items[0][0], int) self.assertIsInstance(items[0][1], str) self.assertEqual(len(items), len(d)) self.assertTrue(25 in d) del d[25] self.assertFalse(25 in d) d.delete(13) self.assertFalse(13 in d) data = d.get_both(26, "z" * 60) self.assertEqual(data, "z" * 60, 'was %r' % data) if verbose: print(data) fd = d.fd() if verbose: print(fd) c = d.cursor() rec = c.first() while rec: if verbose: print(rec) rec = next(c) c.set(50) rec = c.current() if verbose: print(rec) c.put(-1, "a replacement record", db.DB_CURRENT) c.set(50) rec = c.current() self.assertEqual(rec, (50, "a replacement record")) if verbose: print(rec) rec = c.set_range(30) if verbose: print(rec) # test that non-existent key lookups work (and that # DBC_set_range doesn't have a memleak under valgrind) rec = c.set_range(999999) self.assertEqual(rec, None) if verbose: print(rec) c.close() d.close() d = db.DB() d.open(self.filename) c = d.cursor() # put a record beyond the consecutive end of the recno's d[100] = "way out there" self.assertEqual(d[100], "way out there") try: data = d[99] except KeyError: pass else: self.fail("expected exception") try: d.get(99) except db.DBKeyEmptyError as val: if get_returns_none: self.fail("unexpected DBKeyEmptyError exception") else: self.assertEqual(val.args[0], db.DB_KEYEMPTY) if verbose: print(val) else: if not get_returns_none: 
self.fail("expected exception") rec = c.set(40) while rec: if verbose: print(rec) rec = next(c) c.close() d.close() def test02_WithSource(self): """ A Recno file that is given a "backing source file" is essentially a simple ASCII file. Normally each record is delimited by \n and so is just a line in the file, but you can set a different record delimiter if needed. """ homeDir = get_new_environment_path() self.homeDir = homeDir source = os.path.join(homeDir, 'test_recno.txt') if not os.path.isdir(homeDir): os.mkdir(homeDir) f = open(source, 'w') # create the file f.close() d = db.DB() # This is the default value, just checking if both int d.set_re_delim(0x0A) d.set_re_delim('\n') # and char can be used... d.set_re_source(source) d.open(self.filename, db.DB_RECNO, db.DB_CREATE) data = "The quick brown fox jumped over the lazy dog".split() for datum in data: d.append(datum) d.sync() d.close() # get the text from the backing source f = open(source, 'r') text = f.read() f.close() text = text.strip() if verbose: print(text) print(data) print(text.split('\n')) self.assertEqual(text.split('\n'), data) # open as a DB again d = db.DB() d.set_re_source(source) d.open(self.filename, db.DB_RECNO) d[3] = 'reddish-brown' d[8] = 'comatose' d.sync() d.close() f = open(source, 'r') text = f.read() f.close() text = text.strip() if verbose: print(text) print(text.split('\n')) self.assertEqual(text.split('\n'), "The quick reddish-brown fox jumped over the comatose dog".split()) def test03_FixedLength(self): d = db.DB() d.set_re_len(40) # fixed length records, 40 bytes long d.set_re_pad('-') # sets the pad character... d.set_re_pad(45) # ...test both int and char d.open(self.filename, db.DB_RECNO, db.DB_CREATE) for x in letters: d.append(x * 35) # These will be padded d.append('.' * 40) # this one will be exact try: # this one will fail d.append('bad' * 20) except db.DBInvalidArgError as val: self.assertEqual(val.args[0], db.EINVAL) if verbose: print(val) else: self.fail("expected exception") c = d.cursor() rec = c.first() while rec: if verbose: print(rec) rec = next(c) c.close() d.close() def test04_get_size_empty(self) : d = db.DB() d.open(self.filename, dbtype=db.DB_RECNO, flags=db.DB_CREATE) row_id = d.append(' ') self.assertEqual(1, d.get_size(key=row_id)) row_id = d.append('') self.assertEqual(0, d.get_size(key=row_id)) #---------------------------------------------------------------------- def test_suite(): return unittest.makeSuite(SimpleRecnoTestCase) if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib3/bsddb/test/test_join.py0000644000000000000000000001130112363206603020010 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. 
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """TestCases for using the DB.join and DBCursor.join_item methods. """ import os import unittest from .test_all import db, dbshelve, test_support, verbose, \ get_new_environment_path, get_new_database_path #---------------------------------------------------------------------- ProductIndex = [ ('apple', "Convenience Store"), ('blueberry', "Farmer's Market"), ('shotgun', "S-Mart"), # Aisle 12 ('pear', "Farmer's Market"), ('chainsaw', "S-Mart"), # "Shop smart. Shop S-Mart!" ('strawberry', "Farmer's Market"), ] ColorIndex = [ ('blue', "blueberry"), ('red', "apple"), ('red', "chainsaw"), ('red', "strawberry"), ('yellow', "peach"), ('yellow', "pear"), ('black', "shotgun"), ] class JoinTestCase(unittest.TestCase): keytype = '' def setUp(self): self.filename = self.__class__.__name__ + '.db' self.homeDir = get_new_environment_path() self.env = db.DBEnv() self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK ) def tearDown(self): self.env.close() test_support.rmtree(self.homeDir) def test01_join(self): if verbose: print('\n', '-=' * 30) print("Running %s.test01_join..." % \ self.__class__.__name__) # create and populate primary index priDB = db.DB(self.env) priDB.open(self.filename, "primary", db.DB_BTREE, db.DB_CREATE) list(map(lambda t, priDB=priDB: priDB.put(*t), ProductIndex)) # create and populate secondary index secDB = db.DB(self.env) secDB.set_flags(db.DB_DUP | db.DB_DUPSORT) secDB.open(self.filename, "secondary", db.DB_BTREE, db.DB_CREATE) list(map(lambda t, secDB=secDB: secDB.put(*t), ColorIndex)) sCursor = None jCursor = None try: # lets look up all of the red Products sCursor = secDB.cursor() # Don't do the .set() in an assert, or you can get a bogus failure # when running python -O tmp = sCursor.set('red') self.assertTrue(tmp) # FIXME: jCursor doesn't properly hold a reference to its # cursors, if they are closed before jcursor is used it # can cause a crash. jCursor = priDB.join([sCursor]) if jCursor.get(0) != ('apple', "Convenience Store"): self.fail("join cursor positioned wrong") if jCursor.join_item() != 'chainsaw': self.fail("DBCursor.join_item returned wrong item") if jCursor.get(0)[0] != 'strawberry': self.fail("join cursor returned wrong thing") if jCursor.get(0): # there were only three red items to return self.fail("join cursor returned too many items") finally: if jCursor: jCursor.close() if sCursor: sCursor.close() priDB.close() secDB.close() def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(JoinTestCase)) return suite bsddb3-6.1.0/Lib3/bsddb/test/test_basics.py0000644000000000000000000010716312363206614020333 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. 
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """ Basic TestCases for BTree and hash DBs, with and without a DBEnv, with various DB flags, etc. """ import os import errno import string from pprint import pprint import unittest import time import sys from .test_all import db, test_support, verbose, get_new_environment_path, \ get_new_database_path DASH = '-' #---------------------------------------------------------------------- class VersionTestCase(unittest.TestCase): def test00_version(self): info = db.version() if verbose: print('\n', '-=' * 20) print('bsddb.db.version(): %s' % (info, )) print(db.DB_VERSION_STRING) print('-=' * 20) self.assertEqual(info, (db.DB_VERSION_MAJOR, db.DB_VERSION_MINOR, db.DB_VERSION_PATCH)) #---------------------------------------------------------------------- class BasicTestCase(unittest.TestCase): dbtype = db.DB_UNKNOWN # must be set in derived class cachesize = (0, 1024*1024, 1) dbopenflags = 0 dbsetflags = 0 dbmode = 0o660 dbname = None useEnv = 0 envflags = 0 envsetflags = 0 _numKeys = 1002 # PRIVATE. NOTE: must be an even value def setUp(self): if self.useEnv: self.homeDir=get_new_environment_path() try: self.env = db.DBEnv() self.env.set_lg_max(1024*1024) self.env.set_tx_max(30) self._t = int(time.time()) self.env.set_tx_timestamp(self._t) self.env.set_flags(self.envsetflags, 1) self.env.open(self.homeDir, self.envflags | db.DB_CREATE) self.filename = "test" # Yes, a bare except is intended, since we're re-raising the exc. except: test_support.rmtree(self.homeDir) raise else: self.env = None self.filename = get_new_database_path() # create and open the DB self.d = db.DB(self.env) if not self.useEnv : self.d.set_cachesize(*self.cachesize) cachesize = self.d.get_cachesize() self.assertEqual(cachesize[0], self.cachesize[0]) self.assertEqual(cachesize[2], self.cachesize[2]) # Berkeley DB expands the cache 25% accounting overhead, # if the cache is small. 
self.assertEqual(125, int(100.0*cachesize[1]/self.cachesize[1])) self.d.set_flags(self.dbsetflags) if self.dbname: self.d.open(self.filename, self.dbname, self.dbtype, self.dbopenflags|db.DB_CREATE, self.dbmode) else: self.d.open(self.filename, # try out keyword args mode = self.dbmode, dbtype = self.dbtype, flags = self.dbopenflags|db.DB_CREATE) if not self.useEnv: self.assertRaises(db.DBInvalidArgError, self.d.set_cachesize, *self.cachesize) self.populateDB() def tearDown(self): self.d.close() if self.env is not None: self.env.close() test_support.rmtree(self.homeDir) else: os.remove(self.filename) def populateDB(self, _txn=None): d = self.d for x in range(self._numKeys//2): key = '%04d' % (self._numKeys - x) # insert keys in reverse order data = self.makeData(key) d.put(key, data, _txn) d.put('empty value', '', _txn) for x in range(self._numKeys//2-1): key = '%04d' % x # and now some in forward order data = self.makeData(key) d.put(key, data, _txn) if _txn: _txn.commit() num = len(d) if verbose: print("created %d records" % num) def makeData(self, key): return DASH.join([key] * 5) #---------------------------------------- def test01_GetsAndPuts(self): d = self.d if verbose: print('\n', '-=' * 30) print("Running %s.test01_GetsAndPuts..." % self.__class__.__name__) for key in ['0001', '0100', '0400', '0700', '0999']: data = d.get(key) if verbose: print(data) self.assertEqual(d.get('0321'), '0321-0321-0321-0321-0321') # By default non-existent keys return None... self.assertEqual(d.get('abcd'), None) # ...but they raise exceptions in other situations. Call # set_get_returns_none() to change it. try: d.delete('abcd') except db.DBNotFoundError as val: self.assertEqual(val.args[0], db.DB_NOTFOUND) if verbose: print(val) else: self.fail("expected exception") d.put('abcd', 'a new record') self.assertEqual(d.get('abcd'), 'a new record') d.put('abcd', 'same key') if self.dbsetflags & db.DB_DUP: self.assertEqual(d.get('abcd'), 'a new record') else: self.assertEqual(d.get('abcd'), 'same key') try: d.put('abcd', 'this should fail', flags=db.DB_NOOVERWRITE) except db.DBKeyExistError as val: self.assertEqual(val.args[0], db.DB_KEYEXIST) if verbose: print(val) else: self.fail("expected exception") if self.dbsetflags & db.DB_DUP: self.assertEqual(d.get('abcd'), 'a new record') else: self.assertEqual(d.get('abcd'), 'same key') d.sync() d.close() del d self.d = db.DB(self.env) if self.dbname: self.d.open(self.filename, self.dbname) else: self.d.open(self.filename) d = self.d self.assertEqual(d.get('0321'), '0321-0321-0321-0321-0321') if self.dbsetflags & db.DB_DUP: self.assertEqual(d.get('abcd'), 'a new record') else: self.assertEqual(d.get('abcd'), 'same key') rec = d.get_both('0555', '0555-0555-0555-0555-0555') if verbose: print(rec) self.assertEqual(d.get_both('0555', 'bad data'), None) # test default value data = d.get('bad key', 'bad data') self.assertEqual(data, 'bad data') # any object can pass through data = d.get('bad key', self) self.assertEqual(data, self) s = d.stat() self.assertEqual(type(s), type({})) if verbose: print('d.stat() returned this dictionary:') pprint(s) #---------------------------------------- def test02_DictionaryMethods(self): d = self.d if verbose: print('\n', '-=' * 30) print("Running %s.test02_DictionaryMethods..." 
% \ self.__class__.__name__) for key in ['0002', '0101', '0401', '0701', '0998']: data = d[key] self.assertEqual(data, self.makeData(key)) if verbose: print(data) self.assertEqual(len(d), self._numKeys) keys = list(d.keys()) self.assertEqual(len(keys), self._numKeys) self.assertEqual(type(keys), type([])) d['new record'] = 'a new record' self.assertEqual(len(d), self._numKeys+1) keys = list(d.keys()) self.assertEqual(len(keys), self._numKeys+1) d['new record'] = 'a replacement record' self.assertEqual(len(d), self._numKeys+1) keys = list(d.keys()) self.assertEqual(len(keys), self._numKeys+1) if verbose: print("the first 10 keys are:") pprint(keys[:10]) self.assertEqual(d['new record'], 'a replacement record') # We check also the positional parameter self.assertEqual(d.has_key('0001', None), 1) # We check also the keyword parameter self.assertEqual(d.has_key('spam', txn=None), 0) items = list(d.items()) self.assertEqual(len(items), self._numKeys+1) self.assertEqual(type(items), type([])) self.assertEqual(type(items[0]), type(())) self.assertEqual(len(items[0]), 2) if verbose: print("the first 10 items are:") pprint(items[:10]) values = list(d.values()) self.assertEqual(len(values), self._numKeys+1) self.assertEqual(type(values), type([])) if verbose: print("the first 10 values are:") pprint(values[:10]) #---------------------------------------- def test02b_SequenceMethods(self): d = self.d for key in ['0002', '0101', '0401', '0701', '0998']: data = d[key] self.assertEqual(data, self.makeData(key)) if verbose: print(data) self.assertTrue(hasattr(d, "__contains__")) self.assertTrue("0401" in d) self.assertFalse("1234" in d) #---------------------------------------- def test03_SimpleCursorStuff(self, get_raises_error=0, set_raises_error=0): if verbose: print('\n', '-=' * 30) print("Running %s.test03_SimpleCursorStuff (get_error %s, set_error %s)..." 
% \ (self.__class__.__name__, get_raises_error, set_raises_error)) if self.env and self.dbopenflags & db.DB_AUTO_COMMIT: txn = self.env.txn_begin() else: txn = None c = self.d.cursor(txn=txn) rec = c.first() count = 0 while rec is not None: count = count + 1 if verbose and count % 100 == 0: print(rec) try: rec = next(c) except db.DBNotFoundError as val: if get_raises_error: self.assertEqual(val.args[0], db.DB_NOTFOUND) if verbose: print(val) rec = None else: self.fail("unexpected DBNotFoundError") self.assertEqual(c.get_current_size(), len(c.current()[1]), "%s != len(%r)" % (c.get_current_size(), c.current()[1])) self.assertEqual(count, self._numKeys) rec = c.last() count = 0 while rec is not None: count = count + 1 if verbose and count % 100 == 0: print(rec) try: rec = c.prev() except db.DBNotFoundError as val: if get_raises_error: self.assertEqual(val.args[0], db.DB_NOTFOUND) if verbose: print(val) rec = None else: self.fail("unexpected DBNotFoundError") self.assertEqual(count, self._numKeys) rec = c.set('0505') rec2 = c.current() self.assertEqual(rec, rec2) self.assertEqual(rec[0], '0505') self.assertEqual(rec[1], self.makeData('0505')) self.assertEqual(c.get_current_size(), len(rec[1])) # make sure we get empty values properly rec = c.set('empty value') self.assertEqual(rec[1], '') self.assertEqual(c.get_current_size(), 0) try: n = c.set('bad key') except db.DBNotFoundError as val: self.assertEqual(val.args[0], db.DB_NOTFOUND) if verbose: print(val) else: if set_raises_error: self.fail("expected exception") if n is not None: self.fail("expected None: %r" % (n,)) rec = c.get_both('0404', self.makeData('0404')) self.assertEqual(rec, ('0404', self.makeData('0404'))) try: n = c.get_both('0404', 'bad data') except db.DBNotFoundError as val: self.assertEqual(val.args[0], db.DB_NOTFOUND) if verbose: print(val) else: if get_raises_error: self.fail("expected exception") if n is not None: self.fail("expected None: %r" % (n,)) if self.d.get_type() == db.DB_BTREE: rec = c.set_range('011') if verbose: print("searched for '011', found: ", rec) rec = c.set_range('011',dlen=0,doff=0) if verbose: print("searched (partial) for '011', found: ", rec) if rec[1] != '': self.fail('expected empty data portion') ev = c.set_range('empty value') if verbose: print("search for 'empty value' returned", ev) if ev[1] != '': self.fail('empty value lookup failed') c.set('0499') c.delete() try: rec = c.current() except db.DBKeyEmptyError as val: if get_raises_error: self.assertEqual(val.args[0], db.DB_KEYEMPTY) if verbose: print(val) else: self.fail("unexpected DBKeyEmptyError") else: if get_raises_error: self.fail('DBKeyEmptyError exception expected') next(c) c2 = c.dup(db.DB_POSITION) self.assertEqual(c.current(), c2.current()) c2.put('', 'a new value', db.DB_CURRENT) self.assertEqual(c.current(), c2.current()) self.assertEqual(c.current()[1], 'a new value') c2.put('', 'er', db.DB_CURRENT, dlen=0, doff=5) self.assertEqual(c2.current()[1], 'a newer value') c.close() c2.close() if txn: txn.commit() # time to abuse the closed cursors and hope we don't crash methods_to_test = { 'current': (), 'delete': (), 'dup': (db.DB_POSITION,), 'first': (), 'get': (0,), 'next': (), 'prev': (), 'last': (), 'put':('', 'spam', db.DB_CURRENT), 'set': ("0505",), } for method, args in list(methods_to_test.items()): try: if verbose: print("attempting to use a closed cursor's %s method" % \ method) # a bug may cause a NULL pointer dereference... 
getattr(c, method)(*args) except db.DBError as val: self.assertEqual(val.args[0], 0) if verbose: print(val) else: self.fail("no exception raised when using a buggy cursor's" "%s method" % method) # # free cursor referencing a closed database, it should not barf: # oldcursor = self.d.cursor(txn=txn) self.d.close() # this would originally cause a segfault when the cursor for a # closed database was cleaned up. it should not anymore. # SF pybsddb bug id 667343 del oldcursor def test03b_SimpleCursorWithoutGetReturnsNone0(self): # same test but raise exceptions instead of returning None if verbose: print('\n', '-=' * 30) print("Running %s.test03b_SimpleCursorStuffWithoutGetReturnsNone..." % \ self.__class__.__name__) old = self.d.set_get_returns_none(0) self.assertEqual(old, 2) self.test03_SimpleCursorStuff(get_raises_error=1, set_raises_error=1) def test03b_SimpleCursorWithGetReturnsNone1(self): # same test but raise exceptions instead of returning None if verbose: print('\n', '-=' * 30) print("Running %s.test03b_SimpleCursorStuffWithoutGetReturnsNone..." % \ self.__class__.__name__) old = self.d.set_get_returns_none(1) self.test03_SimpleCursorStuff(get_raises_error=0, set_raises_error=1) def test03c_SimpleCursorGetReturnsNone2(self): # same test but raise exceptions instead of returning None if verbose: print('\n', '-=' * 30) print("Running %s.test03c_SimpleCursorStuffWithoutSetReturnsNone..." % \ self.__class__.__name__) old = self.d.set_get_returns_none(1) self.assertEqual(old, 2) old = self.d.set_get_returns_none(2) self.assertEqual(old, 1) self.test03_SimpleCursorStuff(get_raises_error=0, set_raises_error=0) def test03d_SimpleCursorPriority(self) : c = self.d.cursor() c.set_priority(db.DB_PRIORITY_VERY_LOW) # Positional self.assertEqual(db.DB_PRIORITY_VERY_LOW, c.get_priority()) c.set_priority(priority=db.DB_PRIORITY_HIGH) # Keyword self.assertEqual(db.DB_PRIORITY_HIGH, c.get_priority()) c.close() #---------------------------------------- def test04_PartialGetAndPut(self): d = self.d if verbose: print('\n', '-=' * 30) print("Running %s.test04_PartialGetAndPut..." % \ self.__class__.__name__) key = "partialTest" data = "1" * 1000 + "2" * 1000 d.put(key, data) self.assertEqual(d.get(key), data) self.assertEqual(d.get(key, dlen=20, doff=990), ("1" * 10) + ("2" * 10)) d.put("partialtest2", ("1" * 30000) + "robin" ) self.assertEqual(d.get("partialtest2", dlen=5, doff=30000), "robin") # There seems to be a bug in DB here... Commented out the test for # now. ##self.assertEqual(d.get("partialtest2", dlen=5, doff=30010), "") if self.dbsetflags != db.DB_DUP: # Partial put with duplicate records requires a cursor d.put(key, "0000", dlen=2000, doff=0) self.assertEqual(d.get(key), "0000") d.put(key, "1111", dlen=1, doff=2) self.assertEqual(d.get(key), "0011110") #---------------------------------------- def test05_GetSize(self): d = self.d if verbose: print('\n', '-=' * 30) print("Running %s.test05_GetSize..." % self.__class__.__name__) for i in range(1, 50000, 500): key = "size%s" % i #print "before ", i, d.put(key, "1" * i) #print "after", self.assertEqual(d.get_size(key), i) #print "done" #---------------------------------------- def test06_Truncate(self): d = self.d if verbose: print('\n', '-=' * 30) print("Running %s.test06_Truncate..." 
% self.__class__.__name__) d.put("abcde", "ABCDE"); num = d.truncate() self.assertTrue(num >= 1, "truncate returned <= 0 on non-empty database") num = d.truncate() self.assertEqual(num, 0, "truncate on empty DB returned nonzero (%r)" % (num,)) #---------------------------------------- def test07_verify(self): # Verify bug solved in 4.7.3pre8 self.d.close() d = db.DB(self.env) d.verify(self.filename) #---------------------------------------- def test08_exists(self) : self.d.put("abcde", "ABCDE") self.assertTrue(self.d.exists("abcde") == True, "DB->exists() returns wrong value") self.assertTrue(self.d.exists("x") == False, "DB->exists() returns wrong value") #---------------------------------------- def test_compact(self) : d = self.d self.assertEqual(0, d.compact(flags=db.DB_FREELIST_ONLY)) self.assertEqual(0, d.compact(flags=db.DB_FREELIST_ONLY)) d.put("abcde", "ABCDE"); d.put("bcde", "BCDE"); d.put("abc", "ABC"); d.put("monty", "python"); d.delete("abc") d.delete("bcde") d.compact(start='abcde', stop='monty', txn=None, compact_fillpercent=42, compact_pages=1, compact_timeout=50000000, flags=db.DB_FREELIST_ONLY|db.DB_FREE_SPACE) #---------------------------------------- #---------------------------------------------------------------------- class BasicBTreeTestCase(BasicTestCase): dbtype = db.DB_BTREE class BasicHashTestCase(BasicTestCase): dbtype = db.DB_HASH class BasicBTreeWithThreadFlagTestCase(BasicTestCase): dbtype = db.DB_BTREE dbopenflags = db.DB_THREAD class BasicHashWithThreadFlagTestCase(BasicTestCase): dbtype = db.DB_HASH dbopenflags = db.DB_THREAD class BasicWithEnvTestCase(BasicTestCase): dbopenflags = db.DB_THREAD useEnv = 1 envflags = db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK #---------------------------------------- def test09_EnvRemoveAndRename(self): if not self.env: return if verbose: print('\n', '-=' * 30) print("Running %s.test09_EnvRemoveAndRename..." % self.__class__.__name__) # can't rename or remove an open DB self.d.close() newname = self.filename + '.renamed' self.env.dbrename(self.filename, None, newname) self.env.dbremove(newname) #---------------------------------------- class BasicBTreeWithEnvTestCase(BasicWithEnvTestCase): dbtype = db.DB_BTREE class BasicHashWithEnvTestCase(BasicWithEnvTestCase): dbtype = db.DB_HASH #---------------------------------------------------------------------- class BasicTransactionTestCase(BasicTestCase): if sys.version_info < (2, 7) : def assertIn(self, a, b, msg=None) : return self.assertTrue(a in b, msg=msg) dbopenflags = db.DB_THREAD | db.DB_AUTO_COMMIT useEnv = 1 envflags = (db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK | db.DB_INIT_TXN) envsetflags = db.DB_AUTO_COMMIT def tearDown(self): self.txn.commit() BasicTestCase.tearDown(self) def populateDB(self): txn = self.env.txn_begin() BasicTestCase.populateDB(self, _txn=txn) self.txn = self.env.txn_begin() def test06_Transactions(self): d = self.d if verbose: print('\n', '-=' * 30) print("Running %s.test06_Transactions..." 
% self.__class__.__name__) self.assertEqual(d.get('new rec', txn=self.txn), None) d.put('new rec', 'this is a new record', self.txn) self.assertEqual(d.get('new rec', txn=self.txn), 'this is a new record') self.txn.abort() self.assertEqual(d.get('new rec'), None) self.txn = self.env.txn_begin() self.assertEqual(d.get('new rec', txn=self.txn), None) d.put('new rec', 'this is a new record', self.txn) self.assertEqual(d.get('new rec', txn=self.txn), 'this is a new record') self.txn.commit() self.assertEqual(d.get('new rec'), 'this is a new record') self.txn = self.env.txn_begin() c = d.cursor(self.txn) rec = c.first() count = 0 while rec is not None: count = count + 1 if verbose and count % 100 == 0: print(rec) rec = next(c) self.assertEqual(count, self._numKeys+1) c.close() # Cursors *MUST* be closed before commit! self.txn.commit() # flush pending updates self.env.txn_checkpoint (0, 0, 0) statDict = self.env.log_stat(0); self.assertIn('magic', statDict) self.assertIn('version', statDict) self.assertIn('cur_file', statDict) self.assertIn('region_nowait', statDict) # must have at least one log file present: logs = self.env.log_archive(db.DB_ARCH_ABS | db.DB_ARCH_LOG) self.assertNotEqual(logs, None) for log in logs: if verbose: print('log file: ' + log) logs = self.env.log_archive(db.DB_ARCH_REMOVE) self.assertTrue(not logs) self.txn = self.env.txn_begin() #---------------------------------------- def test08_exists(self) : txn = self.env.txn_begin() self.d.put("abcde", "ABCDE", txn=txn) txn.commit() txn = self.env.txn_begin() self.assertTrue(self.d.exists("abcde", txn=txn) == True, "DB->exists() returns wrong value") self.assertTrue(self.d.exists("x", txn=txn) == False, "DB->exists() returns wrong value") txn.abort() #---------------------------------------- def test09_TxnTruncate(self): d = self.d if verbose: print('\n', '-=' * 30) print("Running %s.test09_TxnTruncate..." 
% self.__class__.__name__) d.put("abcde", "ABCDE"); txn = self.env.txn_begin() num = d.truncate(txn) self.assertTrue(num >= 1, "truncate returned <= 0 on non-empty database") num = d.truncate(txn) self.assertEqual(num, 0, "truncate on empty DB returned nonzero (%r)" % (num,)) txn.commit() #---------------------------------------- def test10_TxnLateUse(self): txn = self.env.txn_begin() txn.abort() try: txn.abort() except db.DBError as e: pass else: raise RuntimeError("DBTxn.abort() called after DB_TXN no longer valid w/o an exception") txn = self.env.txn_begin() txn.commit() try: txn.commit() except db.DBError as e: pass else: raise RuntimeError("DBTxn.commit() called after DB_TXN no longer valid w/o an exception") #---------------------------------------- def test_txn_name(self) : txn=self.env.txn_begin() self.assertEqual(txn.get_name(), "") txn.set_name("XXYY") self.assertEqual(txn.get_name(), "XXYY") txn.set_name("") self.assertEqual(txn.get_name(), "") txn.abort() #---------------------------------------- def test_txn_set_timeout(self) : txn=self.env.txn_begin() txn.set_timeout(1234567, db.DB_SET_LOCK_TIMEOUT) txn.set_timeout(2345678, flags=db.DB_SET_TXN_TIMEOUT) txn.abort() #---------------------------------------- def test_get_tx_max(self) : self.assertEqual(self.env.get_tx_max(), 30) def test_get_tx_timestamp(self) : self.assertEqual(self.env.get_tx_timestamp(), self._t) class BTreeTransactionTestCase(BasicTransactionTestCase): dbtype = db.DB_BTREE class HashTransactionTestCase(BasicTransactionTestCase): dbtype = db.DB_HASH #---------------------------------------------------------------------- class BTreeRecnoTestCase(BasicTestCase): dbtype = db.DB_BTREE dbsetflags = db.DB_RECNUM def test09_RecnoInBTree(self): d = self.d if verbose: print('\n', '-=' * 30) print("Running %s.test09_RecnoInBTree..." % self.__class__.__name__) rec = d.get(200) self.assertEqual(type(rec), type(())) self.assertEqual(len(rec), 2) if verbose: print("Record #200 is ", rec) c = d.cursor() c.set('0200') num = c.get_recno() self.assertEqual(type(num), type(1)) if verbose: print("recno of d['0200'] is ", num) rec = c.current() self.assertEqual(c.set_recno(num), rec) c.close() class BTreeRecnoWithThreadFlagTestCase(BTreeRecnoTestCase): dbopenflags = db.DB_THREAD #---------------------------------------------------------------------- class BasicDUPTestCase(BasicTestCase): dbsetflags = db.DB_DUP def test10_DuplicateKeys(self): d = self.d if verbose: print('\n', '-=' * 30) print("Running %s.test10_DuplicateKeys..." 
% \ self.__class__.__name__) d.put("dup0", "before") for x in "The quick brown fox jumped over the lazy dog.".split(): d.put("dup1", x) d.put("dup2", "after") data = d.get("dup1") self.assertEqual(data, "The") if verbose: print(data) c = d.cursor() rec = c.set("dup1") self.assertEqual(rec, ('dup1', 'The')) next_reg = next(c) self.assertEqual(next_reg, ('dup1', 'quick')) rec = c.set("dup1") count = c.count() self.assertEqual(count, 9) next_dup = c.next_dup() self.assertEqual(next_dup, ('dup1', 'quick')) rec = c.set('dup1') while rec is not None: if verbose: print(rec) rec = c.next_dup() c.set('dup1') rec = c.next_nodup() self.assertNotEqual(rec[0], 'dup1') if verbose: print(rec) c.close() class BTreeDUPTestCase(BasicDUPTestCase): dbtype = db.DB_BTREE class HashDUPTestCase(BasicDUPTestCase): dbtype = db.DB_HASH class BTreeDUPWithThreadTestCase(BasicDUPTestCase): dbtype = db.DB_BTREE dbopenflags = db.DB_THREAD class HashDUPWithThreadTestCase(BasicDUPTestCase): dbtype = db.DB_HASH dbopenflags = db.DB_THREAD #---------------------------------------------------------------------- class BasicMultiDBTestCase(BasicTestCase): dbname = 'first' def otherType(self): if self.dbtype == db.DB_BTREE: return db.DB_HASH else: return db.DB_BTREE def test11_MultiDB(self): d1 = self.d if verbose: print('\n', '-=' * 30) print("Running %s.test11_MultiDB..." % self.__class__.__name__) d2 = db.DB(self.env) d2.open(self.filename, "second", self.dbtype, self.dbopenflags|db.DB_CREATE) d3 = db.DB(self.env) d3.open(self.filename, "third", self.otherType(), self.dbopenflags|db.DB_CREATE) for x in "The quick brown fox jumped over the lazy dog".split(): d2.put(x, self.makeData(x)) for x in string.letters: d3.put(x, x*70) d1.sync() d2.sync() d3.sync() d1.close() d2.close() d3.close() self.d = d1 = d2 = d3 = None self.d = d1 = db.DB(self.env) d1.open(self.filename, self.dbname, flags = self.dbopenflags) d2 = db.DB(self.env) d2.open(self.filename, "second", flags = self.dbopenflags) d3 = db.DB(self.env) d3.open(self.filename, "third", flags = self.dbopenflags) c1 = d1.cursor() c2 = d2.cursor() c3 = d3.cursor() count = 0 rec = c1.first() while rec is not None: count = count + 1 if verbose and (count % 50) == 0: print(rec) rec = next(c1) self.assertEqual(count, self._numKeys) count = 0 rec = c2.first() while rec is not None: count = count + 1 if verbose: print(rec) rec = next(c2) self.assertEqual(count, 9) count = 0 rec = c3.first() while rec is not None: count = count + 1 if verbose: print(rec) rec = next(c3) self.assertEqual(count, len(string.letters)) c1.close() c2.close() c3.close() d2.close() d3.close() # Strange things happen if you try to use Multiple DBs per file without a # DBEnv with MPOOL and LOCKing... 
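# ----------------------------------------
# Illustrative sketch, not part of the original suite: the MultiDB test
# classes below always pair multiple-databases-per-file with a DBEnv
# opened with DB_INIT_MPOOL and DB_INIT_LOCK, because the named
# databases share one physical file and need the shared buffer pool and
# locking that only an environment provides. The home directory and the
# database/file names used here are illustrative assumptions, not
# values taken from the tests above.
def _multidb_with_env_sketch(home="multidb-example-home"):
    import os
    if not os.path.exists(home):
        os.makedirs(home)
    env = db.DBEnv()
    env.open(home, db.DB_CREATE | db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK)
    d1 = db.DB(env)
    d1.open("example.db", "first", db.DB_BTREE, db.DB_CREATE)
    d2 = db.DB(env)
    d2.open("example.db", "second", db.DB_HASH, db.DB_CREATE)
    # Same key, independent namespaces: each named database keeps its own data.
    d1.put("key", "btree value")
    d2.put("key", "hash value")
    assert d1.get("key") == "btree value"
    assert d2.get("key") == "hash value"
    d1.close()
    d2.close()
    env.close()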
class BTreeMultiDBTestCase(BasicMultiDBTestCase): dbtype = db.DB_BTREE dbopenflags = db.DB_THREAD useEnv = 1 envflags = db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK class HashMultiDBTestCase(BasicMultiDBTestCase): dbtype = db.DB_HASH dbopenflags = db.DB_THREAD useEnv = 1 envflags = db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK class PrivateObject(unittest.TestCase) : def tearDown(self) : del self.obj def test01_DefaultIsNone(self) : self.assertEqual(self.obj.get_private(), None) def test02_assignment(self) : a = "example of private object" self.obj.set_private(a) b = self.obj.get_private() self.assertTrue(a is b) # Object identity def test03_leak_assignment(self) : a = "example of private object" refcount = sys.getrefcount(a) self.obj.set_private(a) self.assertEqual(refcount+1, sys.getrefcount(a)) self.obj.set_private(None) self.assertEqual(refcount, sys.getrefcount(a)) def test04_leak_GC(self) : a = "example of private object" refcount = sys.getrefcount(a) self.obj.set_private(a) self.obj = None self.assertEqual(refcount, sys.getrefcount(a)) class DBEnvPrivateObject(PrivateObject) : def setUp(self) : self.obj = db.DBEnv() class DBPrivateObject(PrivateObject) : def setUp(self) : self.obj = db.DB() class CrashAndBurn(unittest.TestCase) : #def test01_OpenCrash(self) : # # See http://bugs.python.org/issue3307 # self.assertRaises(db.DBInvalidArgError, db.DB, None, 65535) if db.version() < (4, 8) : def test02_DBEnv_dealloc(self): # http://bugs.python.org/issue3885 import gc self.assertRaises(db.DBInvalidArgError, db.DBEnv, ~db.DB_RPCCLIENT) gc.collect() #---------------------------------------------------------------------- #---------------------------------------------------------------------- def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(VersionTestCase)) suite.addTest(unittest.makeSuite(BasicBTreeTestCase)) suite.addTest(unittest.makeSuite(BasicHashTestCase)) suite.addTest(unittest.makeSuite(BasicBTreeWithThreadFlagTestCase)) suite.addTest(unittest.makeSuite(BasicHashWithThreadFlagTestCase)) suite.addTest(unittest.makeSuite(BasicBTreeWithEnvTestCase)) suite.addTest(unittest.makeSuite(BasicHashWithEnvTestCase)) suite.addTest(unittest.makeSuite(BTreeTransactionTestCase)) suite.addTest(unittest.makeSuite(HashTransactionTestCase)) suite.addTest(unittest.makeSuite(BTreeRecnoTestCase)) suite.addTest(unittest.makeSuite(BTreeRecnoWithThreadFlagTestCase)) suite.addTest(unittest.makeSuite(BTreeDUPTestCase)) suite.addTest(unittest.makeSuite(HashDUPTestCase)) suite.addTest(unittest.makeSuite(BTreeDUPWithThreadTestCase)) suite.addTest(unittest.makeSuite(HashDUPWithThreadTestCase)) suite.addTest(unittest.makeSuite(BTreeMultiDBTestCase)) suite.addTest(unittest.makeSuite(HashMultiDBTestCase)) suite.addTest(unittest.makeSuite(DBEnvPrivateObject)) suite.addTest(unittest.makeSuite(DBPrivateObject)) suite.addTest(unittest.makeSuite(CrashAndBurn)) return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib3/bsddb/test/test_dbtables.py0000644000000000000000000003610712363206606020647 0ustar rootroot00000000000000#!/usr/bin/env python # #----------------------------------------------------------------------- # A test suite for the table interface built on bsddb.db #----------------------------------------------------------------------- # # Copyright (C) 2000, 2001 by Autonomous Zone Industries # Copyright (C) 2002 Gregory P. Smith # # March 20, 2000 # # License: This is free software. 
You may use this software for any # purpose including modification/redistribution, so long as # this header remains intact and that you do not claim any # rights of ownership or authorship of this software. This # software has been tested, but no warranty is expressed or # implied. # # -- Gregory P. Smith # # $Id: test_dbtables.py,v 53e5f052c511 2012/01/16 18:18:15 jcea $ import os, re, sys if sys.version_info[0] < 3 : try: import cPickle pickle = cPickle except ImportError: import pickle else : import pickle import unittest from .test_all import db, dbtables, test_support, verbose, \ get_new_environment_path, get_new_database_path #---------------------------------------------------------------------- class TableDBTestCase(unittest.TestCase): db_name = 'test-table.db' def setUp(self): import sys if sys.version_info[0] >= 3 : from .test_all import do_proxy_db_py3k self._flag_proxy_db_py3k = do_proxy_db_py3k(False) self.testHomeDir = get_new_environment_path() self.tdb = dbtables.bsdTableDB( filename='tabletest.db', dbhome=self.testHomeDir, create=1) def tearDown(self): self.tdb.close() import sys if sys.version_info[0] >= 3 : from .test_all import do_proxy_db_py3k do_proxy_db_py3k(self._flag_proxy_db_py3k) test_support.rmtree(self.testHomeDir) def test01(self): tabname = "test01" colname = 'cool numbers' try: self.tdb.Drop(tabname) except dbtables.TableDBError: pass self.tdb.CreateTable(tabname, [colname]) import sys if sys.version_info[0] < 3 : self.tdb.Insert(tabname, {colname: pickle.dumps(3.14159, 1)}) else : self.tdb.Insert(tabname, {colname: pickle.dumps(3.14159, 1).decode("iso8859-1")}) # 8 bits if verbose: self.tdb._db_print() values = self.tdb.Select( tabname, [colname], conditions={colname: None}) import sys if sys.version_info[0] < 3 : colval = pickle.loads(values[0][colname]) else : colval = pickle.loads(bytes(values[0][colname], "iso8859-1")) self.assertTrue(colval > 3.141) self.assertTrue(colval < 3.142) def test02(self): tabname = "test02" col0 = 'coolness factor' col1 = 'but can it fly?'
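# Note, an annotation added here for clarity rather than original test
# code: dbtables column values are stored as strings, so the binary
# pickles built below with pickle.dumps(..., 1) are decoded as
# "iso8859-1" under Python 3 (an 8-bit-safe round-trip, as in test01
# above) before being handed to Insert().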
col2 = 'Species' import sys if sys.version_info[0] < 3 : testinfo = [ {col0: pickle.dumps(8, 1), col1: 'no', col2: 'Penguin'}, {col0: pickle.dumps(-1, 1), col1: 'no', col2: 'Turkey'}, {col0: pickle.dumps(9, 1), col1: 'yes', col2: 'SR-71A Blackbird'} ] else : testinfo = [ {col0: pickle.dumps(8, 1).decode("iso8859-1"), col1: 'no', col2: 'Penguin'}, {col0: pickle.dumps(-1, 1).decode("iso8859-1"), col1: 'no', col2: 'Turkey'}, {col0: pickle.dumps(9, 1).decode("iso8859-1"), col1: 'yes', col2: 'SR-71A Blackbird'} ] try: self.tdb.Drop(tabname) except dbtables.TableDBError: pass self.tdb.CreateTable(tabname, [col0, col1, col2]) for row in testinfo : self.tdb.Insert(tabname, row) import sys if sys.version_info[0] < 3 : values = self.tdb.Select(tabname, [col2], conditions={col0: lambda x: pickle.loads(x) >= 8}) else : values = self.tdb.Select(tabname, [col2], conditions={col0: lambda x: pickle.loads(bytes(x, "iso8859-1")) >= 8}) self.assertEqual(len(values), 2) if values[0]['Species'] == 'Penguin' : self.assertEqual(values[1]['Species'], 'SR-71A Blackbird') elif values[0]['Species'] == 'SR-71A Blackbird' : self.assertEqual(values[1]['Species'], 'Penguin') else : if verbose: print("values= %r" % (values,)) raise RuntimeError("Wrong values returned!") def test03(self): tabname = "test03" try: self.tdb.Drop(tabname) except dbtables.TableDBError: pass if verbose: print('...before CreateTable...') self.tdb._db_print() self.tdb.CreateTable(tabname, ['a', 'b', 'c', 'd', 'e']) if verbose: print('...after CreateTable...') self.tdb._db_print() self.tdb.Drop(tabname) if verbose: print('...after Drop...') self.tdb._db_print() self.tdb.CreateTable(tabname, ['a', 'b', 'c', 'd', 'e']) try: self.tdb.Insert(tabname, {'a': "", 'e': pickle.dumps([{4:5, 6:7}, 'foo'], 1), 'f': "Zero"}) self.fail('Expected an exception') except dbtables.TableDBError: pass try: self.tdb.Select(tabname, [], conditions={'foo': '123'}) self.fail('Expected an exception') except dbtables.TableDBError: pass self.tdb.Insert(tabname, {'a': '42', 'b': "bad", 'c': "meep", 'e': 'Fuzzy wuzzy was a bear'}) self.tdb.Insert(tabname, {'a': '581750', 'b': "good", 'd': "bla", 'c': "black", 'e': 'fuzzy was here'}) self.tdb.Insert(tabname, {'a': '800000', 'b': "good", 'd': "bla", 'c': "black", 'e': 'Fuzzy wuzzy is a bear'}) if verbose: self.tdb._db_print() # this should return two rows values = self.tdb.Select(tabname, ['b', 'a', 'd'], conditions={'e': re.compile('wuzzy').search, 'a': re.compile('^[0-9]+$').match}) self.assertEqual(len(values), 2) # now lets delete one of them and try again self.tdb.Delete(tabname, conditions={'b': dbtables.ExactCond('good')}) values = self.tdb.Select( tabname, ['a', 'd', 'b'], conditions={'e': dbtables.PrefixCond('Fuzzy')}) self.assertEqual(len(values), 1) self.assertEqual(values[0]['d'], None) values = self.tdb.Select(tabname, ['b'], conditions={'c': lambda c: c == 'meep'}) self.assertEqual(len(values), 1) self.assertEqual(values[0]['b'], "bad") def test04_MultiCondSelect(self): tabname = "test04_MultiCondSelect" try: self.tdb.Drop(tabname) except dbtables.TableDBError: pass self.tdb.CreateTable(tabname, ['a', 'b', 'c', 'd', 'e']) try: self.tdb.Insert(tabname, {'a': "", 'e': pickle.dumps([{4:5, 6:7}, 'foo'], 1), 'f': "Zero"}) self.fail('Expected an exception') except dbtables.TableDBError: pass self.tdb.Insert(tabname, {'a': "A", 'b': "B", 'c': "C", 'd': "D", 'e': "E"}) self.tdb.Insert(tabname, {'a': "-A", 'b': "-B", 'c': "-C", 'd': "-D", 'e': "-E"}) self.tdb.Insert(tabname, {'a': "A-", 'b': "B-", 'c': "C-", 'd': "D-", 'e': 
"E-"}) if verbose: self.tdb._db_print() # This select should return 0 rows. it is designed to test # the bug identified and fixed in sourceforge bug # 590449 # (Big Thanks to "Rob Tillotson (n9mtb)" for tracking this down # and supplying a fix!! This one caused many headaches to say # the least...) values = self.tdb.Select(tabname, ['b', 'a', 'd'], conditions={'e': dbtables.ExactCond('E'), 'a': dbtables.ExactCond('A'), 'd': dbtables.PrefixCond('-') } ) self.assertEqual(len(values), 0, values) def test_CreateOrExtend(self): tabname = "test_CreateOrExtend" self.tdb.CreateOrExtendTable( tabname, ['name', 'taste', 'filling', 'alcohol content', 'price']) try: self.tdb.Insert(tabname, {'taste': 'crap', 'filling': 'no', 'is it Guinness?': 'no'}) self.fail("Insert should've failed due to bad column name") except: pass self.tdb.CreateOrExtendTable(tabname, ['name', 'taste', 'is it Guinness?']) # these should both succeed as the table should contain the union of both sets of columns. self.tdb.Insert(tabname, {'taste': 'crap', 'filling': 'no', 'is it Guinness?': 'no'}) self.tdb.Insert(tabname, {'taste': 'great', 'filling': 'yes', 'is it Guinness?': 'yes', 'name': 'Guinness'}) def test_CondObjs(self): tabname = "test_CondObjs" self.tdb.CreateTable(tabname, ['a', 'b', 'c', 'd', 'e', 'p']) self.tdb.Insert(tabname, {'a': "the letter A", 'b': "the letter B", 'c': "is for cookie"}) self.tdb.Insert(tabname, {'a': "is for aardvark", 'e': "the letter E", 'c': "is for cookie", 'd': "is for dog"}) self.tdb.Insert(tabname, {'a': "the letter A", 'e': "the letter E", 'c': "is for cookie", 'p': "is for Python"}) values = self.tdb.Select( tabname, ['p', 'e'], conditions={'e': dbtables.PrefixCond('the l')}) self.assertEqual(len(values), 2, values) self.assertEqual(values[0]['e'], values[1]['e'], values) self.assertNotEqual(values[0]['p'], values[1]['p'], values) values = self.tdb.Select( tabname, ['d', 'a'], conditions={'a': dbtables.LikeCond('%aardvark%')}) self.assertEqual(len(values), 1, values) self.assertEqual(values[0]['d'], "is for dog", values) self.assertEqual(values[0]['a'], "is for aardvark", values) values = self.tdb.Select(tabname, None, {'b': dbtables.Cond(), 'e':dbtables.LikeCond('%letter%'), 'a':dbtables.PrefixCond('is'), 'd':dbtables.ExactCond('is for dog'), 'c':dbtables.PrefixCond('is for'), 'p':lambda s: not s}) self.assertEqual(len(values), 1, values) self.assertEqual(values[0]['d'], "is for dog", values) self.assertEqual(values[0]['a'], "is for aardvark", values) def test_Delete(self): tabname = "test_Delete" self.tdb.CreateTable(tabname, ['x', 'y', 'z']) # prior to 2001-05-09 there was a bug where Delete() would # fail if it encountered any rows that did not have values in # every column. 
# Hunted and Squashed by (Jukka Santala - donwulff@nic.fi) self.tdb.Insert(tabname, {'x': 'X1', 'y':'Y1'}) self.tdb.Insert(tabname, {'x': 'X2', 'y':'Y2', 'z': 'Z2'}) self.tdb.Delete(tabname, conditions={'x': dbtables.PrefixCond('X')}) values = self.tdb.Select(tabname, ['y'], conditions={'x': dbtables.PrefixCond('X')}) self.assertEqual(len(values), 0) def test_Modify(self): tabname = "test_Modify" self.tdb.CreateTable(tabname, ['Name', 'Type', 'Access']) self.tdb.Insert(tabname, {'Name': 'Index to MP3 files.doc', 'Type': 'Word', 'Access': '8'}) self.tdb.Insert(tabname, {'Name': 'Nifty.MP3', 'Access': '1'}) self.tdb.Insert(tabname, {'Type': 'Unknown', 'Access': '0'}) def set_type(type): if type is None: return 'MP3' return type def increment_access(count): return str(int(count)+1) def remove_value(value): return None self.tdb.Modify(tabname, conditions={'Access': dbtables.ExactCond('0')}, mappings={'Access': remove_value}) self.tdb.Modify(tabname, conditions={'Name': dbtables.LikeCond('%MP3%')}, mappings={'Type': set_type}) self.tdb.Modify(tabname, conditions={'Name': dbtables.LikeCond('%')}, mappings={'Access': increment_access}) try: self.tdb.Modify(tabname, conditions={'Name': dbtables.LikeCond('%')}, mappings={'Access': 'What is your quest?'}) except TypeError: # success, the string value in mappings isn't callable pass else: raise RuntimeError("why was TypeError not raised for bad callable?") # Delete key in select conditions values = self.tdb.Select( tabname, None, conditions={'Type': dbtables.ExactCond('Unknown')}) self.assertEqual(len(values), 1, values) self.assertEqual(values[0]['Name'], None, values) self.assertEqual(values[0]['Access'], None, values) # Modify value by select conditions values = self.tdb.Select( tabname, None, conditions={'Name': dbtables.ExactCond('Nifty.MP3')}) self.assertEqual(len(values), 1, values) self.assertEqual(values[0]['Type'], "MP3", values) self.assertEqual(values[0]['Access'], "2", values) # Make sure change applied only to select conditions values = self.tdb.Select( tabname, None, conditions={'Name': dbtables.LikeCond('%doc%')}) self.assertEqual(len(values), 1, values) self.assertEqual(values[0]['Type'], "Word", values) self.assertEqual(values[0]['Access'], "9", values) def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(TableDBTestCase)) return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib3/bsddb/test/test_compare.py0000644000000000000000000004013412363206627020513 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """ TestCases for python DB duplicate and Btree key comparison function. """ import sys, os, re from . import test_all from io import StringIO import unittest from .test_all import db, dbshelve, test_support, \ get_new_environment_path, get_new_database_path # Needed for python 3. "cmp" vanished in 3.0.1 def cmp(a, b) : if a==b : return 0 if a<b : return -1 return 1 """ Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
""" import unittest import os from .test_all import db, test_support, get_new_environment_path, get_new_database_path class DBSequenceTest(unittest.TestCase): def setUp(self): self.int_32_max = 0x100000000 self.homeDir = get_new_environment_path() self.filename = "test" self.dbenv = db.DBEnv() self.dbenv.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL, 0o666) self.d = db.DB(self.dbenv) self.d.open(self.filename, db.DB_BTREE, db.DB_CREATE, 0o666) def tearDown(self): if hasattr(self, 'seq'): self.seq.close() del self.seq if hasattr(self, 'd'): self.d.close() del self.d if hasattr(self, 'dbenv'): self.dbenv.close() del self.dbenv test_support.rmtree(self.homeDir) def test_get(self): self.seq = db.DBSequence(self.d, flags=0) start_value = 10 * self.int_32_max self.assertEqual(0xA00000000, start_value) self.assertEqual(None, self.seq.initial_value(start_value)) self.assertEqual(None, self.seq.open(key='id', txn=None, flags=db.DB_CREATE)) self.assertEqual(start_value, self.seq.get(5)) self.assertEqual(start_value + 5, self.seq.get()) def test_remove(self): self.seq = db.DBSequence(self.d, flags=0) self.assertEqual(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) self.assertEqual(None, self.seq.remove(txn=None, flags=0)) del self.seq def test_get_key(self): self.seq = db.DBSequence(self.d, flags=0) key = 'foo' self.assertEqual(None, self.seq.open(key=key, txn=None, flags=db.DB_CREATE)) self.assertEqual(key, self.seq.get_key()) def test_get_dbp(self): self.seq = db.DBSequence(self.d, flags=0) self.assertEqual(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) self.assertEqual(self.d, self.seq.get_dbp()) def test_cachesize(self): self.seq = db.DBSequence(self.d, flags=0) cashe_size = 10 self.assertEqual(None, self.seq.set_cachesize(cashe_size)) self.assertEqual(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) self.assertEqual(cashe_size, self.seq.get_cachesize()) def test_flags(self): self.seq = db.DBSequence(self.d, flags=0) flag = db.DB_SEQ_WRAP; self.assertEqual(None, self.seq.set_flags(flag)) self.assertEqual(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) self.assertEqual(flag, self.seq.get_flags() & flag) def test_range(self): self.seq = db.DBSequence(self.d, flags=0) seq_range = (10 * self.int_32_max, 11 * self.int_32_max - 1) self.assertEqual(None, self.seq.set_range(seq_range)) self.seq.initial_value(seq_range[0]) self.assertEqual(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) self.assertEqual(seq_range, self.seq.get_range()) def test_stat(self): self.seq = db.DBSequence(self.d, flags=0) self.assertEqual(None, self.seq.open(key='foo', txn=None, flags=db.DB_CREATE)) stat = self.seq.stat() for param in ('nowait', 'min', 'max', 'value', 'current', 'flags', 'cache_size', 'last_value', 'wait'): self.assertTrue(param in stat, "parameter %s isn't in stat info" % param) # This code checks a crash solved in Berkeley DB 4.7 def test_stat_crash(self) : d=db.DB() d.open(None,dbtype=db.DB_HASH,flags=db.DB_CREATE) # In RAM seq = db.DBSequence(d, flags=0) self.assertRaises(db.DBNotFoundError, seq.open, key='id', txn=None, flags=0) self.assertRaises(db.DBInvalidArgError, seq.stat) d.close() def test_64bits(self) : # We don't use both extremes because they are problematic value_plus=(1<<63)-2 self.assertEqual(9223372036854775806,value_plus) value_minus=(-1<<63)+1 # Two complement self.assertEqual(-9223372036854775807,value_minus) self.seq = db.DBSequence(self.d, flags=0) self.assertEqual(None, self.seq.initial_value(value_plus-1)) 
self.assertEqual(None, self.seq.open(key='id', txn=None, flags=db.DB_CREATE)) self.assertEqual(value_plus-1, self.seq.get(1)) self.assertEqual(value_plus, self.seq.get(1)) self.seq.remove(txn=None, flags=0) self.seq = db.DBSequence(self.d, flags=0) self.assertEqual(None, self.seq.initial_value(value_minus)) self.assertEqual(None, self.seq.open(key='id', txn=None, flags=db.DB_CREATE)) self.assertEqual(value_minus, self.seq.get(1)) self.assertEqual(value_minus+1, self.seq.get(1)) def test_multiple_close(self): self.seq = db.DBSequence(self.d) self.seq.close() # You can close a Sequence multiple times self.seq.close() self.seq.close() def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(DBSequenceTest)) return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib3/bsddb/test/test_all.py0000644000000000000000000005031312363206634017633 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """Run all test cases. 
""" import sys import os import unittest import bsddb3 as bsddb if sys.version_info[0] >= 3 : charset = "iso8859-1" # Full 8 bit class logcursor_py3k(object) : def __init__(self, env) : self._logcursor = env.log_cursor() def __getattr__(self, v) : return getattr(self._logcursor, v) def __next__(self) : v = getattr(self._logcursor, "next")() if v is not None : v = (v[0], v[1].decode(charset)) return v next = __next__ def first(self) : v = self._logcursor.first() if v is not None : v = (v[0], v[1].decode(charset)) return v def last(self) : v = self._logcursor.last() if v is not None : v = (v[0], v[1].decode(charset)) return v def prev(self) : v = self._logcursor.prev() if v is not None : v = (v[0], v[1].decode(charset)) return v def current(self) : v = self._logcursor.current() if v is not None : v = (v[0], v[1].decode(charset)) return v def set(self, lsn) : v = self._logcursor.set(lsn) if v is not None : v = (v[0], v[1].decode(charset)) return v class cursor_py3k(object) : def __init__(self, db, *args, **kwargs) : self._dbcursor = db.cursor(*args, **kwargs) def __getattr__(self, v) : return getattr(self._dbcursor, v) def _fix(self, v) : if v is None : return None key, value = v if isinstance(key, bytes) : key = key.decode(charset) return (key, value.decode(charset)) def __next__(self, flags=0, dlen=-1, doff=-1) : v = getattr(self._dbcursor, "next")(flags=flags, dlen=dlen, doff=doff) return self._fix(v) next = __next__ def previous(self) : v = self._dbcursor.previous() return self._fix(v) def last(self) : v = self._dbcursor.last() return self._fix(v) def set(self, k) : if isinstance(k, str) : k = bytes(k, charset) v = self._dbcursor.set(k) return self._fix(v) def set_recno(self, num) : v = self._dbcursor.set_recno(num) return self._fix(v) def set_range(self, k, dlen=-1, doff=-1) : if isinstance(k, str) : k = bytes(k, charset) v = self._dbcursor.set_range(k, dlen=dlen, doff=doff) return self._fix(v) def dup(self, flags=0) : cursor = self._dbcursor.dup(flags) return dup_cursor_py3k(cursor) def next_dup(self) : v = self._dbcursor.next_dup() return self._fix(v) def next_nodup(self) : v = self._dbcursor.next_nodup() return self._fix(v) def put(self, key, data, flags=0, dlen=-1, doff=-1) : if isinstance(key, str) : key = bytes(key, charset) if isinstance(data, str) : value = bytes(data, charset) return self._dbcursor.put(key, data, flags=flags, dlen=dlen, doff=doff) def current(self, flags=0, dlen=-1, doff=-1) : v = self._dbcursor.current(flags=flags, dlen=dlen, doff=doff) return self._fix(v) def first(self, flags=0, dlen=-1, doff=-1) : v = self._dbcursor.first(flags=flags, dlen=dlen, doff=doff) return self._fix(v) def pget(self, key=None, data=None, flags=0) : # Incorrect because key can be a bare number, # but enough to pass testsuite if isinstance(key, int) and (data is None) and (flags == 0) : flags = key key = None if isinstance(key, str) : key = bytes(key, charset) if isinstance(data, int) and (flags==0) : flags = data data = None if isinstance(data, str) : data = bytes(data, charset) v=self._dbcursor.pget(key=key, data=data, flags=flags) if v is not None : v1, v2, v3 = v if isinstance(v1, bytes) : v1 = v1.decode(charset) if isinstance(v2, bytes) : v2 = v2.decode(charset) v = (v1, v2, v3.decode(charset)) return v def join_item(self) : v = self._dbcursor.join_item() if v is not None : v = v.decode(charset) return v def get(self, *args, **kwargs) : l = len(args) if l == 2 : k, f = args if isinstance(k, str) : k = bytes(k, "iso8859-1") args = (k, f) elif l == 3 : k, d, f = args if isinstance(k, 
str) : k = bytes(k, charset) if isinstance(d, str) : d = bytes(d, charset) args =(k, d, f) v = self._dbcursor.get(*args, **kwargs) if v is not None : k, v = v if isinstance(k, bytes) : k = k.decode(charset) v = (k, v.decode(charset)) return v def get_both(self, key, value) : if isinstance(key, str) : key = bytes(key, charset) if isinstance(value, str) : value = bytes(value, charset) v=self._dbcursor.get_both(key, value) return self._fix(v) class dup_cursor_py3k(cursor_py3k) : def __init__(self, dbcursor) : self._dbcursor = dbcursor class DB_py3k(object) : def __init__(self, *args, **kwargs) : args2=[] for i in args : if isinstance(i, DBEnv_py3k) : i = i._dbenv args2.append(i) args = tuple(args2) for k, v in list(kwargs.items()) : if isinstance(v, DBEnv_py3k) : kwargs[k] = v._dbenv self._db = bsddb._db.DB_orig(*args, **kwargs) def __contains__(self, k) : if isinstance(k, str) : k = bytes(k, charset) return getattr(self._db, "has_key")(k) def __getitem__(self, k) : if isinstance(k, str) : k = bytes(k, charset) v = self._db[k] if v is not None : v = v.decode(charset) return v def __setitem__(self, k, v) : if isinstance(k, str) : k = bytes(k, charset) if isinstance(v, str) : v = bytes(v, charset) self._db[k] = v def __delitem__(self, k) : if isinstance(k, str) : k = bytes(k, charset) del self._db[k] def __getattr__(self, v) : return getattr(self._db, v) def __len__(self) : return len(self._db) def has_key(self, k, txn=None) : if isinstance(k, str) : k = bytes(k, charset) return self._db.has_key(k, txn=txn) def set_re_delim(self, c) : if isinstance(c, str) : # We can use a numeric value byte too c = bytes(c, charset) return self._db.set_re_delim(c) def set_re_pad(self, c) : if isinstance(c, str) : # We can use a numeric value byte too c = bytes(c, charset) return self._db.set_re_pad(c) def get_re_source(self) : source = self._db.get_re_source() return source.decode(charset) def put(self, key, data, txn=None, flags=0, dlen=-1, doff=-1) : if isinstance(key, str) : key = bytes(key, charset) if isinstance(data, str) : data = bytes(data, charset) return self._db.put(key, data, flags=flags, txn=txn, dlen=dlen, doff=doff) def append(self, value, txn=None) : if isinstance(value, str) : value = bytes(value, charset) return self._db.append(value, txn=txn) def get_size(self, key) : if isinstance(key, str) : key = bytes(key, charset) return self._db.get_size(key) def exists(self, key, *args, **kwargs) : if isinstance(key, str) : key = bytes(key, charset) return self._db.exists(key, *args, **kwargs) def get(self, key, default="MagicCookie", txn=None, flags=0, dlen=-1, doff=-1) : if isinstance(key, str) : key = bytes(key, charset) if default != "MagicCookie" : # Magic for 'test_get_none.py' v=self._db.get(key, default=default, txn=txn, flags=flags, dlen=dlen, doff=doff) else : v=self._db.get(key, txn=txn, flags=flags, dlen=dlen, doff=doff) if (v is not None) and isinstance(v, bytes) : v = v.decode(charset) return v def pget(self, key, txn=None) : if isinstance(key, str) : key = bytes(key, charset) v=self._db.pget(key, txn=txn) if v is not None : v1, v2 = v if isinstance(v1, bytes) : v1 = v1.decode(charset) v = (v1, v2.decode(charset)) return v def get_both(self, key, value, txn=None, flags=0) : if isinstance(key, str) : key = bytes(key, charset) if isinstance(value, str) : value = bytes(value, charset) v=self._db.get_both(key, value, txn=txn, flags=flags) if v is not None : v = v.decode(charset) return v def delete(self, key, txn=None) : if isinstance(key, str) : key = bytes(key, charset) return
self._db.delete(key, txn=txn) def keys(self) : k = list(self._db.keys()) if len(k) and isinstance(k[0], bytes) : return [i.decode(charset) for i in list(self._db.keys())] else : return k def items(self) : data = list(self._db.items()) if not len(data) : return data data2 = [] for k, v in data : if isinstance(k, bytes) : k = k.decode(charset) data2.append((k, v.decode(charset))) return data2 def associate(self, secondarydb, callback, flags=0, txn=None) : class associate_callback(object) : def __init__(self, callback) : self._callback = callback def callback(self, key, data) : if isinstance(key, str) : key = key.decode(charset) data = data.decode(charset) key = self._callback(key, data) if (key != bsddb._db.DB_DONOTINDEX) : if isinstance(key, str) : key = bytes(key, charset) elif isinstance(key, list) : key2 = [] for i in key : if isinstance(i, str) : i = bytes(i, charset) key2.append(i) key = key2 return key return self._db.associate(secondarydb._db, associate_callback(callback).callback, flags=flags, txn=txn) def cursor(self, txn=None, flags=0) : return cursor_py3k(self._db, txn=txn, flags=flags) def join(self, cursor_list) : cursor_list = [i._dbcursor for i in cursor_list] return dup_cursor_py3k(self._db.join(cursor_list)) class DBEnv_py3k(object) : def __init__(self, *args, **kwargs) : self._dbenv = bsddb._db.DBEnv_orig(*args, **kwargs) def __getattr__(self, v) : return getattr(self._dbenv, v) def log_cursor(self, flags=0) : return logcursor_py3k(self._dbenv) def get_lg_dir(self) : return self._dbenv.get_lg_dir().decode(charset) def get_tmp_dir(self) : return self._dbenv.get_tmp_dir().decode(charset) def get_data_dirs(self) : return tuple( (i.decode(charset) for i in self._dbenv.get_data_dirs())) class DBSequence_py3k(object) : def __init__(self, db, *args, **kwargs) : self._db=db self._dbsequence = bsddb._db.DBSequence_orig(db._db, *args, **kwargs) def __getattr__(self, v) : return getattr(self._dbsequence, v) def open(self, key, *args, **kwargs) : return self._dbsequence.open(bytes(key, charset), *args, **kwargs) def get_key(self) : return self._dbsequence.get_key().decode(charset) def get_dbp(self) : return self._db import string string.letters=[chr(i) for i in range(65,91)] bsddb._db.DBEnv_orig = bsddb._db.DBEnv bsddb._db.DB_orig = bsddb._db.DB bsddb._db.DBSequence_orig = bsddb._db.DBSequence def do_proxy_db_py3k(flag) : flag2 = do_proxy_db_py3k.flag do_proxy_db_py3k.flag = flag if flag : bsddb.DBEnv = bsddb.db.DBEnv = bsddb._db.DBEnv = DBEnv_py3k bsddb.DB = bsddb.db.DB = bsddb._db.DB = DB_py3k bsddb._db.DBSequence = DBSequence_py3k else : bsddb.DBEnv = bsddb.db.DBEnv = bsddb._db.DBEnv = bsddb._db.DBEnv_orig bsddb.DB = bsddb.db.DB = bsddb._db.DB = bsddb._db.DB_orig bsddb._db.DBSequence = bsddb._db.DBSequence_orig return flag2 do_proxy_db_py3k.flag = False do_proxy_db_py3k(True) from bsddb3 import db, dbtables, dbutils, dbshelve, \ hashopen, btopen, rnopen, dbobj if sys.version_info[0] < 3 : from test import test_support else : from test import support as test_support try: if sys.version_info[0] < 3 : from threading import Thread, currentThread del Thread, currentThread else : from threading import Thread, current_thread del Thread, current_thread have_threads = True except ImportError: have_threads = False verbose = 0 if 'verbose' in sys.argv: verbose = 1 sys.argv.remove('verbose') if 'silent' in sys.argv: # take care of old flag, just in case verbose = 0 sys.argv.remove('silent') def print_versions(): print() print('-=' * 38) print(db.DB_VERSION_STRING) print('bsddb.db.version(): 
%s' % (db.version(), )) if db.version() >= (5, 0) : print('bsddb.db.full_version(): %s' %repr(db.full_version())) print('bsddb.db.__version__: %s' % db.__version__) # Workaround for allowing generating an EGGs as a ZIP files. suffix="__" print('py module: %s' % getattr(bsddb, "__file"+suffix)) print('extension module: %s' % getattr(bsddb, "__file"+suffix)) print('Test working dir: %s' % get_test_path_prefix()) import platform print('python version: %s %s' % \ (sys.version.replace("\r", "").replace("\n", ""), \ platform.architecture()[0])) print('My pid: %s' % os.getpid()) print('-=' * 38) def get_new_path(name) : get_new_path.mutex.acquire() try : import os path=os.path.join(get_new_path.prefix, name+"_"+str(os.getpid())+"_"+str(get_new_path.num)) get_new_path.num+=1 finally : get_new_path.mutex.release() return path def get_new_environment_path() : path=get_new_path("environment") import os try: os.makedirs(path,mode=0o700) except os.error: test_support.rmtree(path) os.makedirs(path) return path def get_new_database_path() : path=get_new_path("database") import os if os.path.exists(path) : os.remove(path) return path # This path can be overriden via "set_test_path_prefix()". import os, os.path get_new_path.prefix=os.path.join(os.environ.get("TMPDIR", os.path.join(os.sep,"tmp")), "z-Berkeley_DB") get_new_path.num=0 def get_test_path_prefix() : return get_new_path.prefix def set_test_path_prefix(path) : get_new_path.prefix=path def remove_test_path_directory() : test_support.rmtree(get_new_path.prefix) if have_threads : import threading get_new_path.mutex=threading.Lock() del threading else : class Lock(object) : def acquire(self) : pass def release(self) : pass get_new_path.mutex=Lock() del Lock class PrintInfoFakeTest(unittest.TestCase): def testPrintVersions(self): print_versions() # This little hack is for when this module is run as main and all the # other modules import it so they will still be able to get the right # verbose setting. It's confusing but it works. if sys.version_info[0] < 3 : from . import test_all test_all.verbose = verbose else : import sys print("Work to do!", file=sys.stderr) def suite(module_prefix='', timing_check=None): test_modules = [ 'test_associate', 'test_basics', 'test_dbenv', 'test_db', 'test_compare', 'test_compat', 'test_cursor_pget_bug', 'test_dbobj', 'test_dbshelve', 'test_dbtables', 'test_distributed_transactions', 'test_early_close', 'test_fileid', 'test_get_none', 'test_join', 'test_lock', 'test_misc', 'test_pickle', 'test_queue', 'test_recno', 'test_replication', 'test_sequence', 'test_thread', ] alltests = unittest.TestSuite() for name in test_modules: #module = __import__(name) # Do it this way so that suite may be called externally via # python's Lib/test/test_bsddb3. module = __import__(module_prefix+name, globals(), locals(), name) alltests.addTest(module.test_suite()) if timing_check: alltests.addTest(unittest.makeSuite(timing_check)) return alltests def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(PrintInfoFakeTest)) return suite if __name__ == '__main__': print_versions() unittest.main(defaultTest='suite') bsddb3-6.1.0/Lib3/bsddb/test/test_db.py0000644000000000000000000001623212363206635017453 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. 
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ import unittest import os, glob from .test_all import db, test_support, get_new_environment_path, \ get_new_database_path #---------------------------------------------------------------------- class DB(unittest.TestCase): def setUp(self): self.path = get_new_database_path() self.db = db.DB() def tearDown(self): self.db.close() del self.db test_support.unlink(self.path) class DB_general(DB) : def test_get_open_flags(self) : self.db.open(self.path, dbtype=db.DB_HASH, flags = db.DB_CREATE) self.assertEqual(db.DB_CREATE, self.db.get_open_flags()) def test_get_open_flags2(self) : self.db.open(self.path, dbtype=db.DB_HASH, flags = db.DB_CREATE | db.DB_THREAD) self.assertEqual(db.DB_CREATE | db.DB_THREAD, self.db.get_open_flags()) def test_get_dbname_filename(self) : self.db.open(self.path, dbtype=db.DB_HASH, flags = db.DB_CREATE) self.assertEqual((self.path, None), self.db.get_dbname()) def test_get_dbname_filename_database(self) : name = "jcea-random-name" self.db.open(self.path, dbname=name, dbtype=db.DB_HASH, flags = db.DB_CREATE) self.assertEqual((self.path, name), self.db.get_dbname()) def test_bt_minkey(self) : for i in [17, 108, 1030] : self.db.set_bt_minkey(i) self.assertEqual(i, self.db.get_bt_minkey()) def test_lorder(self) : self.db.set_lorder(1234) self.assertEqual(1234, self.db.get_lorder()) self.db.set_lorder(4321) self.assertEqual(4321, self.db.get_lorder()) self.assertRaises(db.DBInvalidArgError, self.db.set_lorder, 9182) def test_priority(self) : flags = [db.DB_PRIORITY_VERY_LOW, db.DB_PRIORITY_LOW, db.DB_PRIORITY_DEFAULT, db.DB_PRIORITY_HIGH, db.DB_PRIORITY_VERY_HIGH] for flag in flags : self.db.set_priority(flag) self.assertEqual(flag, self.db.get_priority()) def test_get_transactional(self) : self.assertFalse(self.db.get_transactional()) self.db.open(self.path, dbtype=db.DB_HASH, flags = db.DB_CREATE) self.assertFalse(self.db.get_transactional()) class DB_hash(DB) : def test_h_ffactor(self) : for ffactor in [4, 16, 256] : self.db.set_h_ffactor(ffactor) self.assertEqual(ffactor, self.db.get_h_ffactor()) def test_h_nelem(self) : for nelem in [1, 2, 4] : nelem = nelem*1024*1024 # Millions self.db.set_h_nelem(nelem) self.assertEqual(nelem, self.db.get_h_nelem()) def test_pagesize(self) : for i in range(9, 17) : # From 512 to 
65536 i = 1<<i self.db.set_pagesize(i) self.assertEqual(i, self.db.get_pagesize()) """ Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """ TestCases for exercising a Queue DB. """ import os, string from pprint import pprint import unittest from .test_all import db, verbose, get_new_database_path #---------------------------------------------------------------------- class SimpleQueueTestCase(unittest.TestCase): def setUp(self): self.filename = get_new_database_path() def tearDown(self): try: os.remove(self.filename) except os.error: pass def test01_basic(self): # Basic Queue tests using the deprecated DBCursor.consume method. if verbose: print('\n', '-=' * 30) print("Running %s.test01_basic..." % self.__class__.__name__) d = db.DB() d.set_re_len(40) # Queues must be fixed length d.open(self.filename, db.DB_QUEUE, db.DB_CREATE) if verbose: print("before appends" + '-' * 30) pprint(d.stat()) for x in string.letters: d.append(x * 40) self.assertEqual(len(d), len(string.letters)) d.put(100, "some more data") d.put(101, "and some more ") d.put(75, "out of order") d.put(1, "replacement data") self.assertEqual(len(d), len(string.letters)+3) if verbose: print("before close" + '-' * 30) pprint(d.stat()) d.close() del d d = db.DB() d.open(self.filename) if verbose: print("after open" + '-' * 30) pprint(d.stat()) # Test "txn" as a positional parameter d.append("one more", None) # Test "txn" as a keyword parameter d.append("another one", txn=None) c = d.cursor() if verbose: print("after append" + '-' * 30) pprint(d.stat()) rec = c.consume() while rec: if verbose: print(rec) rec = c.consume() c.close() if verbose: print("after consume loop" + '-' * 30) pprint(d.stat()) self.assertEqual(len(d), 0, \ "if you see this message then you need to rebuild " \ "Berkeley DB 3.1.17 with the patch in patches/qam_stat.diff") d.close() def test02_basicPost32(self): # Basic Queue tests using the new DB.consume method in DB 3.2+ # (No cursor needed) if verbose: print('\n', '-=' * 30) print("Running %s.test02_basicPost32..."
% self.__class__.__name__) d = db.DB() d.set_re_len(40) # Queues must be fixed length d.open(self.filename, db.DB_QUEUE, db.DB_CREATE) if verbose: print("before appends" + '-' * 30) pprint(d.stat()) for x in string.letters: d.append(x * 40) self.assertEqual(len(d), len(string.letters)) d.put(100, "some more data") d.put(101, "and some more ") d.put(75, "out of order") d.put(1, "replacement data") self.assertEqual(len(d), len(string.letters)+3) if verbose: print("before close" + '-' * 30) pprint(d.stat()) d.close() del d d = db.DB() d.open(self.filename) #d.set_get_returns_none(true) if verbose: print("after open" + '-' * 30) pprint(d.stat()) d.append("one more") if verbose: print("after append" + '-' * 30) pprint(d.stat()) rec = d.consume() while rec: if verbose: print(rec) rec = d.consume() if verbose: print("after consume loop" + '-' * 30) pprint(d.stat()) d.close() #---------------------------------------------------------------------- def test_suite(): return unittest.makeSuite(SimpleQueueTestCase) if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib3/bsddb/test/test_pickle.py0000644000000000000000000000701312363206577020337 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
""" import os import pickle import sys if sys.version_info[0] < 3 : try: import pickle except ImportError: cPickle = None else : cPickle = None import unittest from .test_all import db, test_support, get_new_environment_path, get_new_database_path #---------------------------------------------------------------------- class pickleTestCase(unittest.TestCase): """Verify that DBError can be pickled and unpickled""" db_name = 'test-dbobj.db' def setUp(self): self.homeDir = get_new_environment_path() def tearDown(self): if hasattr(self, 'db'): del self.db if hasattr(self, 'env'): del self.env test_support.rmtree(self.homeDir) def _base_test_pickle_DBError(self, pickle): self.env = db.DBEnv() self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL) self.db = db.DB(self.env) self.db.open(self.db_name, db.DB_HASH, db.DB_CREATE) self.db.put('spam', 'eggs') self.assertEqual(self.db['spam'], 'eggs') try: self.db.put('spam', 'ham', flags=db.DB_NOOVERWRITE) except db.DBError as egg: pickledEgg = pickle.dumps(egg) #print repr(pickledEgg) rottenEgg = pickle.loads(pickledEgg) if rottenEgg.args != egg.args or type(rottenEgg) != type(egg): raise Exception(rottenEgg, '!=', egg) else: raise Exception("where's my DBError exception?!?") self.db.close() self.env.close() def test01_pickle_DBError(self): self._base_test_pickle_DBError(pickle=pickle) if cPickle: def test02_cPickle_DBError(self): self._base_test_pickle_DBError(pickle=cPickle) #---------------------------------------------------------------------- def test_suite(): return unittest.makeSuite(pickleTestCase) if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib3/bsddb/test/test_dbenv.py0000644000000000000000000004647312363206633020174 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
""" import unittest import os, glob from .test_all import db, test_support, get_new_environment_path, \ get_new_database_path #---------------------------------------------------------------------- class DBEnv(unittest.TestCase): def setUp(self): self.homeDir = get_new_environment_path() self.env = db.DBEnv() def tearDown(self): self.env.close() del self.env test_support.rmtree(self.homeDir) class DBEnv_general(DBEnv) : def test_get_open_flags(self) : flags = db.DB_CREATE | db.DB_INIT_MPOOL self.env.open(self.homeDir, flags) self.assertEqual(flags, self.env.get_open_flags()) def test_get_open_flags2(self) : flags = db.DB_CREATE | db.DB_INIT_MPOOL | \ db.DB_INIT_LOCK | db.DB_THREAD self.env.open(self.homeDir, flags) self.assertEqual(flags, self.env.get_open_flags()) def test_lk_partitions(self) : for i in [10, 20, 40] : self.env.set_lk_partitions(i) self.assertEqual(i, self.env.get_lk_partitions()) def test_getset_intermediate_dir_mode(self) : self.assertEqual(None, self.env.get_intermediate_dir_mode()) for mode in ["rwx------", "rw-rw-rw-", "rw-r--r--"] : self.env.set_intermediate_dir_mode(mode) self.assertEqual(mode, self.env.get_intermediate_dir_mode()) self.assertRaises(db.DBInvalidArgError, self.env.set_intermediate_dir_mode, "abcde") def test_thread(self) : for i in [16, 100, 1000] : self.env.set_thread_count(i) self.assertEqual(i, self.env.get_thread_count()) def test_cache_max(self) : for size in [64, 128] : size = size*1024*1024 # Megabytes self.env.set_cache_max(0, size) size2 = self.env.get_cache_max() self.assertEqual(0, size2[0]) self.assertTrue(size <= size2[1]) self.assertTrue(2*size > size2[1]) def test_mutex_stat(self) : self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK) stat = self.env.mutex_stat() self.assertTrue("mutex_inuse_max" in stat) def test_lg_filemode(self) : for i in [0o600, 0o660, 0o666] : self.env.set_lg_filemode(i) self.assertEqual(i, self.env.get_lg_filemode()) def test_mp_max_openfd(self) : for i in [17, 31, 42] : self.env.set_mp_max_openfd(i) self.assertEqual(i, self.env.get_mp_max_openfd()) def test_mp_max_write(self) : for i in [100, 200, 300] : for j in [1, 2, 3] : j *= 1000000 self.env.set_mp_max_write(i, j) v=self.env.get_mp_max_write() self.assertEqual((i, j), v) def test_invalid_txn(self) : # This environment doesn't support transactions self.assertRaises(db.DBInvalidArgError, self.env.txn_begin) def test_mp_mmapsize(self) : for i in [16, 32, 64] : i *= 1024*1024 self.env.set_mp_mmapsize(i) self.assertEqual(i, self.env.get_mp_mmapsize()) def test_tmp_dir(self) : for i in ["a", "bb", "ccc"] : self.env.set_tmp_dir(i) self.assertEqual(i, self.env.get_tmp_dir()) def test_flags(self) : self.env.set_flags(db.DB_AUTO_COMMIT, 1) self.assertEqual(db.DB_AUTO_COMMIT, self.env.get_flags()) self.env.set_flags(db.DB_TXN_NOSYNC, 1) self.assertEqual(db.DB_AUTO_COMMIT | db.DB_TXN_NOSYNC, self.env.get_flags()) self.env.set_flags(db.DB_AUTO_COMMIT, 0) self.assertEqual(db.DB_TXN_NOSYNC, self.env.get_flags()) self.env.set_flags(db.DB_TXN_NOSYNC, 0) self.assertEqual(0, self.env.get_flags()) def test_lk_max_objects(self) : for i in [1000, 2000, 3000] : self.env.set_lk_max_objects(i) self.assertEqual(i, self.env.get_lk_max_objects()) def test_lk_max_locks(self) : for i in [1000, 2000, 3000] : self.env.set_lk_max_locks(i) self.assertEqual(i, self.env.get_lk_max_locks()) def test_lk_max_lockers(self) : for i in [1000, 2000, 3000] : self.env.set_lk_max_lockers(i) self.assertEqual(i, self.env.get_lk_max_lockers()) def test_lg_regionmax(self) : for i 
in [128, 256, 1000] : i = i*1024*1024 self.env.set_lg_regionmax(i) j = self.env.get_lg_regionmax() self.assertTrue(i <= j) self.assertTrue(2*i > j) def test_lk_detect(self) : flags= [db.DB_LOCK_DEFAULT, db.DB_LOCK_EXPIRE, db.DB_LOCK_MAXLOCKS, db.DB_LOCK_MINLOCKS, db.DB_LOCK_MINWRITE, db.DB_LOCK_OLDEST, db.DB_LOCK_RANDOM, db.DB_LOCK_YOUNGEST] flags.append(db.DB_LOCK_MAXWRITE) for i in flags : self.env.set_lk_detect(i) self.assertEqual(i, self.env.get_lk_detect()) def test_lg_dir(self) : for i in ["a", "bb", "ccc", "dddd"] : self.env.set_lg_dir(i) self.assertEqual(i, self.env.get_lg_dir()) def test_lg_bsize(self) : log_size = 70*1024 self.env.set_lg_bsize(log_size) self.assertTrue(self.env.get_lg_bsize() >= log_size) self.assertTrue(self.env.get_lg_bsize() < 4*log_size) self.env.set_lg_bsize(4*log_size) self.assertTrue(self.env.get_lg_bsize() >= 4*log_size) def test_setget_data_dirs(self) : dirs = ("a", "b", "c", "d") for i in dirs : self.env.set_data_dir(i) self.assertEqual(dirs, self.env.get_data_dirs()) def test_setget_cachesize(self) : cachesize = (0, 512*1024*1024, 3) self.env.set_cachesize(*cachesize) self.assertEqual(cachesize, self.env.get_cachesize()) cachesize = (0, 1*1024*1024, 5) self.env.set_cachesize(*cachesize) cachesize2 = self.env.get_cachesize() self.assertEqual(cachesize[0], cachesize2[0]) self.assertEqual(cachesize[2], cachesize2[2]) # Berkeley DB expands the cache 25% accounting overhead, # if the cache is small. self.assertEqual(125, int(100.0*cachesize2[1]/cachesize[1])) # You can not change configuration after opening # the environment. self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL) cachesize = (0, 2*1024*1024, 1) self.assertRaises(db.DBInvalidArgError, self.env.set_cachesize, *cachesize) cachesize3 = self.env.get_cachesize() self.assertEqual(cachesize2[0], cachesize3[0]) self.assertEqual(cachesize2[2], cachesize3[2]) # In Berkeley DB 5.1, the cachesize can change when opening the Env self.assertTrue(cachesize2[1] <= cachesize3[1]) def test_set_cachesize_dbenv_db(self) : # You can not configure the cachesize using # the database handle, if you are using an environment. d = db.DB(self.env) self.assertRaises(db.DBInvalidArgError, d.set_cachesize, 0, 1024*1024, 1) def test_setget_shm_key(self) : shm_key=137 self.env.set_shm_key(shm_key) self.assertEqual(shm_key, self.env.get_shm_key()) self.env.set_shm_key(shm_key+1) self.assertEqual(shm_key+1, self.env.get_shm_key()) # You can not change configuration after opening # the environment. self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL) # If we try to reconfigure cache after opening the # environment, core dump. self.assertRaises(db.DBInvalidArgError, self.env.set_shm_key, shm_key) self.assertEqual(shm_key+1, self.env.get_shm_key()) def test_mutex_setget_max(self) : v = self.env.mutex_get_max() v2 = v*2+1 self.env.mutex_set_max(v2) self.assertEqual(v2, self.env.mutex_get_max()) self.env.mutex_set_max(v) self.assertEqual(v, self.env.mutex_get_max()) # You can not change configuration after opening # the environment. self.env.open(self.homeDir, db.DB_CREATE) self.assertRaises(db.DBInvalidArgError, self.env.mutex_set_max, v2) def test_mutex_setget_increment(self) : v = self.env.mutex_get_increment() v2 = 127 self.env.mutex_set_increment(v2) self.assertEqual(v2, self.env.mutex_get_increment()) self.env.mutex_set_increment(v) self.assertEqual(v, self.env.mutex_get_increment()) # You can not change configuration after opening # the environment. 
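#----------------------------------------------------------------------
# Illustrative sketch, not part of the distribution, independent of the
# surrounding test: the rule these tests keep checking. Most DBEnv
# parameters must be set *before* open() and raise DBInvalidArgError
# afterwards. The environment directory used here is hypothetical.

import os
from bsddb3 import db

home = "/tmp/example-env"                  # hypothetical location
os.makedirs(home, exist_ok=True)
env = db.DBEnv()
env.set_cachesize(0, 2 * 1024 * 1024, 1)   # allowed: not open yet
env.open(home, db.DB_CREATE | db.DB_INIT_MPOOL)
try:
    env.set_cachesize(0, 4 * 1024 * 1024, 1)
except db.DBInvalidArgError:
    pass                                   # configuration frozen once open
env.close()
#----------------------------------------------------------------------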
self.env.open(self.homeDir, db.DB_CREATE) self.assertRaises(db.DBInvalidArgError, self.env.mutex_set_increment, v2) def test_mutex_setget_tas_spins(self) : self.env.mutex_set_tas_spins(0) # Default = BDB decides v = self.env.mutex_get_tas_spins() v2 = v*2+1 self.env.mutex_set_tas_spins(v2) self.assertEqual(v2, self.env.mutex_get_tas_spins()) self.env.mutex_set_tas_spins(v) self.assertEqual(v, self.env.mutex_get_tas_spins()) # In this case, you can change configuration # after opening the environment. self.env.open(self.homeDir, db.DB_CREATE) self.env.mutex_set_tas_spins(v2) def test_mutex_setget_align(self) : v = self.env.mutex_get_align() v2 = 64 if v == 64 : v2 = 128 self.env.mutex_set_align(v2) self.assertEqual(v2, self.env.mutex_get_align()) # Requires a nonzero power of two self.assertRaises(db.DBInvalidArgError, self.env.mutex_set_align, 0) self.assertRaises(db.DBInvalidArgError, self.env.mutex_set_align, 17) self.env.mutex_set_align(2*v2) self.assertEqual(2*v2, self.env.mutex_get_align()) # You can not change configuration after opening # the environment. self.env.open(self.homeDir, db.DB_CREATE) self.assertRaises(db.DBInvalidArgError, self.env.mutex_set_align, v2) class DBEnv_log(DBEnv) : def setUp(self): DBEnv.setUp(self) self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOG) def test_log_file(self) : log_file = self.env.log_file((1, 1)) self.assertEqual("log.0000000001", log_file[-14:]) # The version with transactions is checked in other test object def test_log_printf(self) : msg = "This is a test..." self.env.log_printf(msg) logc = self.env.log_cursor() self.assertTrue(msg in (logc.last()[1])) if db.version() >= (4, 7) : def test_log_config(self) : self.env.log_set_config(db.DB_LOG_DSYNC | db.DB_LOG_ZERO, 1) self.assertTrue(self.env.log_get_config(db.DB_LOG_DSYNC)) self.assertTrue(self.env.log_get_config(db.DB_LOG_ZERO)) self.env.log_set_config(db.DB_LOG_ZERO, 0) self.assertTrue(self.env.log_get_config(db.DB_LOG_DSYNC)) self.assertFalse(self.env.log_get_config(db.DB_LOG_ZERO)) class DBEnv_log_txn(DBEnv) : def setUp(self): DBEnv.setUp(self) self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOG | db.DB_INIT_TXN) if db.version() < (5, 2) : def test_tx_max(self) : txns=[] def tx() : for i in range(self.env.get_tx_max()) : txns.append(self.env.txn_begin()) tx() self.assertRaises(MemoryError, tx) # Abort the transactions before garbage collection, # to avoid "warnings". for i in txns : i.abort() # The version without transactions is checked in other test object def test_log_printf(self) : msg = "This is a test..." txn = self.env.txn_begin() self.env.log_printf(msg, txn=txn) txn.commit() logc = self.env.log_cursor() logc.last() # Skip the commit self.assertTrue(msg in (logc.prev()[1])) msg = "This is another test..." txn = self.env.txn_begin() self.env.log_printf(msg, txn=txn) txn.abort() # Do not store the new message logc.last() # Skip the abort self.assertTrue(msg not in (logc.prev()[1])) msg = "This is a third test..." 
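#----------------------------------------------------------------------
# Illustrative sketch, not part of the distribution, independent of the
# surrounding test method: writing an application message into the
# environment log with log_printf() and reading it back with a log
# cursor, as test_log_printf does. Assumes "env" is an open DBEnv with
# DB_INIT_LOG and DB_INIT_TXN (setup omitted).

from bsddb3 import db

txn = env.txn_begin()
env.log_printf("application marker", txn=txn)
txn.commit()

logc = env.log_cursor()
lsn, data = logc.last()        # the commit record...
lsn, data = logc.prev()        # ...the marker is the record before it
assert "application marker" in data
#----------------------------------------------------------------------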
txn = self.env.txn_begin() self.env.log_printf(msg, txn=txn) txn.commit() # Do not store the new message logc.last() # Skip the commit self.assertTrue(msg in (logc.prev()[1])) class DBEnv_memp(DBEnv): def setUp(self): DBEnv.setUp(self) self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOG) self.db = db.DB(self.env) self.db.open("test", db.DB_HASH, db.DB_CREATE, 0o660) def tearDown(self): self.db.close() del self.db DBEnv.tearDown(self) def test_memp_1_trickle(self) : self.db.put("hi", "bye") self.assertTrue(self.env.memp_trickle(100) > 0) # Preserve the order, do "memp_trickle" test first def test_memp_2_sync(self) : self.db.put("hi", "bye") self.env.memp_sync() # Full flush # Nothing to do... self.assertTrue(self.env.memp_trickle(100) == 0) self.db.put("hi", "bye2") self.env.memp_sync((1, 0)) # NOP, probably # Something to do... or not self.assertTrue(self.env.memp_trickle(100) >= 0) self.db.put("hi", "bye3") self.env.memp_sync((123, 99)) # Full flush # Nothing to do... self.assertTrue(self.env.memp_trickle(100) == 0) def test_memp_stat_1(self) : stats = self.env.memp_stat() # No param self.assertTrue(len(stats)==2) self.assertTrue("cache_miss" in stats[0]) stats = self.env.memp_stat(db.DB_STAT_CLEAR) # Positional param self.assertTrue("cache_miss" in stats[0]) stats = self.env.memp_stat(flags=0) # Keyword param self.assertTrue("cache_miss" in stats[0]) def test_memp_stat_2(self) : stats=self.env.memp_stat()[1] self.assertTrue(len(stats))==1 self.assertTrue("test" in stats) self.assertTrue("page_in" in stats["test"]) class DBEnv_logcursor(DBEnv): def setUp(self): DBEnv.setUp(self) self.env.open(self.homeDir, db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOG | db.DB_INIT_TXN) txn = self.env.txn_begin() self.db = db.DB(self.env) self.db.open("test", db.DB_HASH, db.DB_CREATE, 0o660, txn=txn) txn.commit() for i in ["2", "8", "20"] : txn = self.env.txn_begin() self.db.put(key = i, data = i*int(i), txn=txn) txn.commit() def tearDown(self): self.db.close() del self.db DBEnv.tearDown(self) def _check_return(self, value) : self.assertTrue(isinstance(value, tuple)) self.assertEqual(len(value), 2) self.assertTrue(isinstance(value[0], tuple)) self.assertEqual(len(value[0]), 2) self.assertTrue(isinstance(value[0][0], int)) self.assertTrue(isinstance(value[0][1], int)) self.assertTrue(isinstance(value[1], str)) # Preserve test order def test_1_first(self) : logc = self.env.log_cursor() v = logc.first() self._check_return(v) self.assertTrue((1, 1) < v[0]) self.assertTrue(len(v[1])>0) def test_2_last(self) : logc = self.env.log_cursor() lsn_first = logc.first()[0] v = logc.last() self._check_return(v) self.assertTrue(lsn_first < v[0]) def test_3_next(self) : logc = self.env.log_cursor() lsn_last = logc.last()[0] self.assertEqual(next(logc), None) lsn_first = logc.first()[0] v = next(logc) self._check_return(v) self.assertTrue(lsn_first < v[0]) self.assertTrue(lsn_last > v[0]) v2 = next(logc) self.assertTrue(v2[0] > v[0]) self.assertTrue(lsn_last > v2[0]) v3 = next(logc) self.assertTrue(v3[0] > v2[0]) self.assertTrue(lsn_last > v3[0]) def test_4_prev(self) : logc = self.env.log_cursor() lsn_first = logc.first()[0] self.assertEqual(logc.prev(), None) lsn_last = logc.last()[0] v = logc.prev() self._check_return(v) self.assertTrue(lsn_first < v[0]) self.assertTrue(lsn_last > v[0]) v2 = logc.prev() self.assertTrue(v2[0] < v[0]) self.assertTrue(lsn_first < v2[0]) v3 = logc.prev() self.assertTrue(v3[0] < v2[0]) self.assertTrue(lsn_first < v3[0]) def test_5_current(self) : logc = 
self.env.log_cursor() logc.first() v = next(logc) self.assertEqual(v, logc.current()) def test_6_set(self) : logc = self.env.log_cursor() logc.first() v = next(logc) self.assertNotEqual(v, next(logc)) self.assertNotEqual(v, next(logc)) self.assertEqual(v, logc.set(v[0])) def test_explicit_close(self) : logc = self.env.log_cursor() logc.close() self.assertRaises(db.DBCursorClosedError, logc.__next__) def test_implicit_close(self) : logc = [self.env.log_cursor() for i in range(10)] self.env.close() # This close should close too all its tree for i in logc : self.assertRaises(db.DBCursorClosedError, i.__next__) def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(DBEnv_general)) suite.addTest(unittest.makeSuite(DBEnv_memp)) suite.addTest(unittest.makeSuite(DBEnv_logcursor)) suite.addTest(unittest.makeSuite(DBEnv_log)) suite.addTest(unittest.makeSuite(DBEnv_log_txn)) return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib3/bsddb/test/test_dbshelve.py0000644000000000000000000003145412363206625020664 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """ TestCases for checking dbShelve objects. """ import os, string, sys import random import unittest from .test_all import db, dbshelve, test_support, verbose, \ get_new_environment_path, get_new_database_path #---------------------------------------------------------------------- # We want the objects to be comparable so we can test dbshelve.values # later on. 
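#----------------------------------------------------------------------
# Illustrative sketch, not part of the distribution: the basic shelf
# usage the test cases below cover. Values are pickled and unpickled
# transparently; under Python 3 the *keys* are still plain bytes.
# The file name is hypothetical; assumes the bindings are importable
# as "bsddb3".

from bsddb3 import dbshelve

shelf = dbshelve.open("example.shelve")    # DB_HASH + DB_CREATE by default
shelf[b"point"] = {"x": 1, "y": 2}         # any pickleable object
shelf[b"letters"] = ["a"] * 10
assert shelf[b"point"]["y"] == 2
for key in shelf.keys():
    print(key, shelf[key])
shelf.close()
#----------------------------------------------------------------------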
class DataClass: def __init__(self): self.value = random.random() def __repr__(self) : # For Python 3.0 comparison return "DataClass %f" %self.value def __cmp__(self, other): # For Python 2.x comparison return cmp(self.value, other) class DBShelveTestCase(unittest.TestCase): if sys.version_info < (2, 7) : def assertIn(self, a, b, msg=None) : return self.assertTrue(a in b, msg=msg) def setUp(self): if sys.version_info[0] >= 3 : from .test_all import do_proxy_db_py3k self._flag_proxy_db_py3k = do_proxy_db_py3k(False) self.filename = get_new_database_path() self.do_open() def tearDown(self): if sys.version_info[0] >= 3 : from .test_all import do_proxy_db_py3k do_proxy_db_py3k(self._flag_proxy_db_py3k) self.do_close() test_support.unlink(self.filename) def mk(self, key): """Turn key into an appropriate key type for this db""" # override in child class for RECNO if sys.version_info[0] < 3 : return key else : return bytes(key, "iso8859-1") # 8 bits def populateDB(self, d): for x in string.letters: d[self.mk('S' + x)] = 10 * x # add a string d[self.mk('I' + x)] = ord(x) # add an integer d[self.mk('L' + x)] = [x] * 10 # add a list inst = DataClass() # add an instance inst.S = 10 * x inst.I = ord(x) inst.L = [x] * 10 d[self.mk('O' + x)] = inst # overridable in derived classes to affect how the shelf is created/opened def do_open(self): self.d = dbshelve.open(self.filename) # and closed... def do_close(self): self.d.close() def test01_basics(self): if verbose: print('\n', '-=' * 30) print("Running %s.test01_basics..." % self.__class__.__name__) self.populateDB(self.d) self.d.sync() self.do_close() self.do_open() d = self.d l = len(d) k = list(d.keys()) s = d.stat() f = d.fd() if verbose: print("length:", l) print("keys:", k) print("stats:", s) self.assertEqual(0, self.mk('bad key') in d) self.assertEqual(1, self.mk('IA') in d) self.assertEqual(1, self.mk('OA') in d) d.delete(self.mk('IA')) del d[self.mk('OA')] self.assertEqual(0, self.mk('IA') in d) self.assertEqual(0, self.mk('OA') in d) self.assertEqual(len(d), l-2) values = [] for key in list(d.keys()): value = d[key] values.append(value) if verbose: print("%s: %s" % (key, value)) self.checkrec(key, value) dbvalues = list(d.values()) self.assertEqual(len(dbvalues), len(list(d.keys()))) # XXX: Convert all to strings. Please, improve values.sort(key=lambda x : str(x)) dbvalues.sort(key=lambda x : str(x)) self.assertEqual(repr(values), repr(dbvalues)) items = list(d.items()) self.assertEqual(len(items), len(values)) for key, value in items: self.checkrec(key, value) self.assertEqual(d.get(self.mk('bad key')), None) self.assertEqual(d.get(self.mk('bad key'), None), None) self.assertEqual(d.get(self.mk('bad key'), 'a string'), 'a string') self.assertEqual(d.get(self.mk('bad key'), [1, 2, 3]), [1, 2, 3]) d.set_get_returns_none(0) self.assertRaises(db.DBNotFoundError, d.get, self.mk('bad key')) d.set_get_returns_none(1) d.put(self.mk('new key'), 'new data') self.assertEqual(d.get(self.mk('new key')), 'new data') self.assertEqual(d[self.mk('new key')], 'new data') def test02_cursors(self): if verbose: print('\n', '-=' * 30) print("Running %s.test02_cursors..." 
% self.__class__.__name__) self.populateDB(self.d) d = self.d count = 0 c = d.cursor() rec = c.first() while rec is not None: count = count + 1 if verbose: print(rec) key, value = rec self.checkrec(key, value) # Hack to avoid conversion by 2to3 tool rec = getattr(c, "next")() del c self.assertEqual(count, len(d)) count = 0 c = d.cursor() rec = c.last() while rec is not None: count = count + 1 if verbose: print(rec) key, value = rec self.checkrec(key, value) rec = c.prev() self.assertEqual(count, len(d)) c.set(self.mk('SS')) key, value = c.current() self.checkrec(key, value) del c def test03_append(self): # NOTE: this is overridden in RECNO subclass, don't change its name. if verbose: print('\n', '-=' * 30) print("Running %s.test03_append..." % self.__class__.__name__) self.assertRaises(dbshelve.DBShelveError, self.d.append, 'unit test was here') def test04_iterable(self) : self.populateDB(self.d) d = self.d keys = list(d.keys()) keyset = set(keys) self.assertEqual(len(keyset), len(keys)) for key in d : self.assertIn(key, keyset) keyset.remove(key) self.assertEqual(len(keyset), 0) def checkrec(self, key, value): # override this in a subclass if the key type is different if sys.version_info[0] >= 3 : if isinstance(key, bytes) : key = key.decode("iso8859-1") # 8 bits x = key[1] if key[0] == 'S': self.assertEqual(type(value), str) self.assertEqual(value, 10 * x) elif key[0] == 'I': self.assertEqual(type(value), int) self.assertEqual(value, ord(x)) elif key[0] == 'L': self.assertEqual(type(value), list) self.assertEqual(value, [x] * 10) elif key[0] == 'O': if sys.version_info[0] < 3 : from types import InstanceType self.assertEqual(type(value), InstanceType) else : self.assertEqual(type(value), DataClass) self.assertEqual(value.S, 10 * x) self.assertEqual(value.I, ord(x)) self.assertEqual(value.L, [x] * 10) else: self.assertTrue(0, 'Unknown key type, fix the test') #---------------------------------------------------------------------- class BasicShelveTestCase(DBShelveTestCase): def do_open(self): self.d = dbshelve.DBShelf() self.d.open(self.filename, self.dbtype, self.dbflags) def do_close(self): self.d.close() class BTreeShelveTestCase(BasicShelveTestCase): dbtype = db.DB_BTREE dbflags = db.DB_CREATE class HashShelveTestCase(BasicShelveTestCase): dbtype = db.DB_HASH dbflags = db.DB_CREATE class ThreadBTreeShelveTestCase(BasicShelveTestCase): dbtype = db.DB_BTREE dbflags = db.DB_CREATE | db.DB_THREAD class ThreadHashShelveTestCase(BasicShelveTestCase): dbtype = db.DB_HASH dbflags = db.DB_CREATE | db.DB_THREAD #---------------------------------------------------------------------- class BasicEnvShelveTestCase(DBShelveTestCase): def do_open(self): self.env = db.DBEnv() self.env.open(self.homeDir, self.envflags | db.DB_INIT_MPOOL | db.DB_CREATE) self.filename = os.path.split(self.filename)[1] self.d = dbshelve.DBShelf(self.env) self.d.open(self.filename, self.dbtype, self.dbflags) def do_close(self): self.d.close() self.env.close() def setUp(self) : self.homeDir = get_new_environment_path() DBShelveTestCase.setUp(self) def tearDown(self): if sys.version_info[0] >= 3 : from .test_all import do_proxy_db_py3k do_proxy_db_py3k(self._flag_proxy_db_py3k) self.do_close() test_support.rmtree(self.homeDir) class EnvBTreeShelveTestCase(BasicEnvShelveTestCase): envflags = 0 dbtype = db.DB_BTREE dbflags = db.DB_CREATE class EnvHashShelveTestCase(BasicEnvShelveTestCase): envflags = 0 dbtype = db.DB_HASH dbflags = db.DB_CREATE class EnvThreadBTreeShelveTestCase(BasicEnvShelveTestCase): envflags = db.DB_THREAD 
dbtype = db.DB_BTREE dbflags = db.DB_CREATE | db.DB_THREAD class EnvThreadHashShelveTestCase(BasicEnvShelveTestCase): envflags = db.DB_THREAD dbtype = db.DB_HASH dbflags = db.DB_CREATE | db.DB_THREAD #---------------------------------------------------------------------- # test cases for a DBShelf in a RECNO DB. class RecNoShelveTestCase(BasicShelveTestCase): dbtype = db.DB_RECNO dbflags = db.DB_CREATE def setUp(self): BasicShelveTestCase.setUp(self) # pool to assign integer key values out of self.key_pool = list(range(1, 5000)) self.key_map = {} # map string keys to the number we gave them self.intkey_map = {} # reverse map of above def mk(self, key): if key not in self.key_map: self.key_map[key] = self.key_pool.pop(0) self.intkey_map[self.key_map[key]] = key return self.key_map[key] def checkrec(self, intkey, value): key = self.intkey_map[intkey] BasicShelveTestCase.checkrec(self, key, value) def test03_append(self): if verbose: print('\n', '-=' * 30) print("Running %s.test03_append..." % self.__class__.__name__) self.d[1] = 'spam' self.d[5] = 'eggs' self.assertEqual(6, self.d.append('spam')) self.assertEqual(7, self.d.append('baked beans')) self.assertEqual('spam', self.d.get(6)) self.assertEqual('spam', self.d.get(1)) self.assertEqual('baked beans', self.d.get(7)) self.assertEqual('eggs', self.d.get(5)) #---------------------------------------------------------------------- def test_suite(): suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(DBShelveTestCase)) suite.addTest(unittest.makeSuite(BTreeShelveTestCase)) suite.addTest(unittest.makeSuite(HashShelveTestCase)) suite.addTest(unittest.makeSuite(ThreadBTreeShelveTestCase)) suite.addTest(unittest.makeSuite(ThreadHashShelveTestCase)) suite.addTest(unittest.makeSuite(EnvBTreeShelveTestCase)) suite.addTest(unittest.makeSuite(EnvHashShelveTestCase)) suite.addTest(unittest.makeSuite(EnvThreadBTreeShelveTestCase)) suite.addTest(unittest.makeSuite(EnvThreadHashShelveTestCase)) suite.addTest(unittest.makeSuite(RecNoShelveTestCase)) return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib3/bsddb/test/test_lock.py0000644000000000000000000001722012363206624020012 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
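#----------------------------------------------------------------------
# Illustrative sketch, not part of the distribution: the RECNO-backed
# shelf shown in RecNoShelveTestCase above, with integer keys and
# append() handing out the next free record number. The file name is
# hypothetical.

from bsddb3 import db, dbshelve

shelf = dbshelve.open("example-recno.shelve", filetype=db.DB_RECNO)
shelf[1] = "spam"
shelf[5] = "eggs"
assert shelf.append("baked beans") == 6    # first slot after record 5
assert shelf.get(5) == "eggs"
shelf.close()
#----------------------------------------------------------------------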
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """ TestCases for testing the locking sub-system. """ import time import unittest from .test_all import db, test_support, verbose, have_threads, \ get_new_environment_path, get_new_database_path if have_threads : from threading import Thread import sys if sys.version_info[0] < 3 : from threading import currentThread else : from threading import current_thread as currentThread #---------------------------------------------------------------------- class LockingTestCase(unittest.TestCase): def setUp(self): self.homeDir = get_new_environment_path() self.env = db.DBEnv() self.env.open(self.homeDir, db.DB_THREAD | db.DB_INIT_MPOOL | db.DB_INIT_LOCK | db.DB_CREATE) def tearDown(self): self.env.close() test_support.rmtree(self.homeDir) def test01_simple(self): if verbose: print('\n', '-=' * 30) print("Running %s.test01_simple..." % self.__class__.__name__) anID = self.env.lock_id() if verbose: print("locker ID: %s" % anID) lock = self.env.lock_get(anID, "some locked thing", db.DB_LOCK_WRITE) if verbose: print("Aquired lock: %s" % lock) self.env.lock_put(lock) if verbose: print("Released lock: %s" % lock) self.env.lock_id_free(anID) def test02_threaded(self): if verbose: print('\n', '-=' * 30) print("Running %s.test02_threaded..." % self.__class__.__name__) threads = [] threads.append(Thread(target = self.theThread, args=(db.DB_LOCK_WRITE,))) threads.append(Thread(target = self.theThread, args=(db.DB_LOCK_READ,))) threads.append(Thread(target = self.theThread, args=(db.DB_LOCK_READ,))) threads.append(Thread(target = self.theThread, args=(db.DB_LOCK_WRITE,))) threads.append(Thread(target = self.theThread, args=(db.DB_LOCK_READ,))) threads.append(Thread(target = self.theThread, args=(db.DB_LOCK_READ,))) threads.append(Thread(target = self.theThread, args=(db.DB_LOCK_WRITE,))) threads.append(Thread(target = self.theThread, args=(db.DB_LOCK_WRITE,))) threads.append(Thread(target = self.theThread, args=(db.DB_LOCK_WRITE,))) for t in threads: import sys if sys.version_info[0] < 3 : t.setDaemon(True) else : t.daemon = True t.start() for t in threads: t.join() def test03_lock_timeout(self): self.env.set_timeout(0, db.DB_SET_LOCK_TIMEOUT) self.assertEqual(self.env.get_timeout(db.DB_SET_LOCK_TIMEOUT), 0) self.env.set_timeout(0, db.DB_SET_TXN_TIMEOUT) self.assertEqual(self.env.get_timeout(db.DB_SET_TXN_TIMEOUT), 0) self.env.set_timeout(123456, db.DB_SET_LOCK_TIMEOUT) self.assertEqual(self.env.get_timeout(db.DB_SET_LOCK_TIMEOUT), 123456) self.env.set_timeout(7890123, db.DB_SET_TXN_TIMEOUT) self.assertEqual(self.env.get_timeout(db.DB_SET_TXN_TIMEOUT), 7890123) def test04_lock_timeout2(self): self.env.set_timeout(0, db.DB_SET_LOCK_TIMEOUT) self.env.set_timeout(0, db.DB_SET_TXN_TIMEOUT) self.env.set_timeout(123456, db.DB_SET_LOCK_TIMEOUT) self.env.set_timeout(7890123, db.DB_SET_TXN_TIMEOUT) def deadlock_detection() : while not deadlock_detection.end : deadlock_detection.count = \ self.env.lock_detect(db.DB_LOCK_EXPIRE) if deadlock_detection.count : while not deadlock_detection.end : pass break 
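#----------------------------------------------------------------------
# Illustrative sketch, not part of the distribution, independent of the
# surrounding helper: the bare locking calls test01_simple uses.
# Assumes "env" is a DBEnv opened with DB_INIT_LOCK and DB_THREAD, as
# in setUp() above (setup omitted).

from bsddb3 import db

locker = env.lock_id()
lock = env.lock_get(locker, "some locked thing", db.DB_LOCK_WRITE)
# ... the resource named "some locked thing" is now write-locked ...
env.lock_put(lock)
env.lock_id_free(locker)
#----------------------------------------------------------------------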
time.sleep(0.01) deadlock_detection.end=False deadlock_detection.count=0 t=Thread(target=deadlock_detection) import sys if sys.version_info[0] < 3 : t.setDaemon(True) else : t.daemon = True t.start() self.env.set_timeout(100000, db.DB_SET_LOCK_TIMEOUT) anID = self.env.lock_id() anID2 = self.env.lock_id() self.assertNotEqual(anID, anID2) lock = self.env.lock_get(anID, "shared lock", db.DB_LOCK_WRITE) start_time=time.time() self.assertRaises(db.DBLockNotGrantedError, self.env.lock_get,anID2, "shared lock", db.DB_LOCK_READ) end_time=time.time() deadlock_detection.end=True # Floating point rounding self.assertTrue((end_time-start_time) >= 0.0999) self.env.lock_put(lock) t.join() self.env.lock_id_free(anID) self.env.lock_id_free(anID2) self.assertTrue(deadlock_detection.count>0) def theThread(self, lockType): import sys if sys.version_info[0] < 3 : name = currentThread().getName() else : name = currentThread().name if lockType == db.DB_LOCK_WRITE: lt = "write" else: lt = "read" anID = self.env.lock_id() if verbose: print("%s: locker ID: %s" % (name, anID)) for i in range(1000) : lock = self.env.lock_get(anID, "some locked thing", lockType) if verbose: print("%s: Aquired %s lock: %s" % (name, lt, lock)) self.env.lock_put(lock) if verbose: print("%s: Released %s lock: %s" % (name, lt, lock)) self.env.lock_id_free(anID) #---------------------------------------------------------------------- def test_suite(): suite = unittest.TestSuite() if have_threads: suite.addTest(unittest.makeSuite(LockingTestCase)) else: suite.addTest(unittest.makeSuite(LockingTestCase, 'test01')) return suite if __name__ == '__main__': unittest.main(defaultTest='test_suite') bsddb3-6.1.0/Lib3/bsddb/test/test_replication.py0000644000000000000000000005202612363206617021400 0ustar rootroot00000000000000""" Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ """TestCases for distributed transactions. 
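#----------------------------------------------------------------------
# Illustrative sketch, not part of the distribution: lock timeouts as
# exercised by test04_lock_timeout2 above. With a lock timeout set,
# and a detector thread calling lock_detect(DB_LOCK_EXPIRE) as in that
# test, a conflicting request fails with DBLockNotGrantedError instead
# of blocking forever. Assumes "env" is a lock-enabled DBEnv.

from bsddb3 import db

env.set_timeout(100000, db.DB_SET_LOCK_TIMEOUT)   # microseconds
me, other = env.lock_id(), env.lock_id()
lock = env.lock_get(me, "shared lock", db.DB_LOCK_WRITE)
try:
    env.lock_get(other, "shared lock", db.DB_LOCK_READ)   # times out
except db.DBLockNotGrantedError:
    pass
env.lock_put(lock)
#----------------------------------------------------------------------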
""" import os import time import unittest from .test_all import db, test_support, have_threads, verbose, \ get_new_environment_path, get_new_database_path #---------------------------------------------------------------------- class DBReplication(unittest.TestCase) : def setUp(self) : self.homeDirMaster = get_new_environment_path() self.homeDirClient = get_new_environment_path() self.dbenvMaster = db.DBEnv() self.dbenvClient = db.DBEnv() # Must use "DB_THREAD" because the Replication Manager will # be executed in other threads but will use the same environment. # http://forums.oracle.com/forums/thread.jspa?threadID=645788&tstart=0 self.dbenvMaster.open(self.homeDirMaster, db.DB_CREATE | db.DB_INIT_TXN | db.DB_INIT_LOG | db.DB_INIT_MPOOL | db.DB_INIT_LOCK | db.DB_INIT_REP | db.DB_RECOVER | db.DB_THREAD, 0o666) self.dbenvClient.open(self.homeDirClient, db.DB_CREATE | db.DB_INIT_TXN | db.DB_INIT_LOG | db.DB_INIT_MPOOL | db.DB_INIT_LOCK | db.DB_INIT_REP | db.DB_RECOVER | db.DB_THREAD, 0o666) self.confirmed_master=self.client_startupdone=False def confirmed_master(a,b,c) : if b==db.DB_EVENT_REP_MASTER : self.confirmed_master=True def client_startupdone(a,b,c) : if b==db.DB_EVENT_REP_STARTUPDONE : self.client_startupdone=True self.dbenvMaster.set_event_notify(confirmed_master) self.dbenvClient.set_event_notify(client_startupdone) #self.dbenvMaster.set_verbose(db.DB_VERB_REPLICATION, True) #self.dbenvMaster.set_verbose(db.DB_VERB_FILEOPS_ALL, True) #self.dbenvClient.set_verbose(db.DB_VERB_REPLICATION, True) #self.dbenvClient.set_verbose(db.DB_VERB_FILEOPS_ALL, True) self.dbMaster = self.dbClient = None def tearDown(self): if self.dbClient : self.dbClient.close() if self.dbMaster : self.dbMaster.close() # Here we assign dummy event handlers to allow GC of the test object. # Since the dummy handler doesn't use any outer scope variable, it # doesn't keep any reference to the test object. 
def dummy(*args) : pass self.dbenvMaster.set_event_notify(dummy) self.dbenvClient.set_event_notify(dummy) self.dbenvClient.close() self.dbenvMaster.close() test_support.rmtree(self.homeDirClient) test_support.rmtree(self.homeDirMaster) class DBReplicationManager(DBReplication) : def test01_basic_replication(self) : master_port = test_support.find_unused_port() client_port = test_support.find_unused_port() if db.version() >= (5, 2) : self.site = self.dbenvMaster.repmgr_site("127.0.0.1", master_port) self.site.set_config(db.DB_GROUP_CREATOR, True) self.site.set_config(db.DB_LOCAL_SITE, True) self.site2 = self.dbenvMaster.repmgr_site("127.0.0.1", client_port) self.site3 = self.dbenvClient.repmgr_site("127.0.0.1", master_port) self.site3.set_config(db.DB_BOOTSTRAP_HELPER, True) self.site4 = self.dbenvClient.repmgr_site("127.0.0.1", client_port) self.site4.set_config(db.DB_LOCAL_SITE, True) d = { db.DB_BOOTSTRAP_HELPER: [False, False, True, False], db.DB_GROUP_CREATOR: [True, False, False, False], db.DB_LEGACY: [False, False, False, False], db.DB_LOCAL_SITE: [True, False, False, True], db.DB_REPMGR_PEER: [False, False, False, False ], } for i, j in list(d.items()) : for k, v in \ zip([self.site, self.site2, self.site3, self.site4], j) : if v : self.assertTrue(k.get_config(i)) else : self.assertFalse(k.get_config(i)) self.assertNotEqual(self.site.get_eid(), self.site2.get_eid()) self.assertNotEqual(self.site3.get_eid(), self.site4.get_eid()) for i, j in zip([self.site, self.site2, self.site3, self.site4], \ [master_port, client_port, master_port, client_port]) : addr = i.get_address() self.assertEqual(addr, ("127.0.0.1", j)) for i in [self.site, self.site2] : self.assertEqual(i.get_address(), self.dbenvMaster.repmgr_site_by_eid(i.get_eid()).get_address()) for i in [self.site3, self.site4] : self.assertEqual(i.get_address(), self.dbenvClient.repmgr_site_by_eid(i.get_eid()).get_address()) else : self.dbenvMaster.repmgr_set_local_site("127.0.0.1", master_port) self.dbenvClient.repmgr_set_local_site("127.0.0.1", client_port) self.dbenvMaster.repmgr_add_remote_site("127.0.0.1", client_port) self.dbenvClient.repmgr_add_remote_site("127.0.0.1", master_port) self.dbenvMaster.rep_set_nsites(2) self.dbenvClient.rep_set_nsites(2) self.dbenvMaster.rep_set_priority(10) self.dbenvClient.rep_set_priority(0) self.dbenvMaster.rep_set_timeout(db.DB_REP_CONNECTION_RETRY,100123) self.dbenvClient.rep_set_timeout(db.DB_REP_CONNECTION_RETRY,100321) self.assertEqual(self.dbenvMaster.rep_get_timeout( db.DB_REP_CONNECTION_RETRY), 100123) self.assertEqual(self.dbenvClient.rep_get_timeout( db.DB_REP_CONNECTION_RETRY), 100321) self.dbenvMaster.rep_set_timeout(db.DB_REP_ELECTION_TIMEOUT, 100234) self.dbenvClient.rep_set_timeout(db.DB_REP_ELECTION_TIMEOUT, 100432) self.assertEqual(self.dbenvMaster.rep_get_timeout( db.DB_REP_ELECTION_TIMEOUT), 100234) self.assertEqual(self.dbenvClient.rep_get_timeout( db.DB_REP_ELECTION_TIMEOUT), 100432) self.dbenvMaster.rep_set_timeout(db.DB_REP_ELECTION_RETRY, 100345) self.dbenvClient.rep_set_timeout(db.DB_REP_ELECTION_RETRY, 100543) self.assertEqual(self.dbenvMaster.rep_get_timeout( db.DB_REP_ELECTION_RETRY), 100345) self.assertEqual(self.dbenvClient.rep_get_timeout( db.DB_REP_ELECTION_RETRY), 100543) self.dbenvMaster.repmgr_set_ack_policy(db.DB_REPMGR_ACKS_ALL) self.dbenvClient.repmgr_set_ack_policy(db.DB_REPMGR_ACKS_ALL) self.dbenvMaster.repmgr_start(1, db.DB_REP_MASTER); self.dbenvClient.repmgr_start(1, db.DB_REP_CLIENT); self.assertEqual(self.dbenvMaster.rep_get_nsites(),2) 
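#----------------------------------------------------------------------
# Illustrative sketch, not part of the distribution, condensed from the
# Berkeley DB >= 5.2 branch of the test above: bringing up one
# Replication Manager master site. Assumes "master_env" was opened
# with DB_INIT_REP | DB_INIT_TXN | DB_INIT_LOCK | DB_INIT_LOG |
# DB_INIT_MPOOL | DB_THREAD, as in setUp(); host and port are
# hypothetical.

from bsddb3 import db

site = master_env.repmgr_site("127.0.0.1", 4242)
site.set_config(db.DB_LOCAL_SITE, True)        # this is our own address
site.set_config(db.DB_GROUP_CREATOR, True)     # first site of a new group
master_env.rep_set_priority(10)
master_env.repmgr_set_ack_policy(db.DB_REPMGR_ACKS_ALL)
master_env.repmgr_start(1, db.DB_REP_MASTER)   # 1 message-processing thread
#----------------------------------------------------------------------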
self.assertEqual(self.dbenvClient.rep_get_nsites(),2) self.assertEqual(self.dbenvMaster.rep_get_priority(),10) self.assertEqual(self.dbenvClient.rep_get_priority(),0) self.assertEqual(self.dbenvMaster.repmgr_get_ack_policy(), db.DB_REPMGR_ACKS_ALL) self.assertEqual(self.dbenvClient.repmgr_get_ack_policy(), db.DB_REPMGR_ACKS_ALL) # The timeout is necessary in BDB 4.5, since DB_EVENT_REP_STARTUPDONE # is not generated if the master has no new transactions. # This is solved in BDB 4.6 (#15542). import time timeout = time.time()+10 while (time.time()= 3) if absolute_import : from . import db else : from . import db import collections MutableMapping = collections.MutableMapping class DBEnv: def __init__(self, *args, **kwargs): self._cobj = db.DBEnv(*args, **kwargs) def close(self, *args, **kwargs): return self._cobj.close(*args, **kwargs) def open(self, *args, **kwargs): return self._cobj.open(*args, **kwargs) def remove(self, *args, **kwargs): return self._cobj.remove(*args, **kwargs) def set_shm_key(self, *args, **kwargs): return self._cobj.set_shm_key(*args, **kwargs) def set_cachesize(self, *args, **kwargs): return self._cobj.set_cachesize(*args, **kwargs) def set_data_dir(self, *args, **kwargs): return self._cobj.set_data_dir(*args, **kwargs) def set_flags(self, *args, **kwargs): return self._cobj.set_flags(*args, **kwargs) def set_lg_bsize(self, *args, **kwargs): return self._cobj.set_lg_bsize(*args, **kwargs) def set_lg_dir(self, *args, **kwargs): return self._cobj.set_lg_dir(*args, **kwargs) def set_lg_max(self, *args, **kwargs): return self._cobj.set_lg_max(*args, **kwargs) def set_lk_detect(self, *args, **kwargs): return self._cobj.set_lk_detect(*args, **kwargs) def set_lk_max_locks(self, *args, **kwargs): return self._cobj.set_lk_max_locks(*args, **kwargs) def set_lk_max_lockers(self, *args, **kwargs): return self._cobj.set_lk_max_lockers(*args, **kwargs) def set_lk_max_objects(self, *args, **kwargs): return self._cobj.set_lk_max_objects(*args, **kwargs) def set_mp_mmapsize(self, *args, **kwargs): return self._cobj.set_mp_mmapsize(*args, **kwargs) def set_timeout(self, *args, **kwargs): return self._cobj.set_timeout(*args, **kwargs) def set_tmp_dir(self, *args, **kwargs): return self._cobj.set_tmp_dir(*args, **kwargs) def txn_begin(self, *args, **kwargs): return self._cobj.txn_begin(*args, **kwargs) def txn_checkpoint(self, *args, **kwargs): return self._cobj.txn_checkpoint(*args, **kwargs) def txn_stat(self, *args, **kwargs): return self._cobj.txn_stat(*args, **kwargs) def set_tx_max(self, *args, **kwargs): return self._cobj.set_tx_max(*args, **kwargs) def set_tx_timestamp(self, *args, **kwargs): return self._cobj.set_tx_timestamp(*args, **kwargs) def lock_detect(self, *args, **kwargs): return self._cobj.lock_detect(*args, **kwargs) def lock_get(self, *args, **kwargs): return self._cobj.lock_get(*args, **kwargs) def lock_id(self, *args, **kwargs): return self._cobj.lock_id(*args, **kwargs) def lock_put(self, *args, **kwargs): return self._cobj.lock_put(*args, **kwargs) def lock_stat(self, *args, **kwargs): return self._cobj.lock_stat(*args, **kwargs) def log_archive(self, *args, **kwargs): return self._cobj.log_archive(*args, **kwargs) def set_get_returns_none(self, *args, **kwargs): return self._cobj.set_get_returns_none(*args, **kwargs) def log_stat(self, *args, **kwargs): return self._cobj.log_stat(*args, **kwargs) def dbremove(self, *args, **kwargs): return self._cobj.dbremove(*args, **kwargs) def dbrename(self, *args, **kwargs): return self._cobj.dbrename(*args, **kwargs) def 
set_encrypt(self, *args, **kwargs): return self._cobj.set_encrypt(*args, **kwargs) def fileid_reset(self, *args, **kwargs): return self._cobj.fileid_reset(*args, **kwargs) def lsn_reset(self, *args, **kwargs): return self._cobj.lsn_reset(*args, **kwargs) class DB(MutableMapping): def __init__(self, dbenv, *args, **kwargs): # give it the proper DBEnv C object that its expecting self._cobj = db.DB(*((dbenv._cobj,) + args), **kwargs) # TODO are there other dict methods that need to be overridden? def __len__(self): return len(self._cobj) def __getitem__(self, arg): return self._cobj[arg] def __setitem__(self, key, value): self._cobj[key] = value def __delitem__(self, arg): del self._cobj[arg] def __iter__(self) : return self._cobj.__iter__() def append(self, *args, **kwargs): return self._cobj.append(*args, **kwargs) def associate(self, *args, **kwargs): return self._cobj.associate(*args, **kwargs) def close(self, *args, **kwargs): return self._cobj.close(*args, **kwargs) def consume(self, *args, **kwargs): return self._cobj.consume(*args, **kwargs) def consume_wait(self, *args, **kwargs): return self._cobj.consume_wait(*args, **kwargs) def cursor(self, *args, **kwargs): return self._cobj.cursor(*args, **kwargs) def delete(self, *args, **kwargs): return self._cobj.delete(*args, **kwargs) def fd(self, *args, **kwargs): return self._cobj.fd(*args, **kwargs) def get(self, *args, **kwargs): return self._cobj.get(*args, **kwargs) def pget(self, *args, **kwargs): return self._cobj.pget(*args, **kwargs) def get_both(self, *args, **kwargs): return self._cobj.get_both(*args, **kwargs) def get_byteswapped(self, *args, **kwargs): return self._cobj.get_byteswapped(*args, **kwargs) def get_size(self, *args, **kwargs): return self._cobj.get_size(*args, **kwargs) def get_type(self, *args, **kwargs): return self._cobj.get_type(*args, **kwargs) def join(self, *args, **kwargs): return self._cobj.join(*args, **kwargs) def key_range(self, *args, **kwargs): return self._cobj.key_range(*args, **kwargs) def has_key(self, *args, **kwargs): return self._cobj.has_key(*args, **kwargs) def items(self, *args, **kwargs): return self._cobj.items(*args, **kwargs) def keys(self, *args, **kwargs): return self._cobj.keys(*args, **kwargs) def open(self, *args, **kwargs): return self._cobj.open(*args, **kwargs) def put(self, *args, **kwargs): return self._cobj.put(*args, **kwargs) def remove(self, *args, **kwargs): return self._cobj.remove(*args, **kwargs) def rename(self, *args, **kwargs): return self._cobj.rename(*args, **kwargs) def set_bt_minkey(self, *args, **kwargs): return self._cobj.set_bt_minkey(*args, **kwargs) def set_bt_compare(self, *args, **kwargs): return self._cobj.set_bt_compare(*args, **kwargs) def set_cachesize(self, *args, **kwargs): return self._cobj.set_cachesize(*args, **kwargs) def set_dup_compare(self, *args, **kwargs) : return self._cobj.set_dup_compare(*args, **kwargs) def set_flags(self, *args, **kwargs): return self._cobj.set_flags(*args, **kwargs) def set_h_ffactor(self, *args, **kwargs): return self._cobj.set_h_ffactor(*args, **kwargs) def set_h_nelem(self, *args, **kwargs): return self._cobj.set_h_nelem(*args, **kwargs) def set_lorder(self, *args, **kwargs): return self._cobj.set_lorder(*args, **kwargs) def set_pagesize(self, *args, **kwargs): return self._cobj.set_pagesize(*args, **kwargs) def set_re_delim(self, *args, **kwargs): return self._cobj.set_re_delim(*args, **kwargs) def set_re_len(self, *args, **kwargs): return self._cobj.set_re_len(*args, **kwargs) def set_re_pad(self, *args, 
**kwargs): return self._cobj.set_re_pad(*args, **kwargs) def set_re_source(self, *args, **kwargs): return self._cobj.set_re_source(*args, **kwargs) def set_q_extentsize(self, *args, **kwargs): return self._cobj.set_q_extentsize(*args, **kwargs) def stat(self, *args, **kwargs): return self._cobj.stat(*args, **kwargs) def sync(self, *args, **kwargs): return self._cobj.sync(*args, **kwargs) def type(self, *args, **kwargs): return self._cobj.type(*args, **kwargs) def upgrade(self, *args, **kwargs): return self._cobj.upgrade(*args, **kwargs) def values(self, *args, **kwargs): return self._cobj.values(*args, **kwargs) def verify(self, *args, **kwargs): return self._cobj.verify(*args, **kwargs) def set_get_returns_none(self, *args, **kwargs): return self._cobj.set_get_returns_none(*args, **kwargs) def set_encrypt(self, *args, **kwargs): return self._cobj.set_encrypt(*args, **kwargs) class DBSequence: def __init__(self, *args, **kwargs): self._cobj = db.DBSequence(*args, **kwargs) def close(self, *args, **kwargs): return self._cobj.close(*args, **kwargs) def get(self, *args, **kwargs): return self._cobj.get(*args, **kwargs) def get_dbp(self, *args, **kwargs): return self._cobj.get_dbp(*args, **kwargs) def get_key(self, *args, **kwargs): return self._cobj.get_key(*args, **kwargs) def init_value(self, *args, **kwargs): return self._cobj.init_value(*args, **kwargs) def open(self, *args, **kwargs): return self._cobj.open(*args, **kwargs) def remove(self, *args, **kwargs): return self._cobj.remove(*args, **kwargs) def stat(self, *args, **kwargs): return self._cobj.stat(*args, **kwargs) def set_cachesize(self, *args, **kwargs): return self._cobj.set_cachesize(*args, **kwargs) def set_flags(self, *args, **kwargs): return self._cobj.set_flags(*args, **kwargs) def set_range(self, *args, **kwargs): return self._cobj.set_range(*args, **kwargs) def get_cachesize(self, *args, **kwargs): return self._cobj.get_cachesize(*args, **kwargs) def get_flags(self, *args, **kwargs): return self._cobj.get_flags(*args, **kwargs) def get_range(self, *args, **kwargs): return self._cobj.get_range(*args, **kwargs) bsddb3-6.1.0/Lib3/bsddb/dbrecio.py0000644000000000000000000001227412363206630016454 0ustar rootroot00000000000000 """ File-like objects that read from or write to a bsddb record. This implements (nearly) all stdio methods. f = DBRecIO(db, key, txn=None) f.close() # explicitly release resources held flag = f.isatty() # always false pos = f.tell() # get current position f.seek(pos) # set current position f.seek(pos, mode) # mode 0: absolute; 1: relative; 2: relative to EOF buf = f.read() # read until EOF buf = f.read(n) # read up to n bytes f.truncate([size]) # truncate file at to at most size (default: current pos) f.write(buf) # write at current position f.writelines(list) # for line in list: f.write(line) Notes: - fileno() is left unimplemented so that code which uses it triggers an exception early. - There's a simple test set (see end of this file) - not yet updated for DBRecIO. - readline() is not implemented yet. 
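#----------------------------------------------------------------------
# Illustrative sketch, not part of the distribution: reading part of a
# record through DBRecIO, which maps read()/write() onto the dlen/doff
# arguments of DB.get()/DB.put(). As the notes above say, the class is
# unfinished: it never learns the record length on its own, so this
# sketch sets "len" by hand, a workaround rather than documented API.
# Names are hypothetical.

from bsddb3 import db
from bsddb3.dbrecio import DBRecIO

d = db.DB()
d.open("example.db", db.DB_HASH, db.DB_CREATE)
d.put(b"key", b"0123456789")
f = DBRecIO(d, b"key")
f.len = 10                      # workaround: supply the record length
assert f.read(4) == b"0123"     # partial read, done via dlen/doff
f.close()
d.close()
#----------------------------------------------------------------------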
From: Itamar Shtull-Trauring """ import errno import string class DBRecIO: def __init__(self, db, key, txn=None): self.db = db self.key = key self.txn = txn self.len = None self.pos = 0 self.closed = 0 self.softspace = 0 def close(self): if not self.closed: self.closed = 1 del self.db, self.txn def isatty(self): if self.closed: raise ValueError("I/O operation on closed file") return 0 def seek(self, pos, mode = 0): if self.closed: raise ValueError("I/O operation on closed file") if mode == 1: pos = pos + self.pos elif mode == 2: pos = pos + self.len self.pos = max(0, pos) def tell(self): if self.closed: raise ValueError("I/O operation on closed file") return self.pos def read(self, n = -1): if self.closed: raise ValueError("I/O operation on closed file") if n < 0: newpos = self.len else: newpos = min(self.pos+n, self.len) dlen = newpos - self.pos r = self.db.get(self.key, txn=self.txn, dlen=dlen, doff=self.pos) self.pos = newpos return r __fixme = """ def readline(self, length=None): if self.closed: raise ValueError, "I/O operation on closed file" if self.buflist: self.buf = self.buf + string.joinfields(self.buflist, '') self.buflist = [] i = string.find(self.buf, '\n', self.pos) if i < 0: newpos = self.len else: newpos = i+1 if length is not None: if self.pos + length < newpos: newpos = self.pos + length r = self.buf[self.pos:newpos] self.pos = newpos return r def readlines(self, sizehint = 0): total = 0 lines = [] line = self.readline() while line: lines.append(line) total += len(line) if 0 < sizehint <= total: break line = self.readline() return lines """ def truncate(self, size=None): if self.closed: raise ValueError("I/O operation on closed file") if size is None: size = self.pos elif size < 0: raise IOError(errno.EINVAL, "Negative size not allowed") elif size < self.pos: self.pos = size self.db.put(self.key, "", txn=self.txn, dlen=self.len-size, doff=size) def write(self, s): if self.closed: raise ValueError("I/O operation on closed file") if not s: return if self.pos > self.len: self.buflist.append('\0'*(self.pos - self.len)) self.len = self.pos newpos = self.pos + len(s) self.db.put(self.key, s, txn=self.txn, dlen=len(s), doff=self.pos) self.pos = newpos def writelines(self, list): self.write(string.joinfields(list, '')) def flush(self): if self.closed: raise ValueError("I/O operation on closed file") """ # A little test suite def _test(): import sys if sys.argv[1:]: file = sys.argv[1] else: file = '/etc/passwd' lines = open(file, 'r').readlines() text = open(file, 'r').read() f = StringIO() for line in lines[:-2]: f.write(line) f.writelines(lines[-2:]) if f.getvalue() != text: raise RuntimeError, 'write failed' length = f.tell() print 'File length =', length f.seek(len(lines[0])) f.write(lines[1]) f.seek(0) print 'First line =', repr(f.readline()) here = f.tell() line = f.readline() print 'Second line =', repr(line) f.seek(-len(line), 1) line2 = f.read(len(line)) if line != line2: raise RuntimeError, 'bad result after seek back' f.seek(len(line2), 1) list = f.readlines() line = list[-1] f.seek(f.tell() - len(line)) line2 = f.read() if line != line2: raise RuntimeError, 'bad result after seek back from EOF' print 'Read', len(list), 'more lines' print 'File length =', f.tell() if f.tell() != length: raise RuntimeError, 'bad length' f.close() if __name__ == '__main__': _test() """ bsddb3-6.1.0/Lib3/bsddb/db.py0000644000000000000000000000422412363206572015433 0ustar rootroot00000000000000#---------------------------------------------------------------------- # Copyright (c) 1999-2001, 
Digital Creations, Fredericksburg, VA, USA # and Andrew Kuchling. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # o Redistributions of source code must retain the above copyright # notice, this list of conditions, and the disclaimer that follows. # # o Redistributions in binary form must reproduce the above copyright # notice, this list of conditions, and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # o Neither the name of Digital Creations nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY DIGITAL CREATIONS AND CONTRIBUTORS *AS # IS* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED # TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A # PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL DIGITAL # CREATIONS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS # OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR # TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE # USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH # DAMAGE. #---------------------------------------------------------------------- # This module is just a placeholder for possible future expansion, in # case we ever want to augment the stuff in _db in any way. For now # it just simply imports everything from _db. import sys absolute_import = (sys.version_info[0] >= 3) if not absolute_import : from _pybsddb import * from _pybsddb import __version__ else : from ._pybsddb import * from ._pybsddb import __version__ bsddb3-6.1.0/Lib3/bsddb/dbshelve.py0000644000000000000000000002467012363206602016643 0ustar rootroot00000000000000#!/usr/bin/env python #------------------------------------------------------------------------ # Copyright (c) 1997-2001 by Total Control Software # All Rights Reserved #------------------------------------------------------------------------ # # Module Name: dbShelve.py # # Description: A reimplementation of the standard shelve.py that # forces the use of cPickle, and DB. # # Creation Date: 11/3/97 3:39:04PM # # License: This is free software. You may use this software for any # purpose including modification/redistribution, so long as # this header remains intact and that you do not claim any # rights of ownership or authorship of this software. This # software has been tested, but no warranty is expressed or # implied. # # 13-Dec-2000: Updated to be used with the new bsddb3 package. # Added DBShelfCursor class. # #------------------------------------------------------------------------ """Manage shelves of pickled objects using bsddb database files for the storage. """ #------------------------------------------------------------------------ import sys absolute_import = (sys.version_info[0] >= 3) if absolute_import : from . import db else : from . 
import db if sys.version_info[0] >= 3 : import pickle # Will be converted to "pickle" by "2to3" else : import warnings with warnings.catch_warnings() : warnings.filterwarnings("ignore", category=DeprecationWarning) import pickle HIGHEST_PROTOCOL = pickle.HIGHEST_PROTOCOL def _dumps(object, protocol): return pickle.dumps(object, protocol=protocol) import collections MutableMapping = collections.MutableMapping #------------------------------------------------------------------------ def open(filename, flags=db.DB_CREATE, mode=0o660, filetype=db.DB_HASH, dbenv=None, dbname=None): """ A simple factory function for compatibility with the standard shelve.py module. It can be used like this, where key is a string and data is a pickleable object: from bsddb import dbshelve db = dbshelve.open(filename) db[key] = data db.close() """ if type(flags) == type(''): sflag = flags if sflag == 'r': flags = db.DB_RDONLY elif sflag == 'rw': flags = 0 elif sflag == 'w': flags = db.DB_CREATE elif sflag == 'c': flags = db.DB_CREATE elif sflag == 'n': flags = db.DB_TRUNCATE | db.DB_CREATE else: raise db.DBError("flags should be one of 'r', 'w', 'c' or 'n' or use the bsddb.db.DB_* flags") d = DBShelf(dbenv) d.open(filename, dbname, filetype, flags, mode) return d #--------------------------------------------------------------------------- class DBShelveError(db.DBError): pass class DBShelf(MutableMapping): """A shelf to hold pickled objects, built upon a bsddb DB object. It automatically pickles/unpickles data objects going to/from the DB. """ def __init__(self, dbenv=None): self.db = db.DB(dbenv) self._closed = True if HIGHEST_PROTOCOL: self.protocol = HIGHEST_PROTOCOL else: self.protocol = 1 def __del__(self): self.close() def __getattr__(self, name): """Many methods we can just pass through to the DB object. (See below) """ return getattr(self.db, name) #----------------------------------- # Dictionary access methods def __len__(self): return len(self.db) def __getitem__(self, key): data = self.db[key] return pickle.loads(data) def __setitem__(self, key, value): data = _dumps(value, self.protocol) self.db[key] = data def __delitem__(self, key): del self.db[key] def keys(self, txn=None): if txn is not None: return self.db.keys(txn) else: return list(self.db.keys()) def __iter__(self) : # XXX: Load all keys in memory :-( for k in list(self.db.keys()) : yield k # Do this when "DB" support iteration # Or is it enough to pass thru "getattr"? 
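#----------------------------------------------------------------------
# Illustrative sketch, not part of the distribution: the string flags
# accepted by the open() factory above, which map onto DB_* constants
# ('r' -> DB_RDONLY, 'rw' -> 0, 'w' and 'c' -> DB_CREATE,
# 'n' -> DB_TRUNCATE | DB_CREATE). The file name is hypothetical.

from bsddb3 import dbshelve

shelf = dbshelve.open("example.shelve", flags="c")   # create if missing
shelf[b"k"] = (1, 2, 3)
shelf.close()

readonly = dbshelve.open("example.shelve", flags="r")
assert readonly[b"k"] == (1, 2, 3)
readonly.close()
#----------------------------------------------------------------------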
# # def __iter__(self) : # return self.db.__iter__() def open(self, *args, **kwargs): self.db.open(*args, **kwargs) self._closed = False def close(self, *args, **kwargs): self.db.close(*args, **kwargs) self._closed = True def __repr__(self): if self._closed: return '' % (id(self)) else: return repr(dict(iter(self.items()))) def items(self, txn=None): if txn is not None: items = self.db.items(txn) else: items = list(self.db.items()) newitems = [] for k, v in items: newitems.append( (k, pickle.loads(v)) ) return newitems def values(self, txn=None): if txn is not None: values = self.db.values(txn) else: values = list(self.db.values()) return list(map(pickle.loads, values)) #----------------------------------- # Other methods def __append(self, value, txn=None): data = _dumps(value, self.protocol) return self.db.append(data, txn) def append(self, value, txn=None): if self.get_type() == db.DB_RECNO: return self.__append(value, txn=txn) raise DBShelveError("append() only supported when dbshelve opened with filetype=dbshelve.db.DB_RECNO") def associate(self, secondaryDB, callback, flags=0): def _shelf_callback(priKey, priData, realCallback=callback): # Safe in Python 2.x because expresion short circuit if sys.version_info[0] < 3 or isinstance(priData, bytes) : data = pickle.loads(priData) else : data = pickle.loads(bytes(priData, "iso8859-1")) # 8 bits return realCallback(priKey, data) return self.db.associate(secondaryDB, _shelf_callback, flags) #def get(self, key, default=None, txn=None, flags=0): def get(self, *args, **kw): # We do it with *args and **kw so if the default value wasn't # given nothing is passed to the extension module. That way # an exception can be raised if set_get_returns_none is turned # off. data = self.db.get(*args, **kw) try: return pickle.loads(data) except (EOFError, TypeError, pickle.UnpicklingError): return data # we may be getting the default value, or None, # so it doesn't need unpickled. def get_both(self, key, value, txn=None, flags=0): data = _dumps(value, self.protocol) data = self.db.get(key, data, txn, flags) return pickle.loads(data) def cursor(self, txn=None, flags=0): c = DBShelfCursor(self.db.cursor(txn, flags)) c.protocol = self.protocol return c def put(self, key, value, txn=None, flags=0): data = _dumps(value, self.protocol) return self.db.put(key, data, txn, flags) def join(self, cursorList, flags=0): raise NotImplementedError #---------------------------------------------- # Methods allowed to pass-through to self.db # # close, delete, fd, get_byteswapped, get_type, has_key, # key_range, open, remove, rename, stat, sync, # upgrade, verify, and all set_* methods. #--------------------------------------------------------------------------- class DBShelfCursor: """ """ def __init__(self, cursor): self.dbc = cursor def __del__(self): self.close() def __getattr__(self, name): """Some methods we can just pass through to the cursor object. 
(See below)""" return getattr(self.dbc, name) #---------------------------------------------- def dup(self, flags=0): c = DBShelfCursor(self.dbc.dup(flags)) c.protocol = self.protocol return c def put(self, key, value, flags=0): data = _dumps(value, self.protocol) return self.dbc.put(key, data, flags) def get(self, *args): count = len(args) # a method overloading hack method = getattr(self, 'get_%d' % count) method(*args) def get_1(self, flags): rec = self.dbc.get(flags) return self._extract(rec) def get_2(self, key, flags): rec = self.dbc.get(key, flags) return self._extract(rec) def get_3(self, key, value, flags): data = _dumps(value, self.protocol) rec = self.dbc.get(key, flags) return self._extract(rec) def current(self, flags=0): return self.get_1(flags|db.DB_CURRENT) def first(self, flags=0): return self.get_1(flags|db.DB_FIRST) def last(self, flags=0): return self.get_1(flags|db.DB_LAST) def next(self, flags=0): return self.get_1(flags|db.DB_NEXT) def prev(self, flags=0): return self.get_1(flags|db.DB_PREV) def consume(self, flags=0): return self.get_1(flags|db.DB_CONSUME) def next_dup(self, flags=0): return self.get_1(flags|db.DB_NEXT_DUP) def next_nodup(self, flags=0): return self.get_1(flags|db.DB_NEXT_NODUP) def prev_nodup(self, flags=0): return self.get_1(flags|db.DB_PREV_NODUP) def get_both(self, key, value, flags=0): data = _dumps(value, self.protocol) rec = self.dbc.get_both(key, flags) return self._extract(rec) def set(self, key, flags=0): rec = self.dbc.set(key, flags) return self._extract(rec) def set_range(self, key, flags=0): rec = self.dbc.set_range(key, flags) return self._extract(rec) def set_recno(self, recno, flags=0): rec = self.dbc.set_recno(recno, flags) return self._extract(rec) set_both = get_both def _extract(self, rec): if rec is None: return None else: key, data = rec # Safe in Python 2.x because expresion short circuit if sys.version_info[0] < 3 or isinstance(data, bytes) : return key, pickle.loads(data) else : return key, pickle.loads(bytes(data, "iso8859-1")) # 8 bits #---------------------------------------------- # Methods allowed to pass-through to self.dbc # # close, count, delete, get_recno, join_item #--------------------------------------------------------------------------- bsddb3-6.1.0/Lib3/bsddb/__init__.py0000644000000000000000000003456312363206637016620 0ustar rootroot00000000000000#---------------------------------------------------------------------- # Copyright (c) 1999-2001, Digital Creations, Fredericksburg, VA, USA # and Andrew Kuchling. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # o Redistributions of source code must retain the above copyright # notice, this list of conditions, and the disclaimer that follows. # # o Redistributions in binary form must reproduce the above copyright # notice, this list of conditions, and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # o Neither the name of Digital Creations nor the names of its # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY DIGITAL CREATIONS AND CONTRIBUTORS *AS # IS* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED # TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A # PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL DIGITAL # CREATIONS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS # OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR # TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE # USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH # DAMAGE. #---------------------------------------------------------------------- """Support for Berkeley DB 4.3 through 5.3 with a simple interface. For the full featured object oriented interface use the bsddb.db module instead. It mirrors the Oracle Berkeley DB C API. """ import sys absolute_import = (sys.version_info[0] >= 3) try: if absolute_import : from . import _pybsddb else : import _pybsddb from bsddb3.dbutils import DeadlockWrap as _DeadlockWrap except ImportError: # Remove ourselves from sys.modules import sys del sys.modules[__name__] raise # bsddb3 calls it db, but provide _db for backwards compatibility db = _db = _pybsddb __version__ = db.__version__ error = db.DBError # So bsddb.error will mean something... #---------------------------------------------------------------------- import sys, os from weakref import ref import collections MutableMapping = collections.MutableMapping class _iter_mixin(MutableMapping): def _make_iter_cursor(self): cur = _DeadlockWrap(self.db.cursor) key = id(cur) self._cursor_refs[key] = ref(cur, self._gen_cref_cleaner(key)) return cur def _gen_cref_cleaner(self, key): # use generate the function for the weakref callback here # to ensure that we do not hold a strict reference to cur # in the callback. return lambda ref: self._cursor_refs.pop(key, None) def __iter__(self): self._kill_iteration = False self._in_iter += 1 try: try: cur = self._make_iter_cursor() # FIXME-20031102-greg: race condition. cursor could # be closed by another thread before this call. # since we're only returning keys, we call the cursor # methods with flags=0, dlen=0, dofs=0 key = _DeadlockWrap(cur.first, 0,0,0)[0] yield key next = getattr(cur, "next") while 1: try: key = _DeadlockWrap(next, 0,0,0)[0] yield key except _db.DBCursorClosedError: if self._kill_iteration: raise RuntimeError('Database changed size ' 'during iteration.') cur = self._make_iter_cursor() # FIXME-20031101-greg: race condition. cursor could # be closed by another thread before this call. _DeadlockWrap(cur.set, key,0,0,0) next = getattr(cur, "next") except _db.DBNotFoundError: pass except _db.DBCursorClosedError: # the database was modified during iteration. abort. pass finally : self._in_iter -= 1 def iteritems(self): if not self.db: return self._kill_iteration = False self._in_iter += 1 try: try: cur = self._make_iter_cursor() # FIXME-20031102-greg: race condition. cursor could # be closed by another thread before this call. kv = _DeadlockWrap(cur.first) key = kv[0] yield kv next = getattr(cur, "next") while 1: try: kv = _DeadlockWrap(next) key = kv[0] yield kv except _db.DBCursorClosedError: if self._kill_iteration: raise RuntimeError('Database changed size ' 'during iteration.') cur = self._make_iter_cursor() # FIXME-20031101-greg: race condition. cursor could # be closed by another thread before this call. _DeadlockWrap(cur.set, key,0,0,0) next = getattr(cur, "next") except _db.DBNotFoundError: pass except _db.DBCursorClosedError: # the database was modified during iteration. abort. 
pass finally : self._in_iter -= 1 class _DBWithCursor(_iter_mixin): """ A simple wrapper around DB that makes it look like the bsddbobject in the old module. It uses a cursor as needed to provide DB traversal. """ def __init__(self, db): self.db = db self.db.set_get_returns_none(0) # FIXME-20031101-greg: I believe there is still the potential # for deadlocks in a multithreaded environment if someone # attempts to use the any of the cursor interfaces in one # thread while doing a put or delete in another thread. The # reason is that _checkCursor and _closeCursors are not atomic # operations. Doing our own locking around self.dbc, # self.saved_dbc_key and self._cursor_refs could prevent this. # TODO: A test case demonstrating the problem needs to be written. # self.dbc is a DBCursor object used to implement the # first/next/previous/last/set_location methods. self.dbc = None self.saved_dbc_key = None # a collection of all DBCursor objects currently allocated # by the _iter_mixin interface. self._cursor_refs = {} self._in_iter = 0 self._kill_iteration = False def __del__(self): self.close() def _checkCursor(self): if self.dbc is None: self.dbc = _DeadlockWrap(self.db.cursor) if self.saved_dbc_key is not None: _DeadlockWrap(self.dbc.set, self.saved_dbc_key) self.saved_dbc_key = None # This method is needed for all non-cursor DB calls to avoid # Berkeley DB deadlocks (due to being opened with DB_INIT_LOCK # and DB_THREAD to be thread safe) when intermixing database # operations that use the cursor internally with those that don't. def _closeCursors(self, save=1): if self.dbc: c = self.dbc self.dbc = None if save: try: self.saved_dbc_key = _DeadlockWrap(c.current, 0,0,0)[0] except db.DBError: pass _DeadlockWrap(c.close) del c for cref in list(self._cursor_refs.values()): c = cref() if c is not None: _DeadlockWrap(c.close) def _checkOpen(self): if self.db is None: raise error("BSDDB object has already been closed") def isOpen(self): return self.db is not None def __len__(self): self._checkOpen() return _DeadlockWrap(lambda: len(self.db)) # len(self.db) def __repr__(self) : if self.isOpen() : return repr(dict(_DeadlockWrap(self.db.items))) return repr(dict()) def __getitem__(self, key): self._checkOpen() return _DeadlockWrap(lambda: self.db[key]) # self.db[key] def __setitem__(self, key, value): self._checkOpen() self._closeCursors() if self._in_iter and key not in self: self._kill_iteration = True def wrapF(): self.db[key] = value _DeadlockWrap(wrapF) # self.db[key] = value def __delitem__(self, key): self._checkOpen() self._closeCursors() if self._in_iter and key in self: self._kill_iteration = True def wrapF(): del self.db[key] _DeadlockWrap(wrapF) # del self.db[key] def close(self): self._closeCursors(save=0) if self.dbc is not None: _DeadlockWrap(self.dbc.close) v = 0 if self.db is not None: v = _DeadlockWrap(self.db.close) self.dbc = None self.db = None return v def keys(self): self._checkOpen() return _DeadlockWrap(self.db.keys) def has_key(self, key): self._checkOpen() return _DeadlockWrap(self.db.has_key, key) def set_location(self, key): self._checkOpen() self._checkCursor() return _DeadlockWrap(self.dbc.set_range, key) def __next__(self): # Renamed by "2to3" self._checkOpen() self._checkCursor() rv = _DeadlockWrap(getattr(self.dbc, "next")) return rv if sys.version_info[0] >= 3 : # For "2to3" conversion next = __next__ def previous(self): self._checkOpen() self._checkCursor() rv = _DeadlockWrap(self.dbc.prev) return rv def first(self): self._checkOpen() # fix 1725856: don't needlessly 
try to restore our cursor position self.saved_dbc_key = None self._checkCursor() rv = _DeadlockWrap(self.dbc.first) return rv def last(self): self._checkOpen() # fix 1725856: don't needlessly try to restore our cursor position self.saved_dbc_key = None self._checkCursor() rv = _DeadlockWrap(self.dbc.last) return rv def sync(self): self._checkOpen() return _DeadlockWrap(self.db.sync) #---------------------------------------------------------------------- # Compatibility object factory functions def hashopen(file, flag='c', mode=0o666, pgsize=None, ffactor=None, nelem=None, cachesize=None, lorder=None, hflags=0): flags = _checkflag(flag, file) e = _openDBEnv(cachesize) d = db.DB(e) d.set_flags(hflags) if pgsize is not None: d.set_pagesize(pgsize) if lorder is not None: d.set_lorder(lorder) if ffactor is not None: d.set_h_ffactor(ffactor) if nelem is not None: d.set_h_nelem(nelem) d.open(file, db.DB_HASH, flags, mode) return _DBWithCursor(d) #---------------------------------------------------------------------- def btopen(file, flag='c', mode=0o666, btflags=0, cachesize=None, maxkeypage=None, minkeypage=None, pgsize=None, lorder=None): flags = _checkflag(flag, file) e = _openDBEnv(cachesize) d = db.DB(e) if pgsize is not None: d.set_pagesize(pgsize) if lorder is not None: d.set_lorder(lorder) d.set_flags(btflags) if minkeypage is not None: d.set_bt_minkey(minkeypage) if maxkeypage is not None: d.set_bt_maxkey(maxkeypage) d.open(file, db.DB_BTREE, flags, mode) return _DBWithCursor(d) #---------------------------------------------------------------------- def rnopen(file, flag='c', mode=0o666, rnflags=0, cachesize=None, pgsize=None, lorder=None, rlen=None, delim=None, source=None, pad=None): flags = _checkflag(flag, file) e = _openDBEnv(cachesize) d = db.DB(e) if pgsize is not None: d.set_pagesize(pgsize) if lorder is not None: d.set_lorder(lorder) d.set_flags(rnflags) if delim is not None: d.set_re_delim(delim) if rlen is not None: d.set_re_len(rlen) if source is not None: d.set_re_source(source) if pad is not None: d.set_re_pad(pad) d.open(file, db.DB_RECNO, flags, mode) return _DBWithCursor(d) #---------------------------------------------------------------------- def _openDBEnv(cachesize): e = db.DBEnv() if cachesize is not None: if cachesize >= 20480: e.set_cachesize(0, cachesize) else: raise error("cachesize must be >= 20480") e.set_lk_detect(db.DB_LOCK_DEFAULT) e.open('.', db.DB_PRIVATE | db.DB_CREATE | db.DB_THREAD | db.DB_INIT_LOCK | db.DB_INIT_MPOOL) return e def _checkflag(flag, file): if flag == 'r': flags = db.DB_RDONLY elif flag == 'rw': flags = 0 elif flag == 'w': flags = db.DB_CREATE elif flag == 'c': flags = db.DB_CREATE elif flag == 'n': flags = db.DB_CREATE #flags = db.DB_CREATE | db.DB_TRUNCATE # we used db.DB_TRUNCATE flag for this before but Berkeley DB # 4.2.52 changed to disallowed truncate with txn environments. if file is not None and os.path.isfile(file): os.unlink(file) else: raise error("flags should be one of 'r', 'w', 'c' or 'n'") return flags | db.DB_THREAD #---------------------------------------------------------------------- # This is a silly little hack that allows apps to continue to use the # DB_THREAD flag even on systems without threads without freaking out # Berkeley DB. # # This assumes that if Python was built with thread support then # Berkeley DB was too. 
try: # 2to3 automatically changes "import thread" to "import _thread" import _thread as T del T except ImportError: db.DB_THREAD = 0 #---------------------------------------------------------------------- bsddb3-6.1.0/test.py0000644000000000000000000000336212363167637014170 0ustar rootroot00000000000000#!/usr/bin/env python """ Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ import sys if sys.version_info[0] == 2 : import test2 as test else : # >= Python 3.0 import test3 as test sys.path.append("/home/pybsddb/trunk/build/lib.solaris-2.10-i86pc-3.1/bsddb3/") if __name__ == "__main__": test.process_args() bsddb3-6.1.0/PKG-INFO0000644000000000000000000000471212363235112013714 0ustar rootroot00000000000000Metadata-Version: 1.1 Name: bsddb3 Version: 6.1.0 Summary: Python bindings for Oracle Berkeley DB Home-page: http://www.jcea.es/programacion/pybsddb.htm Author: Jesus Cea, Robin Dunn, Gregory P. Smith, Andrew Kuchling, Barry Warsaw Author-email: pybsddb@jcea.es License: 3-clause BSD License Description: This module provides a nearly complete wrapping of the Oracle/Sleepycat C API for the Database Environment, Database, Cursor, Log Cursor, Sequence and Transaction objects, and each of these is exposed as a Python type in the bsddb3.db module. The database objects can use various access methods: btree, hash, recno, and queue. Complete support of Berkeley DB distributed transactions. Complete support for Berkeley DB Replication Manager. Complete support for Berkeley DB Base Replication. Support for RPC. Please see the documents in the docs directory of the source distribution or at the website for more details on the types and methods provided. The goal is to mirror most of the real Berkeley DB API so fall back to the Oracle Berkeley DB documentation as appropriate. If you need to support ancient versiones of Python and/or Berkeley DB , you can use old releases of this bindings. 
`Homepage `__ -- `Documentation `__ -- `Mailing List `__ -- `Donation `__ Platform: UNKNOWN Classifier: License :: OSI Approved :: BSD License Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: Intended Audience :: Information Technology Classifier: Natural Language :: English Classifier: Natural Language :: Spanish Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Topic :: Database Classifier: Topic :: Software Development Classifier: Topic :: System :: Clustering Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.2 Classifier: Programming Language :: Python :: 3.3 Classifier: Programming Language :: Python :: 3.4 bsddb3-6.1.0/setup.py0000644000000000000000000000313712363167637014351 0ustar rootroot00000000000000#!/usr/bin/env python """ Copyright (c) 2008-2014, Jesus Cea Avion All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Jesus Cea Avion nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ import sys if sys.version_info[0] == 2 : import setup2 else : # >= Python 3.0 import setup3 bsddb3-6.1.0/setup.cfg0000644000000000000000000000013012363235112014426 0ustar rootroot00000000000000[sdist] formats = gztar,zip [egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 bsddb3-6.1.0/docs/0000755000000000000000000000000012363235112013543 5ustar rootroot00000000000000bsddb3-6.1.0/docs/html/0000755000000000000000000000000012363235112014507 5ustar rootroot00000000000000bsddb3-6.1.0/docs/html/dbenv.html0000644000000000000000000027570212247657276016534 0ustar rootroot00000000000000 DBEnv — PyBSDDB 6.0.0 documentation

DBEnv¶

Read the Oracle documentation for a better understanding.

More info...

DBEnv Attributes¶

DBEnv(flags=0)¶

db_home: database home directory (read-only)

DBEnv Methods¶

DBEnv(flags=0)

Constructor. More info...

set_rpc_server(host, cl_timeout=0, sv_timeout=0)¶

Establishes a connection for this dbenv to an RPC server. This function is not available if linked to Berkeley DB 4.8 or later. More info...

close(flags=0)¶

Close the database environment, freeing resources. More info...

open(homedir, flags=0, mode=0660)¶

Prepare the database environment for use. More info...
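
A minimal sketch of creating, opening and closing an environment (the home directory and the flag combination are illustrative; request only the subsystems your application needs):

from bsddb3 import db

dbenv = db.DBEnv()
# The home directory must already exist; DB_CREATE creates the
# environment region files, not the directory itself.
dbenv.open("/tmp/example-env",
           db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK |
           db.DB_INIT_LOG | db.DB_INIT_TXN | db.DB_THREAD)
try:
    pass  # work with databases inside the environment here
finally:
    dbenv.close()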

log_cursor()¶

Returns a created log cursor. More info...

memp_stat(flags=0)¶

Returns the memory pool (that is, the buffer cache) subsystem statistics.

The return value is a tuple. The first element is a dictionary with the general stats. The second element is another dictionary, keyed by filename, whose values are the stats for each file.

The first dictionary contains these data:

gbytes Gigabytes of cache (total cache size is st_gbytes + st_bytes).
bytes Bytes of cache (total cache size is st_gbytes + st_bytes).
ncache Number of caches.
max_ncache Maximum number of caches, as configured with the DB_ENV->set_cache_max() method.
regsize Individual cache size, in bytes.
mmapsize Maximum memory-mapped file size.
maxopenfd Maximum open file descriptors.
maxwrite Maximum sequential buffer writes.
maxwrite_sleep Microseconds to pause after writing maximum sequential buffers.
map Requested pages mapped into the process’ address space (there is no available information about whether or not this request caused disk I/O, although examining the application page fault rate may be helpful).
cache_hit Requested pages found in the cache.
cache_miss Requested pages not found in the cache.
page_create Pages created in the cache.
page_in Pages read into the cache.
page_out Pages written from the cache to the backing file.
ro_evict Clean pages forced from the cache.
rw_evict Dirty pages forced from the cache.
page_trickle Dirty pages written using the DB_ENV->memp_trickle() method.
pages Pages in the cache.
page_clean Clean pages currently in the cache.
page_dirty Dirty pages currently in the cache.
hash_buckets Number of hash buckets in buffer hash table.
hash_searches Total number of buffer hash table lookups.
hash_longest Longest chain ever encountered in buffer hash table lookups.
hash_examined Total number of hash elements traversed during hash table lookups.
hash_nowait Number of times that a thread of control was able to obtain a hash bucket lock without waiting.
hash_wait Number of times that a thread of control was forced to wait before obtaining a hash bucket lock.
hash_max_nowait The number of times a thread of control was able to obtain the hash bucket lock without waiting on the bucket which had the maximum number of times that a thread of control needed to wait.
hash_max_wait Maximum number of times any hash bucket lock was waited for by a thread of control.
region_wait Number of times that a thread of control was forced to wait before obtaining a cache region mutex.
region_nowait Number of times that a thread of control was able to obtain a cache region mutex without waiting.
mvcc_frozen Number of buffers frozen.
mvcc_thawed Number of buffers thawed.
mvcc_freed Number of frozen buffers freed.
alloc Number of page allocations.
alloc_buckets Number of hash buckets checked during allocation.
alloc_max_buckets Maximum number of hash buckets checked during an allocation.
alloc_pages Number of pages checked during allocation.
alloc_max_pages Maximum number of pages checked during an allocation.
io_wait Number of operations blocked waiting for I/O to complete.
sync_interrupted Number of mpool sync operations interrupted.

The second dictionary contains these data:

pagesize Page size in bytes.
cache_hit Requested pages found in the cache.
cache_miss Requested pages not found in the cache.
map Requested pages mapped into the process’ address space.
page_create Pages created in the cache.
page_in Pages read into the cache.
page_out Pages written from the cache to the backing file.

More info...
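
A short sketch of reading a few of these statistics (key names as documented above):

general, per_file = dbenv.memp_stat()
hits, misses = general["cache_hit"], general["cache_miss"]
if hits + misses:
    print("cache hit ratio: %.1f%%" % (100.0 * hits / (hits + misses)))
for filename, stats in per_file.items():
    print(filename, stats["pagesize"])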

memp_stat_print(flags=0)¶

Displays cache subsystem statistical information. More info...

memp_sync(lsn=None)¶

Flushes modified pages in the cache to their backing files. If provided, lsn is a tuple: (file, offset). More info...

memp_trickle(percent)¶

Ensures that a specified percent of the pages in the cache are clean, by writing dirty pages to their backing files. More info...

remove(homedir, flags=0)¶

Remove a database environment. More info...

dbremove(file, database=None, txn=None, flags=0)¶

Removes the database specified by the file and database parameters. If no database is specified, the underlying file represented by file is removed, incidentally removing all of the databases it contained. More info...
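
For example (the file and database names are placeholders; the call must not be made while a handle on the database is open):

# remove one named database inside the file
dbenv.dbremove("data.db", "old-records")
# or remove the whole file, and every database it contains
dbenv.dbremove("data.db")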

dbrename(file, database=None, newname, txn=None, flags=0)¶

Renames the database specified by the file and database parameters to newname. If no database is specified, the underlying file represented by file is renamed, incidentally renaming all of the databases it contained. More info...

fileid_reset(file, flags=0)¶

All databases contain an ID string used to identify the database in the database environment cache. If a physical database file is copied, and used in the same environment as another file with the same ID strings, corruption can occur. The DB_ENV->fileid_reset method creates new ID strings for all of the databases in the physical file. More info...

get_thread_count()¶

Returns the thread count as set by the DB_ENV->set_thread_count() method. More info...

set_thread_count(count)¶

Declare an approximate number of threads in the database environment. The DB_ENV->set_thread_count() method must be called prior to opening the database environment if the DB_ENV->failchk() method will be used. The DB_ENV->set_thread_count() method does not set the maximum number of threads but is used to determine memory sizing and the thread control block reclamation policy. More info...

set_encrypt(passwd, flags=0)¶

Set the password used by the Berkeley DB library to perform encryption and decryption. More info...

get_encrypt_flags()¶

Returns the encryption flags. More info...

get_intermediate_dir_mode()¶

Returns the intermediate directory permissions.

Intermediate directories are directories needed for recovery. Normally, Berkeley DB does not create these directories and will do so only if the DB_ENV->set_intermediate_dir_mode() method is called.

More info...

set_intermediate_dir_mode(mode)¶

By default, Berkeley DB does not create intermediate directories needed for recovery, that is, if the file /a/b/c/mydatabase is being recovered, and the directory path b/c does not exist, recovery will fail. This default behavior is because Berkeley DB does not know what permissions are appropriate for intermediate directory creation, and creating the directory might result in a security problem.

The DB_ENV->set_intermediate_dir_mode() method causes Berkeley DB to create any intermediate directories needed during recovery, using the specified permissions.

More info...

get_timeout(flags)¶

Returns a timeout value, in microseconds. More info...

set_timeout(timeout, flags)¶

Sets timeout values for locks or transactions in the database environment. More info...
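
For example, to make lock and transaction requests time out instead of blocking indefinitely (values are in microseconds; the flag constant selects which timer is set):

dbenv.set_timeout(1000000, db.DB_SET_LOCK_TIMEOUT)  # 1 second for locks
dbenv.set_timeout(5000000, db.DB_SET_TXN_TIMEOUT)   # 5 seconds for transactions
print(dbenv.get_timeout(db.DB_SET_LOCK_TIMEOUT))

Note that timed-out lock requests are generally only noticed when the deadlock detector runs; see the Oracle documentation for details.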

get_mp_max_openfd()¶

Returns the maximum number of file descriptors the library will open concurrently when flushing dirty pages from the cache. More info...

set_mp_max_openfd(max_open_fd)¶

Limits the number of file descriptors the library will open concurrently when flushing dirty pages from the cache. More info...

get_mp_max_write()¶

Returns a tuple with the current maximum number of sequential write operations and microseconds to pause that the library can schedule when flushing dirty pages from the cache. More info...

set_mp_max_write(maxwrite, maxwrite_sleep)¶

Limits the number of sequential write operations scheduled by the library when flushing dirty pages from the cache. More info...

set_shm_key(key)¶

Specify a base segment ID for Berkeley DB environment shared memory regions created in system memory on VxWorks or systems supporting X/Open-style shared memory interfaces; for example, UNIX systems supporting shmget(2) and related System V IPC interfaces. More info...

get_shm_key()¶

Returns the base segment ID. More info...

set_cache_max(gbytes, bytes)¶

Sets the maximum cache size, in bytes. The specified size is rounded to the nearest multiple of the cache region size, which is the initial cache size divided by the number of regions specified to the DB_ENV->set_cachesize() method. If no value is specified, it defaults to the initial cache size. More info...

get_cache_max()¶

Returns the maximum size of the cache as set using the DB_ENV->set_cache_max() method. More info...

set_cachesize(gbytes, bytes, ncache=0)¶

Set the size of the shared memory buffer pool. More info...

get_cachesize()¶

Returns a tuple with the current size and composition of the cache. More info...
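
A sketch (set_cachesize must be called before the environment is opened; the sizes are illustrative):

dbenv = db.DBEnv()
dbenv.set_cachesize(0, 32 * 1024 * 1024, 1)  # 32MB in a single cache region
dbenv.open("/tmp/example-env", db.DB_CREATE | db.DB_INIT_MPOOL)
print(dbenv.get_cachesize())  # (gbytes, bytes, ncache)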

set_data_dir(dir)¶

Set the environment data directory. You can call this function multiple times, adding new directories. More info...

get_data_dirs()¶

Return a tuple with the directories. More info...

get_flags()¶

Returns the configuration flags set for a DB_ENV handle. More info...

set_flags(flags, onoff)¶

Set additional flags for the DBEnv. The onoff parameter specifies whether the flag is set or cleared. More info...

set_tmp_dir(dir)¶

Set the directory to be used for temporary files. More info...

get_tmp_dir()¶

Returns the database environment temporary file directory. More info...

set_get_returns_none(flag)¶

By default when DB.get or DBCursor.get, get_both, first, last, next or prev encounter a DB_NOTFOUND error they return None instead of raising DBNotFoundError. This behaviour emulates Python dictionaries and is convenient for looping.

You can use this method to toggle that behaviour for all of the aforementioned methods, or extend it to also apply to the DBCursor.set, set_both, set_range, and set_recno methods. Supported values of flag:

  • 0: all DB and DBCursor get and set methods will raise a DBNotFoundError rather than returning None.
  • 1 (default in module versions <4.2.4): the DB.get and DBCursor.get, get_both, first, last, next and prev methods return None.
  • 2 (default in module versions >=4.2.4): extends the behaviour of 1 to the DBCursor set, set_both, set_range and set_recno methods.

The default of returning None makes it easy to do things like this without having to catch DBNotFoundError (KeyError):

data = mydb.get(key)
if data:
    doSomething(data)

or this:

rec = cursor.first()
while rec:
    print(rec)
    rec = cursor.next()

Making the cursor set methods return None is useful in order to do this:

rec = cursor.set(key)
while rec:
    key, val = rec
    doSomething(key, val)
    rec = cursor.next()

The downside to this is that it is inconsistent with the rest of the package and noticeably diverges from the Oracle Berkeley DB API. If you prefer the get and set methods to raise an exception when a key is not found, use this method to tell them to do so.

Calling this method on a DBEnv object will set the default for all DB’s later created within that environment. Calling it on a DB object sets the behaviour for that DB only.

The previous setting is returned.
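
A sketch of temporarily restoring the exception-raising behaviour for a single database, using the returned previous setting (the key is a placeholder):

previous = mydb.set_get_returns_none(0)
try:
    mydb.get(b"no-such-key")
except db.DBNotFoundError:
    pass  # raised instead of returning None
mydb.set_get_returns_none(previous)  # restore the old behaviour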

set_private(object)¶

Link an object to the DBEnv object. This allows you to pass around an arbitrary object, for instance as callback context.

get_private()¶

Returns the object linked to the DBEnv.
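
For instance (the context object is arbitrary):

dbenv.set_private({"application": "example", "retries": 3})
context = dbenv.get_private()  # the same object, usable from callbacks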

get_open_flags()¶

Returns the current open method flags. That is, this method returns the flags that were specified when DB_ENV->open() was called. More info...

get_lg_filemode()¶

Returns the log file mode. More info...

set_lg_filemode(filemode)¶

Set the absolute file mode for created log files. More info...

get_lg_bsize()¶

Returns the size of the log buffer, in bytes. More info...

set_lg_bsize(size)¶

Set the size of the in-memory log buffer, in bytes. More info...

get_lg_dir()¶

Returns the log directory, which is the location for logging files. More info...

set_lg_dir(dir)¶

The path of a directory to be used as the location of logging files. Log files created by the Log Manager subsystem will be created in this directory. More info...

set_lg_max(size)¶

Set the maximum size of a single file in the log, in bytes. More info...

get_lg_max()¶

Returns the maximum log file size. More info...

get_lg_regionmax()¶

Returns the size of the underlying logging subsystem region. More info...

set_lg_regionmax(size)¶

Set the maximum size of a single region in the log, in bytes. More info...

get_lk_partitions()¶

Returns the number of lock table partitions used in the Berkeley DB environment. More info...

set_lk_partitions(partitions)¶

Set the number of lock table partitions in the Berkeley DB environment. More info...

get_lk_detect()¶

Returns the deadlock detector configuration. More info...

set_lk_detect(mode)¶

Set the automatic deadlock detection mode. More info...

set_lk_max(max)¶

Set the maximum number of locks. (This method is deprecated.) More info...

get_lk_max_locks()¶

Returns the maximum number of potential locks. More info...

set_lk_max_locks(max)¶

Set the maximum number of locks supported by the Berkeley DB lock subsystem. More info...

get_lk_max_lockers()¶

Returns the maximum number of potential lockers. More info...

set_lk_max_lockers(max)¶

Set the maximum number of simultaneous locking entities supported by the Berkeley DB lock subsystem. More info...

get_lk_max_objects()¶

Returns the maximum number of locked objects. More info...

set_lk_max_objects(max)¶

Set the maximum number of simultaneously locked objects supported by the Berkeley DB lock subsystem. More info...

get_mp_mmapsize()¶

Returns the maximum file size, in bytes, for a file to be mapped into the process address space. More info...

set_mp_mmapsize(size)¶

Files that are opened read-only in the memory pool (and that satisfy a few other criteria) are, by default, mapped into the process address space instead of being copied into the local cache. This can result in better-than-usual performance, as available virtual memory is normally much larger than the local cache, and page faults are faster than page copying on many systems. However, in the presence of limited virtual memory it can cause resource starvation, and in the presence of large databases, it can result in immense process sizes.

This method sets the maximum file size, in bytes, for a file to be mapped into the process address space. If no value is specified, it defaults to 10MB. More info...

stat_print(flags=0)¶

Displays the default subsystem statistical information. More info...

log_file(lsn)¶

Maps lsn to filenames, returning the name of the file containing the named record. More info...

log_printf(string, txn=None)¶

Appends an informational message to the Berkeley DB database environment log files. More info...

log_archive(flags=0)¶

Returns a list of log or database file names. By default, log_archive returns the names of all of the log files that are no longer in use (e.g., no longer involved in active transactions), and that may safely be archived for catastrophic recovery and then removed from the system. More info...
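
A common use is removing log files that are no longer needed, typically after a checkpoint and a backup (DB_ARCH_ABS asks for absolute path names; only do this if catastrophic recovery from these logs will never be required):

import os

dbenv.txn_checkpoint()
for logfile in dbenv.log_archive(db.DB_ARCH_ABS):
    os.remove(logfile)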

log_flush()¶

Force log records to disk. Useful if the environment, database or transactions are used as ACI instead of ACID (that is, without the durability guarantee); for example, if the environment is opened with DB_TXN_NOSYNC. More info...

log_get_config(which)¶

Returns whether the specified which parameter is currently set or not. You can manage this value using the DB_ENV->log_set_config() method. More info...

log_set_config(flags, onoff)¶

Configures the Berkeley DB logging subsystem. More info...

lock_detect(atype, flags=0)¶

Run one iteration of the deadlock detector, returns the number of transactions aborted. More info...

lock_get(locker, obj, lock_mode, flags=0)¶

Acquires a lock and returns a handle to it as a DBLock object. The locker parameter is an integer representing the entity doing the locking, and obj is an object representing the item to be locked. More info...

lock_id()¶

Acquires a locker id, guaranteed to be unique across all threads and processes that have the DBEnv open. More info...

lock_id_free(id)¶

Frees a locker ID allocated by the “dbenv.lock_id()” method. More info...

lock_put(lock)¶

Release the lock. More info...
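
A sketch of the low-level locking API (the lock object name is arbitrary, and the environment is assumed to have been opened with DB_INIT_LOCK):

locker = dbenv.lock_id()
lock = dbenv.lock_get(locker, b"my-resource", db.DB_LOCK_WRITE)
try:
    pass  # the resource is exclusively locked here
finally:
    dbenv.lock_put(lock)
    dbenv.lock_id_free(locker)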

lock_stat(flags=0)¶

Returns a dictionary of locking subsystem statistics with the following keys:

id Last allocated lock ID.
cur_maxid The current maximum unused locker ID.
nmodes Number of lock modes.
maxlocks Maximum number of locks possible.
maxlockers Maximum number of lockers possible.
maxobjects Maximum number of objects possible.
nlocks Number of current locks.
maxnlocks Maximum number of locks at once.
nlockers Number of current lockers.
nobjects Number of current lock objects.
maxnobjects Maximum number of lock objects at once.
maxnlockers Maximum number of lockers at once.
nrequests Total number of locks requested.
nreleases Total number of locks released.
nupgrade Total number of locks upgraded.
ndowngrade Total number of locks downgraded.
lock_wait The number of lock requests not immediately available due to conflicts, for which the thread of control waited.
lock_nowait The number of lock requests not immediately available due to conflicts, for which the thread of control did not wait.
ndeadlocks Number of deadlocks.
locktimeout Lock timeout value.
nlocktimeouts The number of lock requests that have timed out.
txntimeout Transaction timeout value.
ntxntimeouts The number of transactions that have timed out. This value is also a component of ndeadlocks, the total number of deadlocks detected.
objs_wait The number of requests to allocate or deallocate an object for which the thread of control waited.
objs_nowait The number of requests to allocate or deallocate an object for which the thread of control did not wait.
lockers_wait The number of requests to allocate or deallocate a locker for which the thread of control waited.
lockers_nowait The number of requests to allocate or deallocate a locker for which the thread of control did not wait.
locks_wait The number of requests to allocate or deallocate a lock structure for which the thread of control waited.
locks_nowait The number of requests to allocate or deallocate a lock structure for which the thread of control did not wait.
hash_len Maximum length of a lock hash bucket.
regsize Size of the region.
region_wait Number of times a thread of control was forced to wait before obtaining the region lock.
region_nowait Number of times a thread of control was able to obtain the region lock without waiting.

More info...

lock_stat_print(flags=0)¶

Displays the locking subsystem statistical information. More info...

get_tx_max()¶

Returns the maximum number of active transactions, as set by the DB_ENV->set_tx_max() method. More info...

set_tx_max(max)¶

Set the maximum number of active transactions. More info...

get_tx_timestamp()¶

Returns the recovery timestamp. More info...

set_tx_timestamp(timestamp)¶

Recover to the time specified by timestamp rather than to the most current possible date. More info...

txn_begin(parent=None, flags=0)¶

Creates and begins a new transaction. A DBTxn object is returned. More info...
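
A minimal transactional update (assumes the environment was opened with DB_INIT_TXN and that mydb is a database handle opened transactionally):

txn = dbenv.txn_begin()
try:
    mydb.put(b"key", b"value", txn=txn)
except db.DBError:
    txn.abort()
    raise
else:
    txn.commit()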

txn_checkpoint(kbyte=0, min=0, flag=0)¶

Flushes the underlying memory pool, writes a checkpoint record to the log and then flushes the log. More info...

txn_stat(flags=0)¶

Return a dictionary of transaction statistics with the following keys:

last_ckp The LSN of the last checkpoint.
time_ckp Time the last completed checkpoint finished (as the number of seconds since the Epoch, returned by the IEEE/ANSI Std 1003.1 POSIX time interface).
last_txnid Last transaction ID allocated.
maxtxns Max number of active transactions possible.
nactive Number of transactions currently active.
maxnactive Max number of active transactions at once.
nsnapshot The number of transactions on the snapshot list. These are transactions which modified a database opened with DB_MULTIVERSION, and which have committed or aborted, but the copies of pages they created are still in the cache.
maxnsnapshot The maximum number of transactions on the snapshot list at any one time.
nbegins Number of transactions that have begun.
naborts Number of transactions that have aborted.
ncommits Number of transactions that have committed.
nrestores Number of transactions that have been restored.
regsize Size of the region.
region_wait Number of times that a thread of control was forced to wait before obtaining the region lock.
region_nowait Number of times that a thread of control was able to obtain the region lock without waiting.

More info...

txn_stat_print(flags=0)¶

Displays the transaction subsystem statistical information. More info...

lsn_reset(file=None, flags=0)¶

This method allows database files to be moved from one transactional database environment to another. More info...

log_stat(flags=0)¶

Returns a dictionary of logging subsystem statistics with the following keys:

magic The magic number that identifies a file as a log file.
version The version of the log file type.
mode The mode of any created log files.
lg_bsize The in-memory log record cache size.
lg_size The log file size.
record The number of records written to this log.
w_mbytes The number of megabytes written to this log.
w_bytes The number of bytes over and above w_mbytes written to this log.
wc_mbytes The number of megabytes written to this log since the last checkpoint.
wc_bytes The number of bytes over and above wc_mbytes written to this log since the last checkpoint.
wcount The number of times the log has been written to disk.
wcount_fill The number of times the log has been written to disk because the in-memory log record cache filled up.
rcount The number of times the log has been read from disk.
scount The number of times the log has been flushed to disk.
cur_file The current log file number.
cur_offset The byte offset in the current log file.
disk_file The log file number of the last record known to be on disk.
disk_offset The byte offset of the last record known to be on disk.
maxcommitperflush The maximum number of commits contained in a single log flush.
mincommitperflush The minimum number of commits contained in a single log flush that contained a commit.
regsize The size of the log region, in bytes.
region_wait The number of times that a thread of control was forced to wait before obtaining the log region mutex.
region_nowait The number of times that a thread of control was able to obtain the log region mutex without waiting.

More info...

log_stat_print(flags=0)¶

Displays the logging subsystem statistical information. More info...

txn_recover()¶

Returns a list of tuples (GID, TXN) of transactions prepared but still unresolved. This is used while doing environment recovery in an application using distributed transactions.

This method must be called only from a single thread at a time. It should be called after DBEnv recovery. More info...
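
A sketch of the recovery loop; how each prepared transaction is resolved depends on your distributed transaction coordinator (the helper function below is hypothetical):

for gid, txn in dbenv.txn_recover():
    if transaction_was_committed_globally(gid):  # hypothetical coordinator query
        txn.commit()
    else:
        txn.abort()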

set_verbose(which, onoff)¶

Turns specific additional informational and debugging messages in the Berkeley DB message output on and off. To see the additional messages, verbose messages must also be configured for the application. More info...

get_verbose(which)¶

Returns whether the specified which parameter is currently set or not. More info...

set_event_notify(eventFunc)¶

Configures a callback function which is called to notify the process of specific Berkeley DB events. More info...

mutex_stat(flags=0)¶

Returns a dictionary of mutex subsystem statistics with the following keys:

mutex_align The mutex alignment, in bytes.
mutex_tas_spins The number of times test-and-set mutexes will spin without blocking.
mutex_cnt The total number of mutexes configured.
mutex_free The number of mutexes currently available.
mutex_inuse The number of mutexes currently in use.
mutex_inuse_max The maximum number of mutexes ever in use.
regsize The size of the mutex region, in bytes.
region_wait The number of times that a thread of control was forced to wait before obtaining the mutex region mutex.
region_nowait The number of times that a thread of control was able to obtain the mutex region mutex without waiting.

More info...

mutex_stat_print(flags=0)¶

Displays the mutex subsystem statistical information. More info...

mutex_set_max(value)¶

Configure the total number of mutexes to allocate. More info...

mutex_get_max()¶

Returns the total number of mutexes allocated. More info...

mutex_set_increment(value)¶

Configure the number of additional mutexes to allocate. More info...

mutex_get_increment()¶

Returns the number of additional mutexes to allocate. More info...

mutex_set_align(align)¶

Set the mutex alignment, in bytes. More info...

mutex_get_align()¶

Returns the mutex alignment, in bytes. More info...

mutex_set_tas_spins(tas_spins)¶

Specify that test-and-set mutexes should spin tas_spins times without blocking. Check the default values on the Oracle web page. More info...

mutex_get_tas_spins()¶

Returns the test-and-set spin count. More info...

DBEnv Replication Manager Methods¶

This module automates many of the tasks needed to provide replication abilities in a Berkeley DB system. The module is fairly limited, but sufficient in many cases. More demanding users must use the full Base Replication API.

This module requires pthread support (in Unix), so you must compile Berkeley DB with it if you want to be able to use the Replication Manager.

repmgr_start(nthreads, flags)¶

Starts the replication manager. More info...

repmgr_site(host, port)¶

Returns a DB_SITE handle that defines a site’s host/port network address. You use the DB_SITE handle to configure and manage replication sites. More info...
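
A sketch of configuring the local site and starting the replication manager (host, port and thread count are placeholders; the environment is assumed to be opened with DB_INIT_REP and DB_THREAD, among other flags):

site = dbenv.repmgr_site("localhost", 5000)
site.set_config(db.DB_LOCAL_SITE, True)
site.close()
dbenv.repmgr_start(3, db.DB_REP_ELECTION)  # 3 threads, hold an election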

repmgr_site_by_eid(eid)¶

Returns a DB_SITE handle based on the site’s Environment ID value. You use the DB_SITE handle to configure and manage replication sites. More info...

repmgr_set_ack_policy(ack_policy)¶

Specifies how master and client sites will handle acknowledgment of replication messages which are necessary for “permanent” records. More info...

repmgr_get_ack_policy()¶

Returns the replication manager’s client acknowledgment policy. More info...

repmgr_site_list()¶

Returns a dictionary with the status of the sites currently known by the replication manager.

The keys are the Environment ID assigned by the replication manager. This is the same value that is passed to the application’s event notification function for the DB_EVENT_REP_NEWMASTER event.

The values are tuples containing the hostname, the TCP/IP port number and the link status.

More info...

repmgr_stat(flags=0)¶

Returns a dictionary with the replication manager statistics. Keys are:

perm_failed The number of times a message critical for maintaining database integrity (for example, a transaction commit), originating at this site, did not receive sufficient acknowledgement from clients, according to the configured acknowledgement policy and acknowledgement timeout.
msgs_queued The number of outgoing messages which could not be transmitted immediately, due to a full network buffer, and had to be queued for later delivery.
msgs_dropped The number of outgoing messages that were completely dropped, because the outgoing message queue was full. (Berkeley DB replication is tolerant of dropped messages, and will automatically request retransmission of any missing messages as needed.)
connection_drop The number of times an existing TCP/IP connection failed.
connect_fail The number of times an attempt to open a new TCP/IP connection failed.

More info...

repmgr_stat_print(flags=0)¶

Displays the replication manager statistical information. More info...

DBEnv Replication Methods¶

This section provides the raw methods for replication. If possible, it is recommended to use the Replication Manager.

rep_elect(nsites, nvotes)¶

Holds an election for the master of a replication group. More info...

rep_set_transport(envid, transportFunc)¶

Initializes the communication infrastructure for a database environment participating in a replicated application. More info...

rep_process_message(control, rec, envid)¶

Processes an incoming replication message sent by a member of the replication group to the local database environment.

Returns a two-element tuple.

More info...

rep_start(flags, cdata=None)¶

Configures the database environment as a client or master in a group of replicated database environments.

The DB_ENV->rep_start method is not called by most replication applications. It should only be called by applications implementing their own network transport layer, explicitly holding replication group elections and handling replication messages outside of the replication manager framework.

More info...

rep_sync()¶

Forces master synchronization to begin for this client. This method is the other half of setting the DB_REP_CONF_DELAYCLIENT flag via the DB_ENV->rep_set_config method. More info...

rep_set_config(which, onoff)¶

Configures the Berkeley DB replication subsystem. More info...

rep_get_config(which)¶

Returns whether the specified which parameter is currently set or not. More info...

rep_set_limit(bytes)¶

Sets a byte-count limit on the amount of data that will be transmitted from a site in response to a single message processed by the DB_ENV->rep_process_message method. The limit is not a hard limit, and the record that exceeds the limit is the last record to be sent. More info...

rep_get_limit()¶

Gets a byte-count limit on the amount of data that will be transmitted from a site in response to a single message processed by the DB_ENV->rep_process_message method. The limit is not a hard limit, and the record that exceeds the limit is the last record to be sent. More info...

rep_set_request(minimum, maximum)¶

Sets a threshold for the minimum and maximum time that a client waits before requesting retransmission of a missing message. Specifically, if the client detects a gap in the sequence of incoming log records or database pages, Berkeley DB will wait for at least minimum microseconds before requesting retransmission of the missing record. Berkeley DB will double that amount before requesting the same missing record again, and so on, up to a maximum threshold of maximum microseconds. More info...

rep_get_request()¶

Returns a tuple with the minimum and maximum number of microseconds a client waits before requesting retransmission. More info...

rep_set_nsites(nsites)¶

Specifies the total number of sites in a replication group. More info...

rep_get_nsites()¶

Returns the total number of sites in the replication group. More info...

rep_set_priority(priority)¶

Specifies the database environment’s priority in replication group elections. The priority must be a positive integer, or 0 if this environment cannot be a replication group master. More info...

rep_get_priority()¶

Returns the database environment priority. More info...

rep_set_timeout(which, timeout)¶

Specifies a variety of replication timeout values. More info...

rep_get_timeout(which)¶

Returns the timeout value for the specified which parameter. More info...

rep_set_clockskew(fast, slow)¶

Sets the clock skew ratio among replication group members based on the fastest and slowest measurements among the group for use with master leases. More info...

rep_get_clockskew()¶

Returns a tuple with the current clock skew values. More info...

rep_stat(flags=0)¶

Returns a dictionary with the replication subsystem statistics. Keys are:

bulk_fills The number of times the bulk buffer filled up, forcing the buffer content to be sent.
bulk_overflows The number of times a record was bigger than the entire bulk buffer, and therefore had to be sent as a singleton.
bulk_records The number of records added to a bulk buffer.
bulk_transfers The number of bulk buffers transferred (via a call to the application’s send function).
client_rerequests The number of times this client site received a “re-request” message, indicating that a request it previously sent to another client could not be serviced by that client. (Compare to client_svc_miss.)
client_svc_miss The number of “request” type messages received by this client that could not be processed, forcing the originating requester to try sending the request to the master (or another client).
client_svc_req The number of “request” type messages received by this client. (“Request” messages are usually sent from a client to the master, but a message marked with the DB_REP_ANYWHERE flag in the invocation of the application’s send function may be sent to another client instead.)
dupmasters The number of duplicate master conditions originally detected at this site.
egen The current election generation number.
election_cur_winner The election winner.
election_gen The election generation number.
election_lsn The maximum LSN of election winner.
election_nsites The number of sites responding to this site during the last election.
election_nvotes The number of votes required in the last election.
election_priority The election priority.
election_sec The number of seconds the last election took (the total election time is election_sec plus election_usec).
election_status The current election phase (0 if no election is in progress).
election_tiebreaker The election tiebreaker value.
election_usec The number of microseconds the last election took (the total election time is election_sec plus election_usec).
election_votes The number of votes received in the last election.
elections The number of elections held.
elections_won The number of elections won.
env_id The current environment ID.
env_priority The current environment priority.
gen The current generation number.
log_duplicated The number of duplicate log records received.
log_queued The number of log records currently queued.
log_queued_max The maximum number of log records ever queued at once.
log_queued_total The total number of log records queued.
log_records The number of log records received and appended to the log.
log_requested The number of times log records were missed and requested.
master The current master environment ID.
master_changes The number of times the master has changed.
max_lease_sec The number of seconds of the longest lease (the total lease time is max_lease_sec plus max_lease_usec).
max_lease_usec The number of microseconds of the longest lease (the total lease time is max_lease_sec plus max_lease_usec).
max_perm_lsn The LSN of the maximum permanent log record, or 0 if there are no permanent log records.
msgs_badgen The number of messages received with a bad generation number.
msgs_processed The number of messages received and processed.
msgs_recover The number of messages ignored due to pending recovery.
msgs_send_failures The number of failed message sends.
msgs_sent The number of messages sent.
newsites The number of new site messages received.
next_lsn In replication environments configured as masters, the next LSN expected. In replication environments configured as clients, the next LSN to be used.
next_pg The next page number we expect to receive.
nsites The number of sites used in the last election.
nthrottles Transmission limited. This indicates the number of times that data transmission was stopped to limit the amount of data sent in response to a single call to DB_ENV->rep_process_message.
outdated The number of outdated conditions detected.
pg_duplicated The number of duplicate pages received.
pg_records The number of pages received and stored.
pg_requested The number of pages missed and requested from the master.
startsync_delayed The number of times the client had to delay the start of a cache flush operation (initiated by the master for an impending checkpoint) because it was missing some previous log record(s).
startup_complete The client site has completed its startup procedures and is now handling live records from the master.
status The current replication mode. Set to DB_REP_MASTER if the environment is a replication master, DB_REP_CLIENT if the environment is a replication client, or 0 if replication is not configured.
txns_applied The number of transactions applied.
waiting_lsn The LSN of the first log record we have after missing log records being waited for, or 0 if no log records are currently missing.
waiting_pg The page number of the first page we have after missing pages being waited for, or 0 if no pages are currently missing.

More info...
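
A sketch of reading a few of these statistics; 'env' is assumed to be a DBEnv with replication already started:

    from bsddb3 import db

    stats = env.rep_stat()
    # 'status' is DB_REP_MASTER, DB_REP_CLIENT or 0, as described above.
    if stats['status'] == db.DB_REP_MASTER:
        print('master, generation', stats['gen'])
    else:
        print('client, master EID:', stats['master'])
    print('elections held/won:', stats['elections'], stats['elections_won'])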

rep_stat_print(flags=0)¶

Displays the replication subsystem statistical information. More info...


bsddb3-6.1.0/docs/html/contents.html0000644000000000000000000002166112247657276017264 0ustar rootroot00000000000000 Python Bindings for Berkeley DB 4.3 thru 6.0 — PyBSDDB 6.0.0 documentation

Python Bindings for Berkeley DB 4.3 thru 6.0¶

Introduction¶

This handcrafted package contains Python wrappers for Berkeley DB, the Open Source embedded database system. Berkeley DB is a programmatic toolkit that provides high-performance built-in database support for desktop and server applications.

The Berkeley DB access methods include B+tree, Extended Linear Hashing, Fixed and Variable-length records, and Queues. Berkeley DB provides full transactional support, database recovery, online backups, multi-threaded and multi-process access, etc.

The Python wrappers allow you to store Python string objects of any length, keyed either by strings or integers depending on the database access method. With the use of another module in the package, standard shelve-like functionality is provided, allowing you to store any picklable Python object!
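
That shelve-like module is dbshelve (see "Other Package Modules" below). A minimal sketch, with an arbitrary filename and bytes keys as required under Python 3:

    from bsddb3 import dbshelve

    d = dbshelve.open('objects.db')           # creates the file if needed
    d[b'point'] = (1, 2, {'colour': 'red'})   # any picklable Python object
    print(d[b'point'])
    d.close()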

Berkeley DB is very powerful and versatile, but it is complex to use correctly. The Oracle documentation is very complete; please review it.

Since June 2013 (release 6.0.0), this project accepts donations. Please contribute if you can.


bsddb3-6.1.0/docs/html/db.html0000644000000000000000000013036312247657276016014 0ustar rootroot00000000000000 DB — PyBSDDB 6.0.0 documentation

DB¶

Read the Oracle documentation for a better understanding.

More info...

DB Methods¶

DB(dbEnv=None, flags=0)¶

Constructor. More info...

append(data, txn=None)¶

A convenient version of put() that can be used for Recno or Queue databases. The DB_APPEND flag is automatically used, and the record number is returned. More info...

associate(secondaryDB, callback, flags=0, txn=None)¶

Used to associate secondaryDB to act as a secondary index for this (primary) database. The callback parameter should be a reference to a Python callable object that will construct and return the secondary key or DB_DONOTINDEX if the item should not be indexed. The parameters the callback will receive are the primaryKey and primaryData values. More info...
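
A sketch of a secondary index; the record layout (b"first,surname") and the filenames are assumptions of this example, not requirements:

    from bsddb3 import db

    primary = db.DB()
    primary.open('people.db', dbtype=db.DB_HASH, flags=db.DB_CREATE)

    secondary = db.DB()
    secondary.set_flags(db.DB_DUP)    # several records may share a surname
    secondary.open('by_surname.db', dbtype=db.DB_BTREE, flags=db.DB_CREATE)

    def surname_key(primary_key, primary_data):
        # Index the surname; skip malformed records instead of indexing them.
        parts = primary_data.split(b',')
        return parts[1] if len(parts) == 2 else db.DB_DONOTINDEX

    primary.associate(secondary, surname_key)
    primary.put(b'id1', b'John,Smith')
    print(secondary.pget(b'Smith'))   # -> (b'id1', b'John,Smith')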

close(flags=0)¶

Flushes cached data and closes the database. More info...

compact(start=None, stop=None, flags=0, compact_fillpercent=0, compact_pages=0, compact_timeout=0)¶

Compacts Btree and Recno access method databases, and optionally returns unused Btree, Hash or Recno database pages to the underlying filesystem.

The method returns the number of pages returned to the filesystem. More info...

consume(txn=None, flags=0)¶

For a database with the Queue access method, returns the record number and data from the first available record and deletes it from the queue. More info...

consume_wait(txn=None, flags=0)¶

For a database with the Queue access method, returns the record number and data from the first available record and deletes it from the queue. If the Queue database is empty, the thread of control will wait until there is data in the queue before returning. More info...

cursor(txn=None, flags=0)¶

Creates a cursor on the DB and returns a DBCursor object. If a transaction is passed then the cursor can only be used within that transaction, and you must be sure to close the cursor before committing the transaction. More info...

delete(key, txn=None, flags=0)¶

Removes a key/data pair from the database. More info...

exists(key, txn=None, flags=0)¶

Test if a key exists in the database. Returns True or False. More info...

fd()¶

Returns a file descriptor for the database. More info...

get(key, default=None, txn=None, flags=0, dlen=-1, doff=-1)¶

Returns the data object associated with key. If key is an integer then the DB_SET_RECNO flag is automatically set for BTree databases and the actual key and the data value are returned as a tuple. If default is given then it is returned if the key is not found in the database. Partial records can be read using dlen and doff; however, be sure not to read beyond the end of the actual data or you may get garbage. More info...
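
For instance, a sketch of the default and partial-read behaviour (the filename is an arbitrary assumption):

    from bsddb3 import db

    d = db.DB()
    d.open('demo.db', dbtype=db.DB_BTREE, flags=db.DB_CREATE)
    d.put(b'k', b'0123456789')

    print(d.get(b'missing', default=b''))   # -> b'' instead of None
    print(d.get(b'k', dlen=4, doff=2))      # -> b'2345' (partial read)
    d.close()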

pget(key, default=None, txn=None, flags=0, dlen=-1, doff=-1)¶

This method is available only on secondary databases. It will return the primary key, given the secondary one, and associated data. More info...

get_transactional()¶

Returns True if the database is transactional, False if not. More info...

get_priority()¶

Returns the cache priority for pages referenced by the DB handle. This priority value is set using the DB->set_priority() method. More info...

set_priority(priority)¶

Set the cache priority for pages referenced by the DB handle.

The priority of a page biases the replacement algorithm to be more or less likely to discard a page when space is needed in the buffer pool. The bias is temporary, and pages will eventually be discarded if they are not referenced again. The DB->set_priority() method is only advisory, and does not guarantee pages will be treated in a specific way.

The value provided must be one of the symbolic priority constants. Check the Oracle documentation.

More info...

get_dbname()¶

Returns a tuple with the filename and the database name. If there is no database name, the value returned will be None. More info...

get_open_flags()¶

Returns the current open method flags. That is, this method returns the flags that were specified when DB->open() was called. More info...

set_private(object)¶

Links an object to the DB object. This allows you to pass around an arbitrary object, for instance as callback context.

get_private()¶

Returns the object linked to the DB.

get_both(key, data, txn=None, flags=0)¶

A convenient version of get() that automatically sets the DB_GET_BOTH flag, and which will be successful only if both the key and data value are found in the database. (Can be used to verify the presence of a record in the database when duplicate keys are allowed.) More info...

get_byteswapped()¶

May be used to determine if the database was created on a machine with the same endianness as the current machine. More info...

get_size(key, txn=None)¶

Return the size of the data object associated with key.

get_type()¶

Return the database’s access method type. More info...

join(cursorList, flags=0)¶

Create and return a specialized cursor for use in performing joins on secondary indices. More info...

key_range(key, txn=None, flags=0)¶

Returns an estimate of the proportion of keys that are less than, equal to and greater than the specified key. More info...

open(filename, dbname=None, dbtype=DB_UNKNOWN, flags=0, mode=0660, txn=None)¶

Opens the database named dbname in the file named filename. The dbname argument is optional and allows applications to have multiple logical databases in a single physical file. It is an error to attempt to open a second database in a file that was not initially created using a database name. In-memory databases never intended to be shared or preserved on disk may be created by setting both the filename and dbname arguments to None. More info...
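
For example, a purely in-memory database (a sketch; the Hash access method is an arbitrary choice):

    from bsddb3 import db

    mem = db.DB()
    # filename=None and dbname=None: in-memory, never preserved on disk.
    mem.open(None, None, db.DB_HASH, db.DB_CREATE)
    mem.put(b'k', b'v')
    print(mem.get(b'k'))
    mem.close()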

put(key, data, txn=None, flags=0, dlen=-1, doff=-1)¶

Stores the key/data pair in the database. If the DB_APPEND flag is used and the database is using the Recno or Queue access method then the record number allocated to the data is returned. Partial data objects can be written using dlen and doff. More info...

remove(filename, dbname=None, flags=0)¶

Remove a database. More info...

rename(filename, dbname, newname, flags=0)¶

Rename a database. More info...

set_encrypt(passwd, flags=0)¶

Set the password used by the Berkeley DB library to perform encryption and decryption. Because databases opened within Berkeley DB environments use the password specified to the environment, it is an error to attempt to set a password in a database created within an environment. More info...

get_encrypt_flags()¶

Returns the encryption flags. More info...

set_bt_compare(compareFunc)¶

Set the B-Tree database comparison function. This can only be called once, before the database has been opened. compareFunc takes two arguments: (left key string, right key string). It must return a -1, 0 or 1 integer, similar to cmp. You can shoot your database in the foot, beware! Read the Berkeley DB docs for the full details of how the comparison function MUST behave. More info...
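
A sketch of a comparison function that sorts keys in descending byte order (any total ordering works, but it must never change for the lifetime of the database):

    from bsddb3 import db

    def reverse_cmp(left, right):
        # cmp()-style result: negative, zero or positive.
        return (left < right) - (left > right)

    d = db.DB()
    d.set_bt_compare(reverse_cmp)   # once, and before open()
    d.open('reversed.db', dbtype=db.DB_BTREE, flags=db.DB_CREATE)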

get_bt_minkey()¶

Returns the minimum number of key/data pairs intended to be stored on any single Btree leaf page. This value can be set using the DB->set_bt_minkey() method. More info...

set_bt_minkey(minKeys)¶

Set the minimum number of keys that will be stored on any single BTree page. More info...

set_cachesize(gbytes, bytes, ncache=0)¶

Set the size of the database’s shared memory buffer pool. More info...

get_cachesize()¶

Returns a tuple with the current size and composition of the cache. More info...

set_dup_compare(compareFunc)¶

Set the duplicate data item comparison function. This can only be called once, before the database has been opened. compareFunc takes two arguments: (left key string, right key string). It must return a -1, 0 or 1 integer, similar to cmp. You can shoot your database in the foot, beware! Read the Berkeley DB docs for the full details of how the comparison function MUST behave. More info...

set_get_returns_none(flag)¶

Controls what get and related methods do when a key is not found.

See the DBEnv set_get_returns_none documentation.

The previous setting is returned.

get_flags()¶

Returns the current database flags as set by the DB->set_flags() method. More info...

set_flags(flags)¶

Set additional flags on the database before opening. More info...

get_h_ffactor()¶

Returns the hash table density as set by the DB->set_h_ffactor() method. More info...

set_h_ffactor(ffactor)¶

Set the desired density within the hash table. More info...

get_h_nelem()¶

Returns the estimate of the final size of the hash table as set by the DB->set_h_nelem() method. More info...

set_h_nelem(nelem)¶

Set an estimate of the final size of the hash table. More info...

get_lorder()¶

Returns the database byte order; a byte order of 4,321 indicates a big endian order, and a byte order of 1,234 indicates a little endian order. This value is set using the DB->set_lorder() method. More info...

set_lorder(lorder)¶

Set the byte order for integers in the stored database metadata. More info...

get_pagesize()¶

Returns the database’s current page size, as set by the DB->set_pagesize() method. More info...

set_pagesize(pagesize)¶

Set the size of the pages used to hold items in the database, in bytes. More info...

get_re_delim()¶

Returns the delimiting byte, which is used to mark the end of a record in the backing source file for the Recno access method. The return value will be a numeric byte value. More info...

set_re_delim(delim)¶

Set the delimiting byte used to mark the end of a record in the backing source file for the Recno access method. You can specify a char or a numeric byte value. More info...

get_re_len()¶

Returns the length of the records held in a Queue access method database. This value can be set using the DB->set_re_len() method. More info...

set_re_len(length)¶

For the Queue access method, specify that the records are of length length. For the Recno access method, specify that the records are fixed-length, not byte delimited, and are of length length. More info...

get_re_pad()¶

Returns the pad character used for short, fixed-length records used by the Queue and Recno access methods. The method returns a byte value. More info...

set_re_pad(pad)¶

Set the padding character for short, fixed-length records for the Queue and Recno access methods. You can specify a char or a numeric byte value. More info...

get_re_source()¶

Returns the source file used by the Recno access method. This file is configured for the Recno access method using the DB->set_re_source() method. More info...

set_re_source(source)¶

Set the underlying source file for the Recno access method. More info...

get_q_extentsize()¶

Returns the number of pages in an extent. This value is used only for Queue databases and is set using the DB->set_q_extentsize() method. More info...

set_q_extentsize(extentsize)¶

Set the size of the extents used to hold pages in a Queue database, specified as a number of pages. Each extent is created as a separate physical file. If no extent size is set, the default behavior is to create only a single underlying database file. More info...

stat(flags=0, txn=None)¶

Return a dictionary containing database statistics with the following keys.

For Hash databases:

magic Magic number that identifies the file as a Hash database.
version Version of the Hash database.
nkeys Number of unique keys in the database.
ndata Number of key/data pairs in the database.
pagecnt The number of pages in the database.
pagesize Underlying Hash database page (& bucket) size.
nelem Estimated size of the hash table specified at database creation time.
ffactor Desired fill factor (number of items per bucket) specified at database creation time.
buckets Number of hash buckets.
free Number of pages on the free list.
bfree Number of bytes free on bucket pages.
bigpages Number of big key/data pages.
big_bfree Number of bytes free on big item pages.
overflows Number of overflow pages (overflow pages are pages that contain items that did not fit in the main bucket page).
ovfl_free Number of bytes free on overflow pages.
dup Number of duplicate pages.
dup_free Number of bytes free on duplicate pages.

For BTree and Recno databases:

magic Magic number that identifies the file as a Btree database.
version Version of the Btree database.
nkeys

For the Btree Access Method, the number of unique keys in the database.

For the Recno Access Method, the number of records in the database. If the database has been configured to not re-number records during deletion, the number of records may include records that have been deleted.

ndata

For the Btree Access Method, the number of key/data pairs in the database.

For the Recno Access Method, the number of records in the database. If the database has been configured to not re-number records during deletion, the number of records may include records that have been deleted.

pagecnt The number of pages in the database.
pagesize Underlying database page size.
minkey Minimum keys per page.
re_len Length of fixed-length records.
re_pad Padding byte value for fixed-length records.
levels Number of levels in the database.
int_pg Number of database internal pages.
leaf_pg Number of database leaf pages.
dup_pg Number of database duplicate pages.
over_pg Number of database overflow pages.
empty_pg Number of empty database pages.
free Number of pages on the free list.
int_pgfree Number of bytes free in database internal pages.
leaf_pgfree Number of bytes free in database leaf pages.
dup_pgfree Number of bytes free in database duplicate pages.
over_pgfree Number of bytes free in database overflow pages.

For Queue databases:

magic Magic number that identifies the file as a Queue database.
version Version of the Queue file type.
nkeys Number of records in the database.
ndata Number of records in the database.
pagesize Underlying database page size.
extentsize Underlying database extent size, in pages.
pages Number of pages in the database.
re_len Length of the records.
re_pad Padding byte value for the records.
pgfree Number of bytes free in database pages.
first_recno First undeleted record in the database.
cur_recno Last allocated record number in the database.

More info...
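
For instance ('d' is assumed to be an open Btree database handle):

    s = d.stat()
    # Available keys depend on the access method, as listed above.
    print('unique keys:', s['nkeys'])
    print('pages      :', s['pagecnt'])
    print('page size  :', s['pagesize'])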

stat_print(flags=0)¶

Displays the database statistical information. More info...

sync(flags=0)¶

Flushes any cached information to disk. More info...

truncate(txn=None, flags=0)¶

Empties the database, discarding all records it contains. The number of records discarded from the database is returned. More info...

upgrade(filename, flags=0)¶

Upgrades all of the databases included in the file filename, if necessary. More info...

verify(filename, dbname=None, outfile=None, flags=0)¶

Verifies the integrity of all databases in the file specified by the filename argument, and optionally outputs the databases’ key/data pairs to a file. More info...

DB Mapping and Compatibility Methods¶

These methods of the DB type are for implementing the Mapping Interface, as well as others for making a DB behave as much like a dictionary as possible. The main downside to using a DB as a dictionary is that you are not able to specify a transaction object. A short usage sketch follows the method list below.

DB_length() [ usage: len(db) ]

Return the number of key/data pairs in the database.

DB_subscript(key) [ usage: db[key] ]

Return the data associated with key.

DB_ass_sub(key, data) [ usage: db[key] = data ]

Assign or update a key/data pair, or delete a key/data pair if data is NULL.

keys(txn=None)¶

Return a list of all keys in the database. Warning: this method traverses the entire database so it can possibly take a long time to complete.

items(txn=None)¶

Return a list of tuples of all key/data pairs in the database. Warning: this method traverses the entire database so it can possibly take a long time to complete.

values(txn=None)¶

Return a list of all data values in the database. Warning: this method traverses the entire database so it can possibly take a long time to complete.

has_key(key, txn=None)¶

Returns True if key is present in the database.
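
The promised dictionary-style sketch (the filename is arbitrary; keys and values are bytes under Python 3):

    from bsddb3 import db

    d = db.DB()
    d.open('mapping.db', dbtype=db.DB_BTREE, flags=db.DB_CREATE)

    d[b'spam'] = b'eggs'          # DB_ass_sub
    print(d[b'spam'])             # DB_subscript
    print(len(d))                 # DB_length
    print(d.has_key(b'spam'))
    for key in d.keys():          # traverses the entire database
        print(key)
    del d[b'spam']                # delete via the mapping interface
    d.close()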


bsddb3-6.1.0/docs/html/dbtxn.html0000644000000000000000000002037412247657276016546 0ustar rootroot00000000000000 DBTxn — PyBSDDB 6.0.0 documentation

DBTxn¶

Read the Oracle documentation for a better understanding.

More info...

DBTxn Methods¶

abort()¶

Aborts the transaction. More info...

commit(flags=0)¶

Ends the transaction, committing any changes to the databases. More info...
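
Transactions are obtained from DBEnv.txn_begin(). A minimal commit-or-abort sketch; the environment directory (which must already exist) and the flags are illustrative assumptions:

    from bsddb3 import db

    env = db.DBEnv()
    env.open('/tmp/txn-env', db.DB_CREATE | db.DB_INIT_MPOOL |
             db.DB_INIT_LOCK | db.DB_INIT_LOG | db.DB_INIT_TXN)
    d = db.DB(env)
    d.open('txn.db', dbtype=db.DB_BTREE,
           flags=db.DB_CREATE | db.DB_AUTO_COMMIT)

    txn = env.txn_begin()
    try:
        d.put(b'key', b'value', txn=txn)
    except db.DBError:
        txn.abort()
        raise
    else:
        txn.commit()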

id()¶

Returns the unique transaction id associated with the specified transaction. More info...

prepare(gid)¶

Begins a two-phase commit. A global identifier parameter is required; it is a value unique across all processes involved in the commit, and must be a string of DB_GID_SIZE bytes. More info...

discard()¶

This method frees up all the per-process resources associated with the specified transaction, neither committing nor aborting the transaction. The transaction will be kept in an “unresolved” state. This call may be used only after calls to “dbenv.txn_recover()”. An “unresolved” transaction will be returned again through new calls to “dbenv.txn_recover()”.

For example, when there are multiple global transaction managers recovering transactions in a single Berkeley DB environment, any transactions returned by “dbenv.txn_recover()” that are not handled by the current global transaction manager should be discarded using “txn.discard()”.

More info...

set_timeout(timeout, flags)¶

Sets timeout values for locks or transactions for the specified transaction. More info...

get_name()¶

Returns the string associated with the transaction. More info...

set_name(name)¶

Associates the specified string with the transaction. More info...


bsddb3-6.1.0/docs/html/dbcursor.html0000644000000000000000000004535212247657276017255 0ustar rootroot00000000000000 DBCursor — PyBSDDB 6.0.0 documentation

DBCursor¶

Read the Oracle documentation for a better understanding.

More info...

DBCursor Methods¶

close()¶

Discards the cursor. If the cursor is created within a transaction then you must be sure to close the cursor before committing the transaction. More info...

count(flags=0)¶

Returns a count of the number of duplicate data items for the key referenced by the cursor. More info...

delete(flags=0)¶

Deletes the key/data pair currently referenced by the cursor. More info...

dup(flags=0)¶

Duplicates the cursor; the new cursor initially refers to the same position in the database as the original. More info...

set_priority(priority)¶

Set the cache priority for pages referenced by the DBC handle. More info...

get_priority()¶

Returns the cache priority for pages referenced by the DBC handle. More info...

put(key, data, flags=0, dlen=-1, doff=-1)¶

Stores the key/data pair into the database. Partial data records can be written using dlen and doff. More info...

get(flags, dlen=-1, doff=-1)¶

See get(key, data, flags, dlen=-1, doff=-1) below.

get(key, flags, dlen=-1, doff=-1)

See get(key, data, flags, dlen=-1, doff=-1) below.

get(key, data, flags, dlen=-1, doff=-1)

Retrieves key/data pairs from the database using the cursor. All the specific functionalities of the get method are actually provided by the various methods below, which are the preferred way to fetch data using the cursor. These generic interfaces are only provided as a convenience. Partial data records are returned if dlen and doff are used in this method and in many of the specific methods below. More info...

pget(flags, dlen=-1, doff=-1)¶

See pget(key, data, flags, dlen=-1, doff=-1) below.

pget(key, flags, dlen=-1, doff=-1)

See pget(key, data, flags, dlen=-1, doff=-1) below.

pget(key, data, flags, dlen=-1, doff=-1)

Similar to the already described get(). This method is available only on secondary databases. It will return the primary key, given the secondary one, and associated data. More info...

DBCursor Get Methods¶

These DBCursor methods are all wrappers around the get() function in the C API.

current(flags=0, dlen=-1, doff=-1)¶

Returns the key/data pair currently referenced by the cursor. More info...

get_current_size()¶

Returns the length of the data for the current entry referenced by the cursor.

first(flags=0, dlen=-1, doff=-1)¶

Position the cursor to the first key/data pair and return it. More info...

last(flags=0, dlen=-1, doff=-1)¶

Position the cursor to the last key/data pair and return it. More info...

next(flags=0, dlen=-1, doff=-1)¶

Position the cursor to the next key/data pair and return it. More info...

prev(flags=0, dlen=-1, doff=-1)¶

Position the cursor to the previous key/data pair and return it. More info...
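
A forward-traversal sketch using these methods (the filename is arbitrary; set_get_returns_none(2) makes the cursor return None at the end instead of raising DBNotFoundError):

    from bsddb3 import db

    d = db.DB()
    d.open('walk.db', dbtype=db.DB_BTREE, flags=db.DB_CREATE)
    d.set_get_returns_none(2)

    cursor = d.cursor()
    rec = cursor.first()
    while rec is not None:
        key, data = rec
        print(key, data)
        rec = cursor.next()
    cursor.close()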

consume(flags=0, dlen=-1, doff=-1)¶

For a database with the Queue access method, returns the record number and data from the first available record and deletes it from the queue.

NOTE: This method is deprecated in Berkeley DB version 3.2 in favor of the new consume method in the DB class.

get_both(key, data, flags=0)¶

Like set() but positions the cursor to the record matching both key and data. (An alias for this is set_both, which makes more sense to me...) More info...

get_recno()¶

Return the record number associated with the cursor. The database must use the BTree access method and have been created with the DB_RECNUM flag. More info...

join_item(flags=0)¶

For cursors returned from the DB.join method, returns the combined key value from the joined cursors. More info...

next_dup(flags=0, dlen=-1, doff=-1)¶

If the next key/data pair of the database is a duplicate record for the current key/data pair, the cursor is moved to the next key/data pair of the database, and that pair is returned. More info...

next_nodup(flags=0, dlen=-1, doff=-1)¶

The cursor is moved to the next non-duplicate key/data pair of the database, and that pair is returned. More info...

prev_dup(flags=0, dlen=-1, doff=-1)¶

If the previous key/data pair of the database is a duplicate data record for the current key/data pair, the cursor is moved to the previous key/data pair of the database, and that pair is returned. More info...

prev_nodup(flags=0, dlen=-1, doff=-1)¶

The cursor is moved to the previous non-duplicate key/data pair of the database, and that pair is returned. More info...

set(key, flags=0, dlen=-1, doff=-1)¶

Move the cursor to the specified key in the database and return the key/data pair found there. More info...

set_range(key, flags=0, dlen=-1, doff=-1)¶

Identical to set() except that in the case of the BTree access method, the returned key/data pair is the smallest key greater than or equal to the specified key (as determined by the comparison function), permitting partial key matches and range searches. More info...

set_recno(recno, flags=0, dlen=-1, doff=-1)¶

Move the cursor to the specific numbered record of the database, and return the associated key/data pair. The underlying database must be of type Btree and it must have been created with the DB_RECNUM flag. More info...

set_both(key, data, flags=0)¶

See get_both(). The only difference in behaviour can be disabled using set_get_returns_none(2). More info...


bsddb3-6.1.0/docs/html/introduction.html0000644000000000000000000004050112247657276020142 0ustar rootroot00000000000000 Berkeley DB 4.3 thru 6.0 Python Extension Package — PyBSDDB 6.0.0 documentation

Berkeley DB 4.3 thru 6.0 Python Extension Package¶

Introduction¶

This is a simple bit of documentation for the bsddb3.db Python extension module which wraps the Berkeley DB 4.3 thru 6.0 C library. The extension module is located in a Python package along with a few pure python modules.

It is expected that this module will be used in the following general ways by different programmers in different situations. The goals of this module are to allow all of these usage modes without making things too complex for the simple cases, and without leaving out functionality needed by the complex cases.

  1. Backwards compatibility: It is desirable for this package to be a near drop-in replacement for the bsddb module shipped with Python which is designed to wrap either DB 1.85, or the 1.85 compatibility interface. This means that there will need to be equivalent object creation functions available, (btopen(), hashopen(), and rnopen()) and the objects returned will need to have the same or at least similar methods available, (specifically, first(), last(), next(), and prev() will need to be available without the user needing to explicitly use a cursor.) All of these have been implemented in Python code in the bsddb3.__init__.py module.
  2. Simple persistent dictionary: One small step beyond the above. The programmer may be aware of and use the new DB object type directly, but only needs it from a single process and thread. The programmer should not have to be bothered with using a DBEnv, and the DB object should behave as much like a dictionary as possible.
  3. Concurrent access dictionaries: This refers to the ability to simultaneously have one writer and multiple readers of a DB (either in multiple threads or processes) and is implemented simply by creating a DBEnv with certain flags. No extra work is required to allow this access mode in bsddb3.
  4. Advanced transactional data store: This mode of use is where the full capabilities of the Berkeley DB library are called into action. The programmer will probably not use the dictionary access methods as much as the regular methods of the DB object, so he can pass transaction objects to the methods. Again, most of this advanced functionality is activated simply by opening a DBEnv with the proper flags, and also by using transactions and being aware of and reacting to deadlock exceptions, etc.

Types Provided¶

The bsddb3.db extension module provides the following object types:

  • DB: The basic database object, capable of Hash, BTree, Recno, and Queue access methods.
  • DBEnv: Provides a Database Environment for more advanced database use. Apps using transactions, logging, concurrent access, etc. will need to have an environment object.
  • DBCursor: A pointer-like object used to traverse a database.
  • DBTxn: A database transaction. Allows for multi-file commit, abort and checkpoint of database modifications.
  • DBLock: An opaque handle for a lock. See DBEnv.lock_get() and DBEnv.lock_put(). Locks are not necessarily associated with anything in the database, but can be used for any synchronization task across all threads and processes that have the DBEnv open.
  • DBSequence: Sequences provide an arbitrary number of persistent objects that return an increasing or decreasing sequence of integers. Opening a sequence handle associates it with a record in a database.
  • DBSite: Site object for Replication Manager.

Top level functions¶

version()¶

Returns a tuple with major, minor and patch level. More info...

full_version()¶

Returns a tuple with the full version string, family, release, major, minor and patch level. More info...

Exceptions Provided¶

The Berkeley DB C API uses function return codes to signal various errors. The bsddb3.db module checks for these error codes and turns them into Python exceptions, allowing you to use familiar try:... except:... constructs and not have to bother with checking every method’s return value.

Each of the error codes is turned into an exception specific to that error code, as outlined in the table below. If you are using the C API documentation then it is very easy to map the error return codes specified there to the name of the Python exception that will be raised. Simply refer to the table below.

Each exception derives from the DBError exception class so if you just want to catch generic errors you can use DBError to do it. Since DBNotFoundError is raised when a given key is not found in the database, DBNotFoundError also derives from the standard KeyError exception to help make a DB look and act like a dictionary. We do the same trick with DBKeyEmptyError.

When any of these exceptions is raised, the associated value is a tuple containing an integer representing the error code and a string for the error message itself.

DBError Base class, all others derive from this
DBCursorClosedError When trying to use a closed cursor
DBForeignConflictError DB_FOREIGN_CONFLICT
DBKeyEmptyError DB_KEYEMPTY (also derives from KeyError)
DBKeyExistError DB_KEYEXIST
DBLockDeadlockError DB_LOCK_DEADLOCK
DBLockNotGrantedError DB_LOCK_NOTGRANTED
DBNotFoundError DB_NOTFOUND (also derives from KeyError)
DBOldVersionError DB_OLD_VERSION
DBPageNotFoundError DB_PAGE_NOTFOUND
DBRepHandleDeadError DB_REP_HANDLE_DEAD
DBRepLeaseExpiredError DB_REP_LEASE_EXPIRED
DBRepLockoutError DB_REP_LOCKOUT
DBRepUnavailError DB_REP_UNAVAIL
DBRunRecoveryError DB_RUNRECOVERY
DBSecondaryBadError DB_SECONDARY_BAD
DBVerifyBadError DB_VERIFY_BAD
DBNoServerError DB_NOSERVER
DBNoServerHomeError DB_NOSERVER_HOME
DBNoServerIDError DB_NOSERVER_ID
DBInvalidArgError EINVAL
DBAccessError EACCES
DBNoSpaceError ENOSPC
DBNoMemoryError DB_BUFFER_SMALL
DBAgainError EAGAIN
DBBusyError EBUSY
DBFileExistsError EEXIST
DBNoSuchFileError ENOENT
DBPermissionsError EPERM
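
For example, a sketch of catching one specific error (the filename is an arbitrary assumption):

    from bsddb3 import db

    d = db.DB()
    d.open('demo.db', dbtype=db.DB_HASH, flags=db.DB_CREATE)
    try:
        d.delete(b'no-such-key')
    except db.DBNotFoundError as e:
        errno, message = e.args   # (error code, error message), as above
        print(errno, message)
    except db.DBError:
        pass                      # DBError catches everything else
    d.close()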

Other Package Modules¶

  • dbshelve.py: This is an implementation of the standard Python shelve concept for storing objects that uses bsddb3 specifically, and also exposes some of the more advanced methods and capabilities of the underlying DB.
  • dbtables.py: This is a module by Gregory Smith that implements a simplistic table structure on top of a DB.
  • dbutils.py: A catch-all for Python code that is generally useful when working with DBs.
  • dbobj.py: Contains subclassable versions of DB and DBEnv.
  • dbrecio.py: Contains the DBRecIO class that can be used to do partial reads and writes from a DB record using a file-like interface. Contributed by Itamar Shtull-Trauring.

Testing¶

A full unit test suite is being developed to exercise the various object types, their methods and the various usage modes described in the introduction. PyUnit is used and the tests are structured such that they can be run unattended and automated. There are currently 482 test cases! (March 2010)

Reference¶

See the C language API online documentation on Oracle’s website for more details of the functionality of each of these methods. The names of all the Python methods should be the same or similar to the names in the C API.

Berkeley DB is very powerful and versatile, but it is complex to use correctly. The Oracle documentation is very complete; please review it.

NOTE: All the methods shown below having more than one keyword argument are actually implemented using keyword argument parsing, so you can use keywords to provide optional parameters as desired. Those that have only a single optional argument are implemented without keyword parsing to help keep the implementation simple. If this is too confusing let me know and I’ll think about using keywords for everything.

bsddb3-6.1.0/docs/html/dbsite.html0000644000000000000000000001602212247657276016674 0ustar rootroot00000000000000 DBSite — PyBSDDB 6.0.0 documentation

DBSite¶

Read the Oracle documentation for a better understanding.

You use the DB_SITE handle to configure and manage replication sites.

More info...

DBSite Methods¶

close(flags=0)¶

Close a DBSite handle. More info...

get_address()¶

Returns a replication site’s network address. That is, this method returns a tuple with the site’s hostname and port. More info...

get_config()¶

Returns whether the specified which parameter is currently set. More info...

get_eid()¶

Returns a replication site’s Environment ID (EID). More info...

remove()¶

Removes the site from the replication group. If called at the master site, repmgr updates the membership database directly. If called from a client, this method causes a request to be sent to the master to perform the operation. The method then awaits confirmation. More info...

set_config(which, value)¶

Configures a replication site. More info...


bsddb3-6.1.0/docs/html/dbsequence.html0000644000000000000000000003016712247657276017546 0ustar rootroot00000000000000 DBSequence — PyBSDDB 6.0.0 documentation

DBSequence¶

Read the Oracle documentation for a better understanding.

Sequences provide an arbitrary number of persistent objects that return an increasing or decreasing sequence of integers. Opening a sequence handle associates it with a record in a database. The handle can maintain a cache of values from the database so that a database update is not needed as the application allocates a value.

More info...

DBSequence Methods¶

DBSequence(db, flags=0)¶

Constructor. More info...

open(key, txn=None, flags=0)¶

Opens the sequence represented by the key. More info...

close(flags=0)¶

Close a DBSequence handle. More info...

initial_value(value)¶

Set the initial value for a sequence. This call is only effective when the sequence is being created. More info...

get(delta=1, txn=None, flags=0)¶

Returns the next available element in the sequence and changes the sequence value by delta. More info...
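
A minimal end-to-end sketch (the filename, key and initial value are arbitrary assumptions):

    from bsddb3 import db

    d = db.DB()
    d.open('seq.db', dbtype=db.DB_BTREE, flags=db.DB_CREATE)

    seq = db.DBSequence(d)
    seq.initial_value(1000)       # only effective when the sequence is created
    seq.open(b'my-counter', flags=db.DB_CREATE)
    print(seq.get(5))             # returns 1000, advances the sequence by 5
    seq.close()
    d.close()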

get_dbp()¶

Returns the DB object associated to the DBSequence. More info...

get_key()¶

Returns the key for the sequence. More info...

remove(txn=None, flags=0)¶

Removes the sequence from the database. This method should not be called if there are other open handles on this sequence. More info...

get_cachesize()¶

Returns the current cache size. More info...

set_cachesize(size)¶

Configure the number of elements cached by a sequence handle. More info...

get_flags()¶

Returns the current flags. More info...

set_flags(flags)¶

Configure a sequence. More info...

stat(flags=0)¶

Returns a dictionary of sequence statistics with the following keys:

wait The number of times a thread of control was forced to wait on the handle mutex.
nowait The number of times that a thread of control was able to obtain handle mutex without waiting.
current The current value of the sequence in the database.
value The current cached value of the sequence.
last_value The last cached value of the sequence.
min The minimum permitted value of the sequence.
max The maximum permitted value of the sequence.
cache_size The number of values that will be cached in this handle.
flags The flags value for the sequence.

More info...

stat_print(flags=0)¶

Prints diagnostic information. More info...

get_range()¶

Returns a tuple representing the range of values in the sequence. More info...

set_range((min, max))¶

Configure a sequence range. More info...


bsddb3-6.1.0/docs/html/donate.html0000644000000000000000000001332212247657276016674 0ustar rootroot00000000000000 DONATE! — PyBSDDB 6.0.0 documentation


bsddb3-6.1.0/docs/html/history.html0000644000000000000000000001066712247657276017134 0ustar rootroot00000000000000 History — PyBSDDB 6.0.0 documentation

History¶

This module was started by Andrew Kuchling (amk) to remove the dependency on SWIG in a package by Gregory P. Smith who based his work on a similar package by Robin Dunn which wrapped Berkeley DB 2.7.x.

Development then returned full circle back to Robin Dunn, working on behalf of Digital Creations, to complete the SWIG-less wrapping of the DB 3.x API and to build a solid unit test suite. Having completed that, Robin was now busy with another project (wxPython) and Greg returned as maintainer.

Jesus Cea Avion has been the maintainer of this code since February 2008, and ported it to Python 3.x.

