Kyoto Cabinet is a library of routines for managing a database.
The database is a simple data file containing records, each of which
is a pair of a key and a value. Every key and value is a
variable-length sequence of bytes. Both binary data and character
strings can be used as keys and values. Each key must be unique
within a database. There is no concept of data tables or data
types. Records are organized in a hash table or a B+ tree.
The following access methods are provided to the database: storing
a record with a key and a value, deleting a record by a key, and
retrieving a record by a key. Moreover, traversal access to every
key is provided. These access methods are similar to those of the
original DBM (and its successors NDBM and GDBM) library defined in
the UNIX standard. Kyoto Cabinet is an alternative to DBM with
higher performance.
Each operation of the hash database has a time complexity of
"O(1)". Therefore, in theory, the performance is constant
regardless of the scale of the database. In practice, the
performance is determined by the speed of the main memory or the
storage device. If the size of the database is less than the
capacity of the main memory, the performance approaches in-memory
speed, which is faster than std::map of the STL. Of course, the
database size can be greater than the capacity of the main memory;
the upper limit is 8 exabytes. Even in that case, each operation
needs only one or two seeks of the storage device.
Each operation of the B+ tree database has a time complexity of
"O(log N)". Therefore, in theory, the performance is
logarithmic to the scale of the database. Although random access
to the B+ tree database is slower than to the hash database, the
B+ tree database supports sequential access in order of the keys,
which enables forward matching (prefix) search for strings and
range search for integers. The performance of sequential access
is much faster than that of random access.
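For example, with a file tree database a cursor can jump to a key
prefix and then scan forward in key order. The following is a brief
sketch under the assumption that a tree database file "casket.kct"
already exists and contains keys beginning with "user:":
from kyotocabinet import *
import sys

db = DB()
if not db.open("casket.kct", DB.OREADER):
    print("open error: " + str(db.error()), file=sys.stderr)
cur = db.cursor()
cur.jump("user:")  # jump to the first key not less than the prefix
while True:
    key = cur.get_key_str(True)
    if not key or not key.startswith("user:"): break
    print(key)
cur.disable()
if not db.close():
    print("close error: " + str(db.error()), file=sys.stderr)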
This library wraps the polymorphic database of the C++ API, so
you can select the internal data structure by specifying the
database name at runtime. This library works on Python 3.x (3.1 or
later) only. Python 2.x requires another dedicated package.
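For example, the same application code can handle different data
structures just by changing the database name; the following brief
sketch stores one record into several database types (the file names
are arbitrary):
from kyotocabinet import *

for path in ("-", "+", "*", "casket.kch", "casket.kct"):
    db = DB()
    if db.open(path, DB.OWRITER | DB.OCREATE):
        db.set("foo", "bar")
        print("{}: count={}".format(path, db.count()))
        db.close()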
Installation
Install the latest version of Kyoto Cabinet beforehand and get the
package of the Python binding of Kyoto Cabinet.
Enter the directory of the extracted package and then perform the
installation. If the Python 3 interpreter on your system is invoked
by a command other than "python3", edit the Makefile beforehand:
make
make check
su
make install
Symbols of the module `kyotocabinet' should be included in each
source file of application programs:
import kyotocabinet
An instance of the class `DB' is used in order to handle a
database. You can store, delete, and retrieve records with the
instance.
Example
The following code is a typical example of using a database:
from kyotocabinet import *
import sys

# create the database object
db = DB()

# open the database
if not db.open("casket.kch", DB.OWRITER | DB.OCREATE):
    print("open error: " + str(db.error()), file=sys.stderr)

# store records
if not db.set("foo", "hop") or not db.set("bar", "step") or not db.set("baz", "jump"):
    print("set error: " + str(db.error()), file=sys.stderr)

# retrieve records
value = db.get_str("foo")
if value:
    print(value)
else:
    print("get error: " + str(db.error()), file=sys.stderr)

# traverse records
cur = db.cursor()
cur.jump()
while True:
    rec = cur.get_str(True)
    if not rec: break
    print(rec[0] + ":" + rec[1])
cur.disable()

# close the database
if not db.close():
    print("close error: " + str(db.error()), file=sys.stderr)
The following code is a more complex example, which uses the
Visitor pattern:
from kyotocabinet import *
import sys

# create the database object
db = DB()

# open the database
if not db.open("casket.kch", DB.OREADER):
    print("open error: " + str(db.error()), file=sys.stderr)

# define the visitor
class VisitorImpl(Visitor):
    # call back function for an existing record
    def visit_full(self, key, value):
        print("{}:{}".format(key.decode(), value.decode()))
        return self.NOP
    # call back function for an empty record space
    def visit_empty(self, key):
        print("{} is missing".format(key.decode()), file=sys.stderr)
        return self.NOP
visitor = VisitorImpl()

# retrieve a record with visitor
if not db.accept("foo", visitor, False) or not db.accept("dummy", visitor, False):
    print("accept error: " + str(db.error()), file=sys.stderr)

# traverse records with visitor
if not db.iterate(visitor, False):
    print("iterate error: " + str(db.error()), file=sys.stderr)

# close the database
if not db.close():
    print("close error: " + str(db.error()), file=sys.stderr)
The following is another complex example, written in a style more
natural to Python:
from kyotocabinet import *
import sys

# define the functor
def dbproc(db):

    # store records
    db[b'foo'] = b'step'  # bytes is fundamental
    db['bar'] = 'hop'     # string is also ok
    db[3] = 'jump'        # number is also ok

    # retrieve a record value
    print("{}".format(db['foo'].decode()))

    # update records in transaction
    def tranproc():
        db['foo'] = 2.71828
        return True
    db.transaction(tranproc)

    # multiply a record value
    def mulproc(key, value):
        return float(value) * 2
    db.accept('foo', mulproc)

    # traverse records by iterator
    for key in db:
        print("{}:{}".format(key.decode(), db[key].decode()))

    # upcase values by iterator
    def upproc(key, value):
        return value.upper()
    db.iterate(upproc)

    # traverse records by cursor
    def curproc(cur):
        cur.jump()
        def printproc(key, value):
            print("{}:{}".format(key.decode(), value.decode()))
            return Visitor.NOP
        while cur.accept(printproc):
            cur.step()
    db.cursor_process(curproc)

# process the database by the functor
DB.process(dbproc, 'casket.kch')
License
Copyright (C) 2009-2010 FAL Labs. All rights reserved.
Kyoto Cabinet is free software: you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation, either version 3 of the
License, or any later version.
Kyoto Cabinet is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
hash_murmur(str)
Get the hash value of a string by MurMur hashing.
hash_fnv(str)
Get the hash value of a string by FNV hashing.
levdist(a, b, utf)
Calculate the levenshtein distance of two strings.
Variables
VERSION = 'x.y.z'
The version information.
Function Details
conv_bytes(obj)
Convert any object to a string.
Parameters:
obj - the object.
Returns:
the result string.
atoi(str)
Convert a string to an integer.
Parameters:
str - specifies the string.
Returns:
the integer. If the string does not contain numeric expression,
0 is returned.
atoix(str)
Convert a string with a metric prefix to an integer.
Parameters:
str - the string, which can be trailed by a binary metric prefix.
"K", "M", "G", "T",
"P", and "E" are supported. They are
case-insensitive.
Returns:
the integer. If the string does not contain numeric expression,
0 is returned. If the integer overflows the domain, INT64_MAX or
INT64_MIN is returned according to the sign.
atof(str)
Convert a string to a real number.
Parameters:
str - specifies the string.
Returns:
the real number. If the string does not contain numeric
expression, 0.0 is returned.
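The following brief sketch illustrates these helpers (the inputs are
arbitrary; per the description above, the binary prefix "K" stands
for 1024):
from kyotocabinet import *

print(atoi("123"))    # 123
print(atoi("hello"))  # 0, because there is no numeric expression
print(atoix("64K"))   # expected to be 65536 (64 * 1024)
print(atof("3.14"))   # 3.14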
hash_murmur(str)
Get the hash value of a string by MurMur hashing.
Parameters:
str - the string.
Returns:
the hash value.
hash_fnv(str)
Get the hash value of a string by FNV hashing.
Parameters:
str - the string.
Returns:
the hash value.
levdist(a, b, utf)
Calculate the levenshtein distance of two strings.
kyotocabinet.DB
__getitem__(self, key)
Alias of the get method.
__setitem__(self, key, value)
Alias of the set method.
__iter__(self)
Alias of the cursor method.
process(proc, path='*', mode=6, opts=0)
Process a database by a functor.
Class Variables
GEXCEPTIONAL = 1
generic mode: exceptional mode.
GCONCURRENT = 2
generic mode: concurrent mode.
OREADER = 1
open mode: open as a reader.
OWRITER = 2
open mode: open as a writer.
OCREATE = 4
open mode: writer creating.
OTRUNCATE = 8
open mode: writer truncating.
OAUTOTRAN = 16
open mode: auto transaction.
OAUTOSYNC = 32
open mode: auto synchronization.
ONOLOCK = 64
open mode: open without locking.
OTRYLOCK = 128
open mode: lock without blocking.
ONOREPAIR = 256
open mode: open without auto repair.
MSET = 0
merge mode: overwrite the existing value.
MADD = 1
merge mode: keep the existing value.
MREPLACE = 2
merge mode: modify the existing record only.
MAPPEND = 3
merge mode: append the new value.
Method Details
__init__(self, opts=0) (Constructor)
Create a database object.
Parameters:
opts - the optional features by bitwise-or: DB.GEXCEPTIONAL for the
exceptional mode, DB.GCONCURRENT for the concurrent mode.
Returns:
the database object.
Note:
The exceptional mode means that fatal errors caused by methods are
reported by raised exceptions. The concurrent mode means that
database operations by multiple threads are performed concurrently
without the giant VM lock. However, it has the side effect that
methods which call back into Python code, such as DB#accept,
DB#accept_bulk, DB#iterate, and Cursor#accept, are disabled.
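For example, in the exceptional mode an error during opening can be
handled with an exception handler instead of checking return values;
the sketch below deliberately uses a broad handler because the
concrete exception class raised by the binding is not described here:
from kyotocabinet import *
import sys

db = DB(DB.GEXCEPTIONAL)
try:
    db.open("casket.kch", DB.OWRITER | DB.OCREATE)
    db.set("foo", "hop")
    db.close()
except Exception as e:
    print("database error: " + str(e), file=sys.stderr)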
error(self)
Get the last happened error.
Returns:
the last happened error.
open(self, path=':', mode=6)
Open a database file.
Parameters:
path - the path of a database file. If it is "-", the
database will be a prototype hash database. If it is
"+", the database will be a prototype tree database.
If it is ":", the database will be a stash database.
If it is "*", the database will be a cache hash
database. If it is "%", the database will be a cache
tree database. If its suffix is ".kch", the database
will be a file hash database. If its suffix is ".kct",
the database will be a file tree database. If its suffix is
".kcd", the database will be a directory hash database.
If its suffix is ".kcf", the database will be a
directory tree database. If its suffix is ".kcx", the
database will be a plain text database. Otherwise, this function
fails. Tuning parameters can trail the name, separated by
"#". Each parameter is composed of the name and the
value, separated by "=". If the "type"
parameter is specified, the database type is determined by the
value in "-", "+", ":",
"*", "%", "kch", "kct",
"kcd", kcf", and "kcx". All database
types support the logging parameters of "log",
"logkinds", and "logpx". The prototype hash
database and the prototype tree database do not support any other
tuning parameter. The stash database supports "bnum".
The cache hash database supports "opts",
"bnum", "zcomp", "capcnt",
"capsiz", and "zkey". The cache tree
database supports all parameters of the cache hash database
except for capacity limitation, and supports "psiz",
"rcomp", "pccap" in addition. The file hash
database supports "apow", "fpow",
"opts", "bnum", "msiz",
"dfunit", "zcomp", and "zkey". The
file tree database supports all parameters of the file hash
database and "psiz", "rcomp",
"pccap" in addition. The directory hash database
supports "opts", "zcomp", and
"zkey". The directory tree database supports all
parameters of the directory hash database and "psiz",
"rcomp", "pccap" in addition. The plain text
database does not support any other tuning parameter.
mode - the connection mode. DB.OWRITER as a writer, DB.OREADER as a
reader. The following may be added to the writer mode by
bitwise-or: DB.OCREATE, which means it creates a new database if
the file does not exist, DB.OTRUNCATE, which means it creates a
new database regardless of whether the file exists, DB.OAUTOTRAN, which
means each updating operation is performed in implicit
transaction, DB.OAUTOSYNC, which means each updating operation is
followed by implicit synchronization with the file system. The
following may be added to both of the reader mode and the writer
mode by bitwise-or: DB.ONOLOCK, which means it opens the database
file without file locking, DB.OTRYLOCK, which means locking is
performed without blocking, DB.ONOREPAIR, which means the
database file is not repaired implicitly even if file destruction
is detected.
Returns:
true on success, or false on failure.
Note:
The tuning parameter "log" is for the original
"tune_logger" and the value specifies the path of the log
file, or "-" for the standard output, or "+"
for the standard error. "logkinds" specifies kinds of
logged messages and the value can be "debug",
"info", "warn", or "error".
"logpx" specifies the prefix of each log message.
"opts" is for "tune_options" and the value can
contain "s" for the small option, "l" for the
linear option, and "c" for the compress option.
"bnum" corresponds to "tune_bucket".
"zcomp" is for "tune_compressor" and the value
can be "zlib" for the ZLIB raw compressor,
"def" for the ZLIB deflate compressor, "gz" for
the ZLIB gzip compressor, "lzo" for the LZO compressor,
"lzma" for the LZMA compressor, or "arc" for
the Arcfour cipher. "zkey" specifies the cipher key of
the compressor. "capcnt" is for "cap_count".
"capsiz" is for "cap_size". "psiz"
is for "tune_page". "rcomp" is for
"tune_comparator" and the value can be "lex"
for the lexical comparator, "dec" for the decimal
comparator, "lexdesc" for the lexical descending
comparator, or "decdesc" for the decimal descending
comparator. "pccap" is for "tune_page_cache".
"apow" is for "tune_alignment".
"fpow" is for "tune_fbp". "msiz" is
for "tune_map". "dfunit" is for
"tune_defrag". Every opened database must be closed by
the PolyDB::close method when it is no longer in use. It is not
allowed for two or more database objects in the same process to
keep their connections to the same database file at the same time.
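For example, tuning parameters can be appended after the file name;
the following sketch opens a file hash database with the linear
option, an explicit bucket number, and a memory map size (the
concrete values are arbitrary):
from kyotocabinet import *
import sys

db = DB()
path = "casket.kch#opts=l#bnum=1000000#msiz=" + str(256 * 1024 * 1024)
if not db.open(path, DB.OWRITER | DB.OCREATE):
    print("open error: " + str(db.error()), file=sys.stderr)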
close(self)
Close the database file.
Returns:
true on success, or false on failure.
accept(self, key, visitor, writable=True)
Accept a visitor to a record.
Parameters:
key - the key.
visitor - a visitor object which implements the Visitor interface, or a
function object which receives the key and the value.
writable - true for writable operation, or false for read-only operation.
Returns:
true on success, or false on failure.
Note:
The operation for each record is performed atomically and other
threads accessing the same record are blocked. To avoid deadlock,
any explicit database operation must not be performed in this
method.
accept_bulk(self, keys, visitor, writable=True)
Accept a visitor to multiple records at once.
Parameters:
keys - specifies a sequence object of the keys.
visitor - a visitor object which implements the Visitor interface, or a
function object which receives the key and the value.
writable - true for writable operation, or false for read-only operation.
Returns:
true on success, or false on failure.
Note:
The operations for specified records are performed atomically and
other threads accessing the same records are blocked. To avoid
deadlock, any explicit database operation must not be performed in
this method.
iterate(self, visitor, writable=True)
Iterate to accept a visitor for each record.
Parameters:
visitor - a visitor object which implements the Visitor interface, or a
function object which receives the key and the value.
writable - true for writable operation, or false for read-only operation.
Returns:
true on success, or false on failure.
Note:
The whole iteration is performed atomically and other threads are
blocked. To avoid deadlock, any explicit database operation must
not be performed in this method.
set(self, key, value)
Set the value of a record.
Parameters:
key - the key.
value - the value.
Returns:
true on success, or false on failure.
Note:
If no record corresponds to the key, a new record is created. If
the corresponding record exists, the value is overwritten.
add(self, key, value)
Add a record.
Parameters:
key - the key.
value - the value.
Returns:
true on success, or false on failure.
Note:
If no record corresponds to the key, a new record is created. If
the corresponding record exists, the record is not modified and
false is returned.
replace(self, key, value)
Replace the value of a record.
Parameters:
key - the key.
value - the value.
Returns:
true on success, or false on failure.
Note:
If no record corresponds to the key, no new record is created and
false is returned. If the corresponding record exists, the value
is modified.
append(self, key, value)
Append the value of a record.
Parameters:
key - the key.
value - the value.
Returns:
true on success, or false on failure.
Note:
If no record corresponds to the key, a new record is created. If
the corresponding record exists, the given value is appended at the
end of the existing value.
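For example, the difference between set, add, replace, and append can
be seen in the following brief sketch, which assumes an open writable
database object named db:
db.set("foo", "one")       # creates the record
db.set("foo", "two")       # overwrites it
db.add("foo", "three")     # returns false: the record already exists
db.replace("foo", "four")  # succeeds: the record exists
db.replace("bar", "x")     # returns false: no such record
db.append("foo", "!")      # the stored value becomes "four!"
print(db.get_str("foo"))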
increment(self, key, num=0, orig=0)
Add a number to the numeric integer value of a record.
Parameters:
key - the key.
num - the additional number.
orig - the origin number if no record corresponds to the key. If it is
negative infinity and no record corresponds, this method fails.
If it is positive infinity, the value is set as the additional
number regardless of the current value.
Returns:
the result value, or None on failure.
Note:
The value is serialized as an 8-byte binary integer in big-endian
order, not as a decimal string. If the existing value is not 8
bytes long, this method fails.
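For example, a simple counter can be built on increment; this is a
sketch assuming an open writable database object named db (the key
name is arbitrary):
db.increment("hits", 1)         # creates the record with 1 if it is absent
db.increment("hits", 1)         # now 2
print(db.increment("hits", 0))  # reads the current value without changing it
# the stored value is an 8-byte binary integer, so get_str() would not
# return a printable decimal string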
increment_double(self, key, num=0.0, orig=0.0)
Add a number to the numeric double value of a record.
Parameters:
key - the key.
num - the additional number.
orig - the origin number if no record corresponds to the key. If it is
negative infinity and no record corresponds, this method fails.
If it is positive infinity, the value is set as the additional
number regardless of the current value.
Returns:
the result value, or None on failure.
Note:
The value is serialized as a 16-byte binary fixed-point number in
big-endian order, not as a decimal string. If the existing value is
not 16 bytes long, this method fails.
cas(self, key, oval, nval)
Perform compare-and-swap.
Parameters:
key - the key.
oval - the old value. None means that no record corresponds.
nval - the new value. None means that the record is removed.
Returns:
true on success, or false on failure.
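For example, an optimistic update with cas may look like the
following sketch, assuming an open writable database object named db:
db.set("color", "blue")
db.cas("color", "blue", "red")      # succeeds: the old value matches
db.cas("color", "blue", "green")    # fails: the value is now "red"
db.cas("color", "red", None)        # removes the record
db.cas("nothing", None, "created")  # creates the record because none exists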
remove(self, key)
Remove a record.
Parameters:
key - the key.
Returns:
true on success, or false on failure.
Note:
If no record corresponds to the key, false is returned.
get(self, key)
Retrieve the value of a record.
Parameters:
key - the key.
Returns:
the value of the corresponding record, or None on failure.
get_str(self, key)
Retrieve the value of a record.
Note:
Equal to the original DB::get method except that the return value
is string.
check(self, key)
Check the existence of a record.
Parameters:
key - the key.
Returns:
the size of the value, or -1 on failure.
seize(self, key)
Retrieve the value of a record and remove it atomically.
Parameters:
key - the key.
Returns:
the value of the corresponding record, or None on failure.
seize_str(self, key)
Retrieve the value of a record and remove it atomically.
Note:
Equal to the original DB::seize method except that the return value
is string.
set_bulk(self, recs, atomic=True)
Store records at once.
Parameters:
recs - a map object of the records to store.
atomic - true to perform all operations atomically, or false for
non-atomic operations.
Returns:
the number of stored records, or -1 on failure.
remove_bulk(self, keys, atomic=True)
Remove records at once.
Parameters:
keys - a sequence object of the keys of the records to remove.
atomic - true to perform all operations atomically, or false for
non-atomic operations.
Returns:
the number of removed records, or -1 on failure.
get_bulk(self, keys, atomic=True)
Retrieve records at once.
Parameters:
keys - a sequence object of the keys of the records to retrieve.
atomic - true to perform all operations atomically, or false for
non-atomic operations.
Returns:
a map object of retrieved records, or None on failure.
get_bulk_str(self, keys, atomic=True)
Retrieve records at once.
Note:
Equal to the original DB::get_bulk method except that the return
value is string map.
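For example, the bulk methods can be used as in the following sketch,
assuming an open writable database object named db:
num = db.set_bulk({"1": "one", "2": "two", "3": "three"})
print(num)                               # 3 on success
recs = db.get_bulk_str(["1", "2", "9"])
print(recs)                              # a map containing only the existing keys
num = db.remove_bulk(["1", "2"])
print(num)                               # 2 on success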
clear(self)
Remove all records.
Returns:
true on success, or false on failure.
synchronize(self, hard=False, proc=None)
Synchronize updated contents with the file and the device.
Parameters:
hard - true for physical synchronization with the device, or false for
logical synchronization with the file system.
proc - a postprocessor object which implements the FileProcessor
interface, or a function object which receives the same
parameters. If it is None, no postprocessing is performed.
Returns:
true on success, or false on failure.
Note:
The operation of the processor is performed atomically and other
threads accessing the same record are blocked. To avoid deadlock,
any explicit database operation must not be performed in this
method.
occupy(self, writable=False, proc=None)
Occupy the database by locking it and do something meanwhile.
Parameters:
writable - true to use writer lock, or false to use reader lock.
proc - a processor object which implements the FileProcessor interface,
or a function object which receives the same parameters. If it
is None, no processing is performed.
Returns:
true on success, or false on failure.
Note:
The operation of the processor is performed atomically and other
threads accessing the same record are blocked. To avoid deadlock,
any explicit database operation must not be performed in this
method.
copy(self, dest)
Create a copy of the database file.
Parameters:
dest - the path of the destination file.
Returns:
true on success, or false on failure.
begin_transaction(self, hard=False)
Begin transaction.
Parameters:
hard - true for physical synchronization with the device, or false for
logical synchronization with the file system.
Returns:
true on success, or false on failure.
end_transaction(self, commit=True)
End transaction.
Parameters:
commit - true to commit the transaction, or false to abort the
transaction.
Returns:
true on success, or false on failure.
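For example, the explicit transaction methods can bracket several
updates; this is a sketch assuming an open writable database object
named db:
if db.begin_transaction(False):
    try:
        ok = db.set("foo", "hop") and db.set("bar", "step")
        db.end_transaction(ok)       # commit only if both updates succeeded
    except:
        db.end_transaction(False)    # abort on an unexpected error
        raise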
transaction(self, proc, hard=False)
Perform entire transaction by a functor.
Parameters:
proc - the functor of operations during transaction. If the function
returns true, the transaction is committed. If the function
returns false or an exception is thrown, the transaction is
aborted.
hard - true for physical synchronization with the device, or false for
logical synchronization with the file system.
Returns:
true on success, or false on failure.
dump_snapshot(self, dest)
Dump records into a snapshot file.
Parameters:
dest - the name of the destination file.
Returns:
true on success, or false on failure.
load_snapshot(self, src)
Load records from a snapshot file.
Parameters:
src - the name of the source file.
Returns:
true on success, or false on failure.
count(self)
Get the number of records.
Returns:
the number of records, or -1 on failure.
size(self)
Get the size of the database file.
Returns:
the size of the database file in bytes, or -1 on failure.
path(self)
Get the path of the database file.
Returns:
the path of the database file, or None on failure.
status(self)
Get the miscellaneous status information.
Returns:
a dictionary object of the status information, or None on
failure.
match_prefix(self, prefix, max=-1)
Get keys matching a prefix string.
Parameters:
prefix - the prefix string.
max - the maximum number to retrieve. If it is negative, no limit is
specified.
Returns:
a list object of matching keys, or None on failure.
match_regex(self, regex, max=-1)
Get keys matching a regular expression string.
Parameters:
regex - the regular expression string.
max - the maximum number to retrieve. If it is negative, no limit is
specified.
Returns:
a list object of matching keys, or None on failure.
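For example, the matching methods can be used as in the following
sketch, assuming an open database object named db whose keys include
some beginning with "user:" (the prefix and the pattern are
arbitrary):
keys = db.match_prefix("user:", 10)
if keys is None:
    print("match error: " + str(db.error()))
else:
    for key in keys:
        print(key)
keys = db.match_regex("^user:[0-9]+$")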
kyotocabinet.Cursor
disable(self)
Disable the cursor.
Note:
This method should be called explicitly when the cursor is no
longer in use.
accept(self, visitor, writable=True, step=False)
Accept a visitor to the current record.
Parameters:
visitor - a visitor object which implements the Visitor interface, or a
function object which receives the key and the value.
writable - true for writable operation, or false for read-only operation.
step - true to move the cursor to the next record, or false for no move.
Returns:
true on success, or false on failure.
Note:
The operation for each record is performed atomically and other
threads accessing the same record are blocked. To avoid deadlock,
any explicit database operation must not be performed in this
method.
set_value(self, value, step=False)
Set the value of the current record.
Parameters:
value - the value.
step - true to move the cursor to the next record, or false for no move.
Returns:
true on success, or false on failure.
remove(self)
Remove the current record.
Returns:
true on success, or false on failure.
Note:
If no record corresponds to the key, false is returned. The cursor
is moved to the next record implicitly.
get_key(self, step=False)
Get the key of the current record.
Parameters:
step - true to move the cursor to the next record, or false for no move.
Returns:
the key of the current record, or None on failure.
Note:
If the cursor is invalidated, None is returned.
get_key_str(self, step=False)
Get the key of the current record.
Note:
Equal to the original Cursor::get_key method except that the return
value is string.
get_value(self, step=False)
Get the value of the current record.
Parameters:
step - true to move the cursor to the next record, or false for no move.
Returns:
the value of the current record, or None on failure.
Note:
If the cursor is invalidated, None is returned.
get_value_str(self, step=False)
Get the value of the current record.
Note:
Equal to the original Cursor::get_value method except that the
return value is string.
get(self, step=False)
Get a pair of the key and the value of the current record.
Parameters:
step - true to move the cursor to the next record, or false for no move.
Returns:
a pair of the key and the value of the current record, or None on
failure.
Note:
If the cursor is invalidated, None is returned.
get_str(self, step=False)
Get a pair of the key and the value of the current record.
Note:
Equal to the original Cursor::get method except that the return
value is string.
seize(self)
Get a pair of the key and the value of the current record and remove
it atomically.
Returns:
a pair of the key and the value of the current record, or None on
failure.
Note:
If the cursor is invalidated, None is returned. The cursor is
moved to the next record implicitly.
seize_str(self)
Get a pair of the key and the value of the current record and remove
it atomically.
Note:
Equal to the original Cursor::seize method except that the return
value is string.
jump(self, key=None)
Jump the cursor to a record for forward scan.
Parameters:
key - the key of the destination record. If it is None, the
destination is the first record.
Returns:
true on success, or false on failure.
jump_back(self, key=None)
Jump the cursor to a record for backward scan.
Parameters:
key - the key of the destination record. If it is None, the
destination is the last record.
Returns:
true on success, or false on failure.
Note:
This method is dedicated to tree databases. Some database types,
especially hash databases, will provide a dummy implementation.
step(self)
Step the cursor to the next record.
Returns:
true on success, or false on failure.
step_back(self)
Step the cursor to the previous record.
Returns:
true on success, or false on failure.
Note:
This method is dedicated to tree databases. Some database types,
especially hash databases, may provide a dummy implementation.
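For example, forward and backward scans can be combined as in the
following sketch, which assumes an open tree database object named db
(backward scan relies on key ordering):
cur = db.cursor()
# forward scan over the whole database
cur.jump()
while True:
    rec = cur.get_str(True)
    if not rec: break
    print(rec[0] + ":" + rec[1])
# backward scan from the last record
cur.jump_back()
while True:
    rec = cur.get_str()
    if not rec: break
    print(rec[0] + ":" + rec[1])
    if not cur.step_back(): break
cur.disable()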
kyotocabinet-python-1.23/setup.py:

import os
from distutils.core import *
from subprocess import *

package_name = 'Kyoto Cabinet'
package_version = '1.5'
package_description = 'a straightforward implementation of DBM'
package_author = 'FAL Labs'
package_author_email = 'info@fallabs.com'
package_url = 'http://fallabs.net/kyotocabinet/'
module_name = 'kyotocabinet'

def getcmdout(cmdargs):
    try:
        pipe = Popen(cmdargs, stdout=PIPE)
        output = pipe.communicate()[0].decode('utf-8')
    except:
        output = ""
    return output.strip()

include_dirs = []
myincopts = getcmdout(['kcutilmgr', 'conf', '-i']).split()
for incopt in myincopts:
    if incopt.startswith('-I'):
        incdir = incopt[2:]
        include_dirs.append(incdir)
if len(include_dirs) < 1:
    include_dirs = ['/usr/local/include']

extra_compile_args = []
sources = ['kyotocabinet.cc']

library_dirs = []
libraries = []
mylibopts = getcmdout(['kcutilmgr', 'conf', '-l']).split()
for libopt in mylibopts:
    if libopt.startswith('-L'):
        libdir = libopt[2:]
        library_dirs.append(libdir)
    elif libopt.startswith('-l'):
        libname = libopt[2:]
        libraries.append(libname)
if len(library_dirs) < 1:
    library_dirs = ['/usr/local/lib']
if len(libraries) < 1:
    if os.uname()[0] == "Darwin":
        libraries = ['kyotocabinet', 'z', 'stdc++', 'pthread', 'm', 'c']
    else:
        libraries = ['kyotocabinet', 'z', 'stdc++', 'rt', 'pthread', 'm', 'c']

module = Extension(module_name,
                   include_dirs = include_dirs,
                   extra_compile_args = extra_compile_args,
                   sources = sources,
                   library_dirs = library_dirs,
                   libraries = libraries)

setup(name = package_name,
      version = package_version,
      description = package_description,
      author = package_author,
      author_email = package_author_email,
      url = package_url,
      ext_modules = [module])
kyotocabinet-python-1.23/kctest.py:

#! /usr/bin/python3
# -*- coding: utf-8 -*-
#-------------------------------------------------------------------------------------------------
# The test cases of the Python binding
# Copyright (C) 2009-2010 FAL Labs
# This file is part of Kyoto Cabinet.
# This program is free software: you can redistribute it and/or modify it under the terms of
# the GNU General Public License as published by the Free Software Foundation, either version
# 3 of the License, or any later version.
# This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
# without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU General Public License for more details.
# You should have received a copy of the GNU General Public License along with this program.
# If not, see <http://www.gnu.org/licenses/>.
#-------------------------------------------------------------------------------------------------
from kyotocabinet import *
import sys
import os
import re
import random
import time
import threading
import shutil
# main routine
def main():
if len(sys.argv) < 2: usage()
if sys.argv[1] == "order":
rv = runorder()
elif sys.argv[1] == "wicked":
rv = runwicked()
elif sys.argv[1] == "misc":
rv = runmisc()
else:
usage()
return rv
# print the usage and exit
def usage():
print("{}: test cases of the Python binding".format(progname), file=sys.stderr)
print("", file=sys.stderr)
print("usage:", file=sys.stderr)
print(" {} order [-cc] [-th num] [-rnd] [-etc] path rnum".format(progname), file=sys.stderr)
print(" {} wicked [-cc] [-th num] [-it num] path rnum".format(progname), file=sys.stderr)
print(" {} misc path".format(progname), file=sys.stderr)
print("", file=sys.stderr)
exit(1)
# generate a random number
def rand(num):
if num < 2: return 0
return rndstate.randint(0, num - 1)
# print the error message of the database
def dberrprint(db, func):
err = db.error()
print("{}: {}: {}: {}: {}".format(progname, func, err.code(), err.name(), err.message()))
# print members of a database
def dbmetaprint(db, verbose):
if verbose:
status = db.status()
if status != None:
for key in status:
print("{}: {}".format(key, status[key]))
else:
print("count: {}".format(db.count()))
print("size: {}".format(db.size()))
# parse arguments of order command
def runorder():
path = None
rnum = None
gopts = 0
thnum = 1
rnd = False
etc = False
i = 2
while i < len(sys.argv):
arg = sys.argv[i]
if path == None and arg.startswith("-"):
if arg == "-cc":
gopts |= DB.GCONCURRENT
elif arg == "-th":
i += 1
if i >= len(sys.argv): usage()
thnum = int(sys.argv[i])
elif arg == "-rnd":
rnd = True
elif arg == "-etc":
etc = True
else:
usage()
elif path == None:
path = arg
elif rnum == None:
rnum = int(arg)
else:
usage()
i += 1
if path == None or rnum == None or rnum < 1 or thnum < 1: usage()
rv = procorder(path, rnum, gopts, thnum, rnd, etc)
return rv
# parse arguments of wicked command
def runwicked():
path = None
rnum = None
gopts = 0
thnum = 1
itnum = 1
i = 2
while i < len(sys.argv):
arg = sys.argv[i]
if path == None and arg.startswith("-"):
if arg == "-cc":
gopts |= DB.GCONCURRENT
elif arg == "-th":
i += 1
if i >= len(sys.argv): usage()
thnum = int(sys.argv[i])
elif arg == "-it":
i += 1
if i >= len(sys.argv): usage()
itnum = int(sys.argv[i])
else:
usage()
elif path == None:
path = arg
elif rnum == None:
rnum = int(arg)
else:
usage()
i += 1
if path == None or rnum == None or rnum < 1 or thnum < 1 or itnum < 1: usage()
rv = procwicked(path, rnum, gopts, thnum, itnum)
return rv
# parse arguments of misc command
def runmisc():
path = None
i = 2
while i < len(sys.argv):
arg = sys.argv[i]
if path == None and arg.startswith("-"):
usage()
elif path == None:
path = arg
else:
usage()
i += 1
if path == None: usage()
rv = procmisc(path)
return rv
# perform order command
def procorder(path, rnum, gopts, thnum, rnd, etc):
print("")
print(" path={} rnum={} gopts={} thnum={} rnd={} etc={}".
format(path, rnum, gopts, thnum, rnd, etc))
print("")
err = False
db = DB(gopts)
db.tune_exception_rule([ Error.SUCCESS, Error.NOIMPL, Error.MISC ])
print("opening the database:")
stime = time.time()
if not db.open(path, DB.OWRITER | DB.OCREATE | DB.OTRUNCATE):
dberrprint(db, "DB::open")
err = True
etime = time.time()
print("time: {:.3f}".format(etime - stime))
print("setting records:")
stime = time.time()
class Setter(threading.Thread):
def __init__(self, thid):
threading.Thread.__init__(self)
self.thid = thid
def run(self):
nonlocal err
base = self.thid * rnum
rng = rnum * thnum
for i in range(1, rnum + 1):
if err: break
key = "{:08d}".format(rand(rng) + 1 if rnd else base + i)
if not db.set(key, key):
dberrprint(db, "DB::set")
err = True
if self.thid < 1 and rnum > 250 and i % (rnum / 250) == 0:
print(".", end="")
if i == rnum or i % (rnum / 10) == 0:
print(" ({:08d})".format(i))
sys.stdout.flush()
threads = []
for thid in range(0, thnum):
th = Setter(thid)
th.start()
threads.append(th)
for th in threads:
th.join()
etime = time.time()
dbmetaprint(db, False)
print("time: {:.3f}".format(etime - stime))
if etc:
print("adding records:")
stime = time.time()
class Adder(threading.Thread):
def __init__(self, thid):
threading.Thread.__init__(self)
self.thid = thid
def run(self):
nonlocal err
base = self.thid * rnum
rng = rnum * thnum
for i in range(1, rnum + 1):
if err: break
key = "{:08d}".format(rand(rng) + 1 if rnd else base + i)
if not db.add(key, key) and db.error() != Error.DUPREC:
dberrprint(db, "DB::add")
err = True
if self.thid < 1 and rnum > 250 and i % (rnum / 250) == 0:
print(".", end="")
if i == rnum or i % (rnum / 10) == 0:
print(" ({:08d})".format(i))
sys.stdout.flush()
threads = []
for thid in range(0, thnum):
th = Adder(thid)
th.start()
threads.append(th)
for th in threads:
th.join()
etime = time.time()
dbmetaprint(db, False)
print("time: {:.3f}".format(etime - stime))
if etc:
print("appending records:")
stime = time.time()
class Appender(threading.Thread):
def __init__(self, thid):
threading.Thread.__init__(self)
self.thid = thid
def run(self):
nonlocal err
base = self.thid * rnum
rng = rnum * thnum
for i in range(1, rnum + 1):
if err: break
key = "{:08d}".format(rand(rng) + 1 if rnd else base + i)
if not db.append(key, key):
dberrprint(db, "DB::append")
err = True
if self.thid < 1 and rnum > 250 and i % (rnum / 250) == 0:
print(".", end="")
if i == rnum or i % (rnum / 10) == 0:
print(" ({:08d})".format(i))
sys.stdout.flush()
threads = []
for thid in range(0, thnum):
th = Appender(thid)
th.start()
threads.append(th)
for th in threads:
th.join()
etime = time.time()
dbmetaprint(db, False)
print("time: {:.3f}".format(etime - stime))
if etc and not (gopts & DB.GCONCURRENT):
print("accepting visitors:")
stime = time.time()
class Accepter(threading.Thread):
def __init__(self, thid):
threading.Thread.__init__(self)
self.thid = thid
def run(self):
nonlocal err
class VisitorImpl(Visitor):
def __init__(self):
self.cnt = 0
def visit_full(self, key, value):
self.cnt += 1
if self.cnt % 100 == 0: time.sleep(0)
rv = self.NOP
if rnd:
num = rand(7)
if num == 0:
rv = self.cnt
elif num == 1:
rv = self.REMOVE
return rv
def visit_empty(self, key):
return self.visit_full(key, key)
visitor = VisitorImpl()
base = self.thid * rnum
rng = rnum * thnum
for i in range(1, rnum + 1):
if err: break
key = "{:08d}".format(rand(rng) + 1 if rnd else base + i)
if not db.accept(key, visitor, rnd):
dberrprint(db, "DB::accept")
err = True
if self.thid < 1 and rnum > 250 and i % (rnum / 250) == 0:
print(".", end="")
if i == rnum or i % (rnum / 10) == 0:
print(" ({:08d})".format(i))
sys.stdout.flush()
threads = []
for thid in range(0, thnum):
th = Accepter(thid)
th.start()
threads.append(th)
for th in threads:
th.join()
etime = time.time()
dbmetaprint(db, False)
print("time: {:.3f}".format(etime - stime))
print("Getting records:")
stime = time.time()
class Getter(threading.Thread):
def __init__(self, thid):
threading.Thread.__init__(self)
self.thid = thid
def run(self):
nonlocal err
base = self.thid * rnum
rng = rnum * thnum
for i in range(1, rnum + 1):
if err: break
key = "{:08d}".format(rand(rng) + 1 if rnd else base + i)
if db.get(key) == None and db.error() != Error.NOREC:
dberrprint(db, "DB::get")
err = True
if self.thid < 1 and rnum > 250 and i % (rnum / 250) == 0:
print(".", end="")
if i == rnum or i % (rnum / 10) == 0:
print(" ({:08d})".format(i))
sys.stdout.flush()
threads = []
for thid in range(0, thnum):
th = Getter(thid)
th.start()
threads.append(th)
for th in threads:
th.join()
etime = time.time()
dbmetaprint(db, False)
print("time: {:.3f}".format(etime - stime))
if etc and not (gopts & DB.GCONCURRENT):
print("traversing the database by the inner iterator:")
stime = time.time()
class InnerTraverser(threading.Thread):
def __init__(self, thid):
threading.Thread.__init__(self)
self.thid = thid
def run(self):
nonlocal err
class VisitorImpl(Visitor):
def __init__(self, thid):
self.thid = thid
self.cnt = 0
def visit_full(self, key, value):
self.cnt += 1
if self.cnt % 100 == 0: time.sleep(0)
rv = self.NOP
if rnd:
num = rand(7)
if num == 0:
rv = str(self.cnt) * 2
elif num == 1:
rv = self.REMOVE
if self.thid < 1 and rnum > 250 and self.cnt % (rnum / 250) == 0:
print(".", end="")
if self.cnt == rnum or self.cnt % (rnum / 10) == 0:
print(" ({:08d})".format(self.cnt))
sys.stdout.flush()
return rv
def visit_empty(self, key):
return self.visit_full(key, key)
visitor = VisitorImpl(self.thid)
if not db.iterate(visitor, rnd):
dberrprint(db, "DB::iterate")
err = True
threads = []
for thid in range(0, thnum):
th = InnerTraverser(thid)
th.start()
threads.append(th)
for th in threads:
th.join()
if rnd: print(" (end)")
etime = time.time()
dbmetaprint(db, False)
print("time: {:.3f}".format(etime - stime))
if etc and not (gopts & DB.GCONCURRENT):
print("traversing the database by the outer cursor:")
stime = time.time()
class OuterTraverser(threading.Thread):
def __init__(self, thid):
threading.Thread.__init__(self)
self.thid = thid
def run(self):
nonlocal err
class VisitorImpl(Visitor):
def __init__(self, thid):
self.thid = thid
self.cnt = 0
def visit_full(self, key, value):
self.cnt += 1
if self.cnt % 100 == 0: time.sleep(0)
rv = self.NOP
if rnd:
num = rand(7)
if num == 0:
rv = str(self.cnt) * 2
elif num == 1:
rv = self.REMOVE
if self.thid < 1 and rnum > 250 and self.cnt % (rnum / 250) == 0:
print(".", end="")
if self.cnt == rnum or self.cnt % (rnum / 10) == 0:
print(" ({:08d})".format(self.cnt))
sys.stdout.flush()
return rv
def visit_empty(self, key):
return self.visit_full(key, key)
visitor = VisitorImpl(self.thid)
cur = db.cursor()
if not cur.jump() and db.error() != Error.NOREC:
dberrprint(db, "Cursor::jump")
err = True
while cur.accept(visitor, rnd, False):
if not cur.step() and db.error() != Error.NOREC:
dberrprint(db, "Cursor::step")
err = True
if db.error() != Error.NOREC:
dberrprint(db, "Cursor::accept")
err = True
threads = []
for thid in range(0, thnum):
th = OuterTraverser(thid)
th.start()
threads.append(th)
for th in threads:
th.join()
if rnd: print(" (end)")
etime = time.time()
dbmetaprint(db, False)
print("time: {:.3f}".format(etime - stime))
print("Removing records:")
stime = time.time()
class Remover(threading.Thread):
def __init__(self, thid):
threading.Thread.__init__(self)
self.thid = thid
def run(self):
nonlocal err
base = self.thid * rnum
rng = rnum * thnum
for i in range(1, rnum + 1):
if err: break
key = "{:08d}".format(rand(rng) + 1 if rnd else base + i)
if not db.remove(key) and db.error() != Error.NOREC:
dberrprint(db, "DB::remove")
err = True
if self.thid < 1 and rnum > 250 and i % (rnum / 250) == 0:
print(".", end="")
if i == rnum or i % (rnum / 10) == 0:
print(" ({:08d})".format(i))
sys.stdout.flush()
threads = []
for thid in range(0, thnum):
th = Remover(thid)
th.start()
threads.append(th)
for th in threads:
th.join()
etime = time.time()
dbmetaprint(db, True)
print("time: {:.3f}".format(etime - stime))
print("closing the database:")
stime = time.time()
if not db.close():
dberrprint(db, "DB::close")
err = True
etime = time.time()
print("time: {:.3f}".format(etime - stime))
print("error" if err else "ok")
print("")
return 1 if err else 0
# perform wicked command
def procwicked(path, rnum, gopts, thnum, itnum):
print("")
print(" path={} rnum={} gopts={} thnum={} itnum={}".
format(path, rnum, gopts, thnum, itnum))
print("")
err = False
db = DB(gopts)
db.tune_exception_rule([ Error.SUCCESS, Error.NOIMPL, Error.MISC ])
for itcnt in range(1, itnum + 1):
if itnum > 1: print("iteration {}:".format(itcnt))
stime = time.time()
omode = DB.OWRITER | DB.OCREATE
if itcnt == 1: omode |= DB.OTRUNCATE
if not db.open(path, omode):
dberrprint(db, "DB::open")
err = True
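# each worker thread issues rnum randomized operations (set, add, replace, append, increment, cas, remove, accept, and cursor access), sometimes wrapped in a transaction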
class Operator(threading.Thread):
def __init__(self, thid):
threading.Thread.__init__(self)
self.thid = thid
self.cnt = 0
def run(self):
nonlocal err
class VisitorImpl(Visitor):
def __init__(self):
self.cnt = 0
def visit_full(self, key, value):
self.cnt += 1
if self.cnt % 100 == 0: time.sleep(0)
rv = self.NOP
num = rand(7)
if num == 0:
rv = self.cnt
elif num == 1:
rv = self.REMOVE
return rv
def visit_empty(self, key):
return self.visit_full(key, key)
visitor = VisitorImpl()
cur = db.cursor()
rng = rnum * thnum
for i in range(1, rnum + 1):
if err: break
tran = rand(100) == 0
if tran and not db.begin_transaction(rand(rnum) == 0):
dberrprint(db, "DB::begin_transaction")
tran = False
err = True
key = "{:08d}".format(rand(rng) + 1)
op = rand(12)
if op == 0:
if not db.set(key, key):
dberrprint(db, "DB::set")
err = True
elif op == 1:
if not db.add(key, key) and db.error() != Error.DUPREC:
dberrprint(db, "DB::add")
err = True
elif op == 2:
if not db.replace(key, key) and db.error() != Error.NOREC:
dberrprint(db, "DB::replace")
err = True
elif op == 3:
if not db.append(key, key):
dberrprint(db, "DB::append")
err = True
elif op == 4:
if rand(2) == 0:
if db.increment(key, rand(10)) == None and \
db.error() != Error.LOGIC:
dberrprint(db, "DB::increment")
err = True
else:
if db.increment_double(key, rand(10000) / 1000.0) == None and \
db.error() != Error.LOGIC:
dberrprint(db, "DB::increment_double")
err = True
elif op == 5:
if not db.cas(key, key, key) and db.error() != Error.LOGIC:
dberrprint(db, "DB::cas")
err = True
elif op == 6:
if not db.remove(key) and db.error() != Error.NOREC:
dberrprint(db, "DB::remove")
err = True
elif op == 7:
if not db.accept(key, visitor, True) and \
(not (gopts & DB.GCONCURRENT) or db.error() != Error.INVALID):
dberrprint(db, "DB::accept")
err = True
elif op == 8:
if rand(10) == 0:
if rand(4) == 0:
try:
if not cur.jump_back(key) and db.error() != Error.NOREC:
dberrprint(db, "Cursor::jump_back")
err = True
except Error.XNOIMPL as e:
pass
else:
if not cur.jump(key) and db.error() != Error.NOREC:
dberrprint(db, "Cursor::jump")
err = True
cop = rand(6)
if cop == 0:
if cur.get_key() == None and db.error() != Error.NOREC:
dberrprint(db, "Cursor::get_key")
err = True
elif cop == 1:
if cur.get_value() == None and db.error() != Error.NOREC:
dberrprint(db, "Cursor::get_value")
err = True
elif cop == 2:
if cur.get() == None and db.error() != Error.NOREC:
dberrprint(db, "Cursor::get")
err = True
elif cop == 3:
if not cur.remove() and db.error() != Error.NOREC:
dberrprint(db, "Cursor::remove")
err = True
else:
if not cur.accept(visitor, True, rand(2) == 0) and \
db.error() != Error.NOREC and \
(not (gopts & DB.GCONCURRENT) or \
db.error() != Error.INVALID):
dberrprint(db, "Cursor::accept")
err = True
if rand(2) == 0:
if not cur.step() and db.error() != Error.NOREC:
dberrprint(db, "Cursor::step")
err = True
if rand(rnum // 50 + 1) == 0:
prefix = key[0:-1]
if db.match_prefix(prefix, rand(10)) == None:
dberrprint(db, "DB::match_prefix")
err = True
if rand(rnum // 50 + 1) == 0:
regex = key[0:-1]
if db.match_regex(regex, rand(10)) == None and \
db.error() != Error.LOGIC:
dberrprint(db, "DB::match_regex")
err = True
if rand(rnum // 50 + 1) == 0:
origin = key[0:-1]
if db.match_similar(origin, 3, rand(2) == 0, rand(10)) == None:
dberrprint(db, "DB::match_similar")
err = True
if rand(10) == 0:
paracur = db.cursor()
paracur.jump(key)
if not paracur.accept(visitor, True, rand(2) == 0) and \
db.error() != Error.NOREC and \
(not (gopts & DB.GCONCURRENT) or \
db.error() != Error.INVALID):
dberrprint(db, "Cursor::accept")
err = True
paracur.disable()
else:
if db.get(key) == None and db.error() != Error.NOREC:
dberrprint(db, "DB::get")
err = True
if tran and not db.end_transaction(rand(10) > 0):
dberrprint(db, "DB::begin_transaction")
tran = False
err = True
if self.thid < 1 and rnum > 250 and i % (rnum / 250) == 0:
print(".", end="")
if i == rnum or i % (rnum / 10) == 0:
print(" ({:08d})".format(i))
sys.stdout.flush()
cur.disable()
threads = []
for thid in range(0, thnum):
th = Operator(thid)
th.start()
threads.append(th)
for th in threads:
th.join()
dbmetaprint(db, itcnt == itnum)
if not db.close():
dberrprint(db, "DB::close")
err = True
etime = time.time()
print("time: {:.3f}".format(etime - stime))
print("error" if err else "ok")
print("")
return 1 if err else 0
# perform misc command
def procmisc(path):
print("")
print(" path={}".format(path))
print("")
err = False
if conv_bytes("mikio") != b"mikio" or conv_bytes(123.45) != b"123.45":
print("{}: conv_str: error".format(progname))
err = True
print("calling utility functions:")
atoi("123.456mikio")
atoix("123.456mikio")
atof("123.456mikio")
hash_murmur(path)
hash_fnv(path)
levdist(path, "casket")
dcurs = []
print("opening the database with functor:")
def myproc(db):
nonlocal err
db.tune_exception_rule([ Error.SUCCESS, Error.NOIMPL, Error.MISC ])
repr(db)
str(db)
rnum = 10000
print("setting records:")
for i in range(0, rnum):
db[i] = i
if db.count() != rnum:
dberrprint(db, "DB::count")
err = True
print("deploying cursors:")
for i in range(1, 101):
cur = db.cursor()
if not cur.jump(i):
dberrprint(db, "Cursor::jump")
err = True
num = i % 3
if num == 0:
dcurs.append(cur)
elif num == 1:
cur.disable()
repr(cur)
str(cur)
print("getting records:")
for cur in dcurs:
if cur.get_key() == None:
dberrprint(db, "Cursor::jump")
err = True
print("accepting visitor:")
def visitfunc(key, value):
rv = Visitor.NOP
num = int(key) % 3
if num == 0:
if value == None:
rv = "empty:{}".format(key.decode())
else:
rv = "full:{}".format(key.decode())
elif num == 1:
rv = Visitor.REMOVE
return rv
for i in range(0, rnum * 2):
if not db.accept(i, visitfunc, True):
dberrprint(db, "DB::access")
err = True
print("accepting visitor by iterator:")
if not db.iterate(lambda key, value: None, False):
dberrprint(db, "DB::iterate")
err = True
if not db.iterate(lambda key, value: str.upper(value.decode()), True):
dberrprint(db, "DB::iterate")
err = True
print("accepting visitor with a cursor:")
cur = db.cursor()
def curvisitfunc(key, value):
rv = Visitor.NOP
num = int(key) % 7
if num == 0:
rv = "cur:full:{}".format(key.decode())
elif num == 1:
rv = Visitor.REMOVE
return rv
try:
if not cur.jump_back():
dberrprint(db, "Cursor::jump_back")
err = True
while cur.accept(curvisitfunc, True):
cur.step_back()
except Error.XNOIMPL as e:
if not cur.jump():
dberrprint(db, "Cursor::jump")
err = True
while cur.accept(curvisitfunc, True):
cur.step()
print("accepting visitor in bulk:")
keys = []
for i in range(1, 11):
keys.append(i)
if not db.accept_bulk(keys, visitfunc, True):
dberrprint(db, "DB::accept_bulk")
err = True
recs = {}
for i in range(1, 11):
recs[i] = "[{:d}]".format(i)
if db.set_bulk(recs) < 0:
dberrprint(db, "DB::set_bulk")
err = True
if not db.get_bulk(keys):
dberrprint(db, "DB::get_bulk")
err = True
if not db.get_bulk_str(keys):
dberrprint(db, "DB::get_bulk_str")
err = True
if db.remove_bulk(keys) < 0:
dberrprint(db, "DB::remove_bulk")
err = True
print("synchronizing the database:")
class FileProcessorImpl(FileProcessor):
def process(self, path, count, size):
return True
fproc = FileProcessorImpl()
if not db.synchronize(False, fproc):
dberrprint(db, "DB::synchronize")
err = True
if not db.synchronize(False, lambda path, count, size: True):
dberrprint(db, "DB::synchronize")
err = True
if not db.occupy(False, fproc):
dberrprint(db, "DB::occupy")
err = True
print("performing transaction:")
def commitfunc():
db["tako"] = "ika"
return True
if not db.transaction(commitfunc, False):
dberrprint(db, "DB::transaction")
err = True
if db["tako"].decode() != "ika":
dberrprint(db, "DB::transaction")
err = True
del db["tako"]
cnt = db.count()
def abortfunc():
db["tako"] = "ika"
db["kani"] = "ebi"
return False
if not db.transaction(abortfunc, False):
dberrprint(db, "DB::transaction")
err = True
if db["tako"] != None or db["kani"] != None or db.count() != cnt:
dberrprint(db, "DB::transaction")
err = True
print("closing the database:")
dberr = DB.process(myproc, path, DB.OWRITER | DB.OCREATE | DB.OTRUNCATE)
if dberr != None:
print("{}: DB::process: {}".format(progname, str(dberr)))
err = True
print("accessing dead cursors:")
for cur in dcurs:
cur.get_key()
print("checking the exceptional mode:")
db = DB(DB.GEXCEPTIONAL)
try:
db.open("hoge")
except Error.XINVALID as e:
if e.code() != Error.INVALID:
dberrprint(db, "DB::open")
err = True
else:
dberrprint(db, "DB::open")
err = True
print("re-opening the database as a reader:")
db = DB()
if not db.open(path, DB.OREADER):
dberrprint(db, "DB::open")
err = True
print("traversing records by iterator:")
keys = []
for key in db:
keys.append(key)
if db.count() != len(keys):
dberrprint(db, "DB::count")
err = True
print("checking records:")
for key in keys:
if db.get(key) == None:
dberrprint(db, "DB::get")
err = True
print("closing the database:")
if not db.close():
dberrprint(db, "DB::close")
err = True
print("re-opening the database in the concurrent mode:")
db = DB(DB.GCONCURRENT)
if not db.open(path, DB.OWRITER):
dberrprint(db, "DB::open")
err = True
if not db.set("tako", "ika"):
dberrprint(db, "DB::set")
err = True
def dummyfunc(key, value):
raise
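# visitor callbacks are rejected in the concurrent mode, so accept is expected to fail with INVALID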
if db.accept(dummyfunc, "tako") or db.error() != Error.INVALID:
dberrprint(db, "DB::accept")
err = True
print("removing records by cursor:")
cur = db.cursor()
if not cur.jump():
dberrprint(db, "Cursor::jump")
err = True
cnt = 0
while True:
key = cur.get_key(True)
if not key: break
if cnt % 10 != 0:
if not db.remove(key):
dberrprint(db, "DB::remove")
err = True
cnt += 1
if db.error() != Error.NOREC:
dberrprint(db, "Cursor::get_key")
err = True
cur.disable()
print("processing a cursor by callback:")
def curprocfunc(cur):
nonlocal err
if not cur.jump():
dberrprint(db, "Cursor::jump")
err = True
value = "[{}]".format(cur.get_value_str())
if not cur.set_value(value):
dberrprint(db, "Cursor::set_value")
err = True
if cur.get_value() != value.encode():
dberrprint(db, "Cursor::get_value")
err = True
db.cursor_process(curprocfunc)
print("dumping records into snapshot:")
snappath = db.path()
if re.match(r".*\.(kch|kct)$", snappath):
snappath = snappath + ".kcss"
else:
snappath = "kctest.kcss"
if not db.dump_snapshot(snappath):
dberrprint(db, "DB::dump_snapshot")
err = True
cnt = db.count()
print("clearing the database:")
if not db.clear():
dberrprint(db, "DB::clear")
err = True
print("loading records from snapshot:")
if not db.load_snapshot(snappath):
dberrprint(db, "DB::load_snapshot")
err = True
if db.count() != cnt:
dberrprint(db, "DB::load_snapshot")
err = True
os.remove(snappath)
copypath = db.path()
suffix = None
if copypath.endswith(".kch"):
suffix = ".kch"
elif copypath.endswith(".kct"):
suffix = ".kct"
elif copypath.endswith(".kcd"):
suffix = ".kcd"
elif copypath.endswith(".kcf"):
suffix = ".kcf"
if suffix != None:
print("performing copy and merge:")
copypaths = []
for i in range(0, 2):
copypaths.append("{}.{}{}".format(copypath, i + 1, suffix))
srcary = []
for copypath in copypaths:
if not db.copy(copypath):
dberrprint(db, "DB::copy")
err = True
srcdb = DB()
if not srcdb.open(copypath, DB.OREADER):
dberrprint(srcdb, "DB::open")
err = True
srcary.append(srcdb)
if not db.merge(srcary, DB.MAPPEND):
dberrprint(db, "DB::merge")
err = True
for srcdb in srcary:
if not srcdb.close():
dberrprint(srcdb, "DB::open")
err = True
for copypath in copypaths:
shutil.rmtree(copypath, True)
try:
os.remove(copypath)
except OSError as e:
pass
print("shifting records:")
ocnt = db.count()
cnt = 0
while True:
rec = db.shift() if cnt % 2 == 0 else db.shift_str()
if rec == None: break
cnt += 1
if db.error() != Error.NOREC:
dberrprint(db, "DB::shift")
err = True
if db.count() != 0 or cnt != ocnt:
dberrprint(db, "DB::shift")
err = True
print("closing the database:")
if not db.close():
dberrprint(db, "DB::close")
err = True
repr(db)
str(db)
print("error" if err else "ok")
print("")
return 1 if err else 0
# execute main
progname = sys.argv[0]
progname = re.sub(r".*/", "", progname)
rndstate = random.Random()
exit(main())
kyotocabinet-python-1.23/COPYING

GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year>  <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program.  If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program>  Copyright (C) <year>  <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/philosophy/why-not-lgpl.html>.
kyotocabinet-python-1.23/Makefile

# Makefile for Kyoto Cabinet for Python
PACKAGE = kyotocabinet-python
VERSION = 1.23
PACKAGEDIR = $(PACKAGE)-$(VERSION)
PACKAGETGZ = $(PACKAGE)-$(VERSION).tar.gz
PYTHON = python3
RUNENV = LD_LIBRARY_PATH=.:/lib:/usr/lib:/usr/local/lib:$(HOME)/lib
all :
$(PYTHON) setup.py build
cp -f build/*/*.so .
@printf '\n'
@printf '#================================================================\n'
@printf '# Ready to install.\n'
@printf '#================================================================\n'
clean :
rm -rf casket casket* *~ *.tmp *.kcss *.so *.pyc build hoge moge tako ika
install :
$(PYTHON) setup.py install
@printf '\n'
@printf '#================================================================\n'
@printf '# Thanks for using Kyoto Cabinet for Python.\n'
@printf '#================================================================\n'
uninstall :
$(PYTHON) setup.py install --record files.tmp
xargs rm -f < files.tmp
dist :
$(MAKE) clean
cd .. && tar cvf - $(PACKAGEDIR) | gzip -c > $(PACKAGETGZ)
check :
$(MAKE) DBNAME=":" RNUM="10000" check-each
$(MAKE) DBNAME="*" RNUM="10000" check-each
$(MAKE) DBNAME="%" RNUM="10000" check-each
$(MAKE) DBNAME="casket.kch" RNUM="10000" check-each
$(MAKE) DBNAME="casket.kct" RNUM="10000" check-each
$(MAKE) DBNAME="casket.kcd" RNUM="1000" check-each
$(MAKE) DBNAME="casket.kcf" RNUM="10000" check-each
@printf '\n'
@printf '#================================================================\n'
@printf '# Checking completed.\n'
@printf '#================================================================\n'
check-each :
rm -rf casket*
$(RUNENV) $(PYTHON) kctest.py order "$(DBNAME)" "$(RNUM)"
$(RUNENV) $(PYTHON) kctest.py order -rnd "$(DBNAME)" "$(RNUM)"
$(RUNENV) $(PYTHON) kctest.py order -etc "$(DBNAME)" "$(RNUM)"
$(RUNENV) $(PYTHON) kctest.py order -rnd -etc "$(DBNAME)" "$(RNUM)"
$(RUNENV) $(PYTHON) kctest.py order -th 4 "$(DBNAME)" "$(RNUM)"
$(RUNENV) $(PYTHON) kctest.py order -th 4 -rnd "$(DBNAME)" "$(RNUM)"
$(RUNENV) $(PYTHON) kctest.py order -th 4 -etc "$(DBNAME)" "$(RNUM)"
$(RUNENV) $(PYTHON) kctest.py order -th 4 -rnd -etc "$(DBNAME)" "$(RNUM)"
$(RUNENV) $(PYTHON) kctest.py order -cc -th 4 -rnd -etc "$(DBNAME)" "$(RNUM)"
$(RUNENV) $(PYTHON) kctest.py wicked "$(DBNAME)" "$(RNUM)"
$(RUNENV) $(PYTHON) kctest.py wicked -it 4 "$(DBNAME)" "$(RNUM)"
$(RUNENV) $(PYTHON) kctest.py wicked -th 4 "$(DBNAME)" "$(RNUM)"
$(RUNENV) $(PYTHON) kctest.py wicked -th 4 -it 4 "$(DBNAME)" "$(RNUM)"
$(RUNENV) $(PYTHON) kctest.py wicked -cc -th 4 -it 4 "$(DBNAME)" "$(RNUM)"
$(RUNENV) $(PYTHON) kctest.py misc "$(DBNAME)"
rm -rf casket*
check-forever :
while true ; \
do \
$(MAKE) check || break ; \
done
doc :
$(MAKE) docclean
cp -f kyotocabinet-doc.py kyotocabinet.py
-[ -f kyotocabinet.so ] && mv -f kyotocabinet.so kyotocabinet-mod.so || true
-epydoc --name kyotocabinet --no-private --no-sourcecode -o doc -q kyotocabinet.py
-[ -f kyotocabinet-mod.so ] && mv -f kyotocabinet-mod.so kyotocabinet.so || true
rm -f kyotocabinet.py
docclean :
rm -rf doc
.PHONY: all clean install check doc
# END OF FILE
kyotocabinet-python-1.23/kyotocabinet-doc.py
#-------------------------------------------------------------------------------------------------
# Python binding of Kyoto Cabinet
# Copyright (C) 2009-2010 FAL Labs
# This file is part of Kyoto Cabinet.
# This program is free software: you can redistribute it and/or modify it under the terms of
# the GNU General Public License as published by the Free Software Foundation, either version
# 3 of the License, or any later version.
# This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
# without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU General Public License for more details.
# You should have received a copy of the GNU General Public License along with this program.
# If not, see <https://www.gnu.org/licenses/>.
#-------------------------------------------------------------------------------------------------
"""
Python 3.x Binding of Kyoto Cabinet
===================================
Introduction
------------
Kyoto Cabinet is a library of routines for managing a database. The database is a simple data file containing records, each of which is a pair of a key and a value. Every key and value is a series of bytes of variable length. Both binary data and character strings can be used as keys and values. Each key must be unique within a database. There is no concept of data tables or data types. Records are organized in a hash table or a B+ tree.
The following access methods are provided to the database: storing a record with a key and a value, deleting a record by a key, and retrieving a record by a key. Moreover, traversal access to every key is provided. These access methods are similar to those of the original DBM (and its followers: NDBM and GDBM) library defined in the UNIX standard. Kyoto Cabinet is an alternative to DBM because of its higher performance.
Each operation of the hash database has a time complexity of "O(1)". Therefore, in theory, the performance is constant regardless of the scale of the database. In practice, the performance is determined by the speed of the main memory or the storage device. If the size of the database is less than the capacity of the main memory, operations run at in-memory speed, which is faster than std::map of the STL. Of course, the database size can be greater than the capacity of the main memory; the upper limit is 8 exabytes. Even in that case, each operation needs only one or two seeks of the storage device.
Each operation of the B+ tree database has a time complexity of "O(log N)". Therefore, in theory, the performance is logarithmic to the scale of the database. Although random access of the B+ tree database is slower than that of the hash database, the B+ tree database supports sequential access in order of the keys, which enables forward matching search for strings and range search for integers. The performance of sequential access is much faster than that of random access.
This library wraps the polymorphic database of the C++ API, so you can select the internal data structure by specifying the database name at runtime. This library works on Python 3.x (3.1 or later) only. Python 2.x requires another dedicated package.
Installation
------------
Install the latest version of Kyoto Cabinet beforehand, then get the package of the Python binding of Kyoto Cabinet.
Enter the directory of the extracted package and perform installation. If your Python 3 interpreter is available under a name other than "python3", edit the Makefile beforehand.::

  make
  make check
  su
  make install

The symbols of the module `kyotocabinet' should be imported into each source file of application programs.::

  import kyotocabinet

An instance of the class `DB' is used to handle a database. You can store, delete, and retrieve records with the instance.
Example
-------
The following code is a typical example of using a database.::
  from kyotocabinet import *
  import sys
  # create the database object
  db = DB()
  # open the database
  if not db.open("casket.kch", DB.OWRITER | DB.OCREATE):
    print("open error: " + str(db.error()), file=sys.stderr)
  # store records
  if not db.set("foo", "hop") or \
     not db.set("bar", "step") or \
     not db.set("baz", "jump"):
    print("set error: " + str(db.error()), file=sys.stderr)
  # retrieve records
  value = db.get_str("foo")
  if value:
    print(value)
  else:
    print("get error: " + str(db.error()), file=sys.stderr)
  # traverse records
  cur = db.cursor()
  cur.jump()
  while True:
    rec = cur.get_str(True)
    if not rec: break
    print(rec[0] + ":" + rec[1])
  cur.disable()
  # close the database
  if not db.close():
    print("close error: " + str(db.error()), file=sys.stderr)
The following code is a more complex example, which uses the Visitor pattern.::
  from kyotocabinet import *
  import sys
  # create the database object
  db = DB()
  # open the database
  if not db.open("casket.kch", DB.OREADER):
    print("open error: " + str(db.error()), file=sys.stderr)
  # define the visitor
  class VisitorImpl(Visitor):
    # call back function for an existing record
    def visit_full(self, key, value):
      print("{}:{}".format(key.decode(), value.decode()))
      return self.NOP
    # call back function for an empty record space
    def visit_empty(self, key):
      print("{} is missing".format(key.decode()), file=sys.stderr)
      return self.NOP
  visitor = VisitorImpl()
  # retrieve a record with visitor
  if not db.accept("foo", visitor, False) or \
     not db.accept("dummy", visitor, False):
    print("accept error: " + str(db.error()), file=sys.stderr)
  # traverse records with visitor
  if not db.iterate(visitor, False):
    print("iterate error: " + str(db.error()), file=sys.stderr)
  # close the database
  if not db.close():
    print("close error: " + str(db.error()), file=sys.stderr)
The following code is also a complex example, written in a more Pythonic style.::
  from kyotocabinet import *
  import sys
  # define the functor
  def dbproc(db):
    # store records
    db[b'foo'] = b'step'; # bytes is fundamental
    db['bar'] = 'hop';    # string is also ok
    db[3] = 'jump';       # number is also ok
    # retrieve a record value
    print("{}".format(db['foo'].decode()))
    # update records in transaction
    def tranproc():
      db['foo'] = 2.71828
      return True
    db.transaction(tranproc)
    # multiply a record value
    def mulproc(key, value):
      return float(value) * 2
    db.accept('foo', mulproc)
    # traverse records by iterator
    for key in db:
      print("{}:{}".format(key.decode(), db[key].decode()))
    # upcase values by iterator
    def upproc(key, value):
      return value.upper()
    db.iterate(upproc)
    # traverse records by cursor
    def curproc(cur):
      cur.jump()
      def printproc(key, value):
        print("{}:{}".format(key.decode(), value.decode()))
        return Visitor.NOP
      while cur.accept(printproc):
        cur.step()
    db.cursor_process(curproc)
  # process the database by the functor
  DB.process(dbproc, 'casket.kch')
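The following is an additional minimal sketch (not one of the original examples) showing the bulk operations and prefix matching documented in the API reference below; the file name and the record contents are arbitrary.::

  from kyotocabinet import *
  import sys
  db = DB()
  if not db.open("casket.kch", DB.OWRITER | DB.OCREATE):
    print("open error: " + str(db.error()), file=sys.stderr)
  # store several records at once
  db.set_bulk({"2021-01": "120", "2021-02": "150", "2022-01": "90"})
  # retrieve several records at once, as a map of strings
  print(db.get_bulk_str(["2021-01", "2022-01"]))
  # list keys beginning with a prefix
  for key in db.match_prefix("2021-"):
    print(key)
  if not db.close():
    print("close error: " + str(db.error()), file=sys.stderr)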
License
-------
Copyright (C) 2009-2010 FAL Labs. All rights reserved.
Kyoto Cabinet is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or any later version.
Kyoto Cabinet is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
"""
VERSION = "x.y.z"
"""The version information."""
def conv_bytes(obj):
"""
Convert any object to a byte array.
@param obj: the object.
@return: the result byte array.
"""
def atoi(str):
"""
Convert a string to an integer.
@param str: specifies the string.
@return: the integer. If the string does not contain a numeric expression, 0 is returned.
"""
def atoix(str):
"""
Convert a string with a metric prefix to an integer.
@param str: the string, which may be followed by a binary metric prefix. "K", "M", "G", "T", "P", and "E" are supported. They are case-insensitive.
@return: the integer. If the string does not contain a numeric expression, 0 is returned. If the integer overflows the domain, INT64_MAX or INT64_MIN is returned according to the sign.
"""
def atof(str):
"""
Convert a string to a real number.
@param str: specifies the string.
@return: the real number. If the string does not contain a numeric expression, 0.0 is returned.
"""
def hash_murmur(str):
"""
Get the hash value of a string by MurMur hashing.
@param str: the string.
@return: the hash value.
"""
def hash_fnv(str):
"""
Get the hash value of a string by FNV hashing.
@param str: the string.
@return: the hash value.
"""
def levdist(a, b, utf):
"""
Calculate the Levenshtein distance of two strings.
@param a: one string.
@param b: the other string.
@param utf: flag to treat the strings as UTF-8 strings.
@return: the Levenshtein distance.
"""
class Error:
"""
Error data.
"""
SUCCESS = 0
"""error code: success."""
NOIMPL = 1
"""error code: not implemented."""
INVALID = 2
"""error code: invalid operation."""
NOREPOS = 3
"""error code: no repository."""
NOPERM = 4
"""error code: no permission."""
BROKEN = 5
"""error code: broken file."""
DUPREC = 6
"""error code: record duplication."""
NOREC = 7
"""error code: no record."""
LOGIC = 8
"""error code: logical inconsistency."""
SYSTEM = 9
"""error code: system error."""
MISC = 15
"""error code: miscellaneous error."""
def __init__(self, code, message):
"""
Create an error object.
@param code: the error code.
@param message: the supplement message.
@return: the error object.
"""
def set(self, code, message):
"""
Set the error information.
@param code: the error code.
@param message: the supplement message.
@return: always None.
"""
def code(self):
"""
Get the error code.
@return: the error code.
"""
def name(self):
"""
Get the readable string of the code.
@return: the readable string of the code.
"""
def message(self):
"""
Get the supplement message.
@return: the supplement message.
"""
def __repr__(self):
"""
Get the representing expression.
@return: the representing expression.
"""
def __str__(self):
"""
Get the string expression.
@return: the string expression.
"""
def __cmp__(self, right):
"""
Generic comparison operator.
@param right: an error object or an error code.
@return: boolean value of the comparison result.
"""
class Visitor:
"""
Interface to access a record.
"""
NOP = "(magic data)"
"""magic data: no operation."""
REMOVE = "(magic data)"
"""magic data: remove the record."""
def visit_full(self, key, value):
"""
Visit a record.
@param key: the key.
@param value: the value.
@return: If it is a string, the value is replaced by the content. If it is Visitor.NOP, nothing is modified. If it is Visitor.REMOVE, the record is removed.
"""
def visit_empty(self, key):
"""
Visit an empty record space.
@param key: the key.
@return: If it is a string, the value is replaced by the content. If it is Visitor.NOP or Visitor.REMOVE, nothing is modified.
"""
class FileProcessor:
"""
Interface to process the database file.
"""
def process(self, path, count, size):
"""
Process the database file.
@param path: the path of the database file.
@param count: the number of records.
@param size: the size of the available region.
@return: true on success, or false on failure.
"""
class Cursor:
"""
Interface of cursor to indicate a record.
"""
def disable(self):
"""
Disable the cursor.
@return: always None.
@note: This method should be called explicitly when the cursor is no longer in use.
"""
def accept(self, visitor, writable = True, step = False):
"""
Accept a visitor to the current record.
@param visitor: a visitor object which implements the Visitor interface, or a function object which receives the key and the value.
@param writable: true for writable operation, or false for read-only operation.
@param step: true to move the cursor to the next record, or false for no move.
@return: true on success, or false on failure.
@note: The operation for each record is performed atomically and other threads accessing the same record are blocked. To avoid deadlock, any explicit database operation must not be performed in this method.
"""
def set_value(self, value, step = False):
"""
Set the value of the current record.
@param value: the value.
@param step: true to move the cursor to the next record, or false for no move.
@return: true on success, or false on failure.
"""
def remove(self):
"""
Remove the current record.
@return: true on success, or false on failure.
@note: If no record corresponds to the key, false is returned. The cursor is moved to the next record implicitly.
"""
def get_key(self, step = False):
"""
Get the key of the current record.
@param step: true to move the cursor to the next record, or false for no move.
@return: the key of the current record, or None on failure.
@note: If the cursor is invalidated, None is returned.
"""
def get_key_str(self, step = False):
"""
Get the key of the current record.
@note: Equal to the original Cursor::get_key method except that the return value is string.
"""
def get_value(self, step = False):
"""
Get the value of the current record.
@param step: true to move the cursor to the next record, or false for no move.
@return: the value of the current record, or None on failure.
@note: If the cursor is invalidated, None is returned.
"""
def get_value_str(self, step = False):
"""
Get the value of the current record.
@note: Equal to the original Cursor::get_value method except that the return value is string.
"""
def get(self, step = False):
"""
Get a pair of the key and the value of the current record.
@param step: true to move the cursor to the next record, or false for no move.
@return: a pair of the key and the value of the current record, or None on failure.
@note: If the cursor is invalidated, None is returned.
"""
def get_str(self, step = False):
"""
Get a pair of the key and the value of the current record.
@note: Equal to the original Cursor::get method except that the return value is string.
"""
def seize(self):
"""
Get a pair of the key and the value of the current record and remove it atomically.
@return: a pair of the key and the value of the current record, or None on failure.
@note: If the cursor is invalidated, None is returned. The cursor is moved to the next record implicitly.
"""
def seize_str(self):
"""
Get a pair of the key and the value of the current record and remove it atomically.
@note: Equal to the original Cursor::seize method except that the return value is string.
"""
def jump(self, key = None):
"""
Jump the cursor to a record for forward scan.
@param key: the key of the destination record. If it is None, the destination is the first record.
@return: true on success, or false on failure.
"""
def jump_back(self, key = None):
"""
Jump the cursor to a record for backward scan.
@param key: the key of the destination record. If it is None, the destination is the last record.
@return: true on success, or false on failure.
@note: This method is dedicated to tree databases. Some database types, especially hash databases, will provide a dummy implementation.
"""
def step(self):
"""
Step the cursor to the next record.
@return: true on success, or false on failure.
"""
def step_back(self):
"""
Step the cursor to the previous record.
@return: true on success, or false on failure.
@note: This method is dedicated to tree databases. Some database types, especially hash databases, may provide a dummy implementation.
"""
def db(self):
"""
Get the database object.
@return: the database object.
"""
def error(self):
"""
Get the last happened error.
@return: the last happened error.
"""
def __repr__(self):
"""
Get the representing expression.
@return: the representing expression.
"""
def __str__(self):
"""
Get the string expression.
@return: the string expression.
"""
def __next__(self):
"""
Get the next key.
@return: the next key, or None on failure.
"""
class DB:
"""
Interface of database abstraction.
"""
GEXCEPTIONAL = 1
"""generic mode: exceptional mode."""
GCONCURRENT = 2
"""generic mode: concurrent mode."""
OREADER = 1
"""open mode: open as a reader."""
OWRITER = 2
"""open mode: open as a writer."""
OCREATE = 4
"""open mode: writer creating."""
OTRUNCATE = 8
"""open mode: writer truncating."""
OAUTOTRAN = 16
"""open mode: auto transaction."""
OAUTOSYNC = 32
"""open mode: auto synchronization."""
ONOLOCK = 64
"""open mode: open without locking."""
OTRYLOCK = 128
"""open mode: lock without blocking."""
ONOREPAIR = 256
"""open mode: open without auto repair."""
MSET = 0
"""merge mode: overwrite the existing value."""
MADD = 1
"""merge mode: keep the existing value."""
MREPLACE = 2
"""merge mode: modify the existing record only."""
MAPPEND = 3
"""merge mode: append the new value."""
def __init__(self, opts = 0):
"""
Create a database object.
@param opts: the optional features by bitwise-or: DB.GEXCEPTIONAL for the exceptional mode, DB.GCONCURRENT for the concurrent mode.
@return: the database object.
@note: The exceptional mode means that fatal errors caused by methods are reported by raised exceptions. The concurrent mode means that database operations by multiple threads are performed concurrently without the giant VM lock. However, it has a side effect: methods that call back into Python code, such as DB#accept, DB#accept_bulk, DB#iterate, and Cursor#accept, are disabled.
"""
def error(self):
"""
Get the last happened error.
@return: the last happened error.
"""
def open(self, path = ":", mode = OWRITER | OCREATE):
"""
Open a database file.
@param path: the path of a database file. If it is "-", the database will be a prototype hash database. If it is "+", the database will be a prototype tree database. If it is ":", the database will be a stash database. If it is "*", the database will be a cache hash database. If it is "%", the database will be a cache tree database. If its suffix is ".kch", the database will be a file hash database. If its suffix is ".kct", the database will be a file tree database. If its suffix is ".kcd", the database will be a directory hash database. If its suffix is ".kcf", the database will be a directory tree database. If its suffix is ".kcx", the database will be a plain text database. Otherwise, this function fails. Tuning parameters can trail the name, separated by "#". Each parameter is composed of the name and the value, separated by "=". If the "type" parameter is specified, the database type is determined by the value in "-", "+", ":", "*", "%", "kch", "kct", "kcd", kcf", and "kcx". All database types support the logging parameters of "log", "logkinds", and "logpx". The prototype hash database and the prototype tree database do not support any other tuning parameter. The stash database supports "bnum". The cache hash database supports "opts", "bnum", "zcomp", "capcnt", "capsiz", and "zkey". The cache tree database supports all parameters of the cache hash database except for capacity limitation, and supports "psiz", "rcomp", "pccap" in addition. The file hash database supports "apow", "fpow", "opts", "bnum", "msiz", "dfunit", "zcomp", and "zkey". The file tree database supports all parameters of the file hash database and "psiz", "rcomp", "pccap" in addition. The directory hash database supports "opts", "zcomp", and "zkey". The directory tree database supports all parameters of the directory hash database and "psiz", "rcomp", "pccap" in addition. The plain text database does not support any other tuning parameter.
@param mode: the connection mode. DB.OWRITER as a writer, DB.OREADER as a reader. The following may be added to the writer mode by bitwise-or: DB.OCREATE, which means it creates a new database if the file does not exist, DB.OTRUNCATE, which means it creates a new database regardless if the file exists, DB.OAUTOTRAN, which means each updating operation is performed in implicit transaction, DB.OAUTOSYNC, which means each updating operation is followed by implicit synchronization with the file system. The following may be added to both of the reader mode and the writer mode by bitwise-or: DB.ONOLOCK, which means it opens the database file without file locking, DB.OTRYLOCK, which means locking is performed without blocking, DB.ONOREPAIR, which means the database file is not repaired implicitly even if file destruction is detected.
@return: true on success, or false on failure.
@note: The tuning parameter "log" is for the original "tune_logger" and the value specifies the path of the log file, or "-" for the standard output, or "+" for the standard error. "logkinds" specifies kinds of logged messages and the value can be "debug", "info", "warn", or "error". "logpx" specifies the prefix of each log message. "opts" is for "tune_options" and the value can contain "s" for the small option, "l" for the linear option, and "c" for the compress option. "bnum" corresponds to "tune_bucket". "zcomp" is for "tune_compressor" and the value can be "zlib" for the ZLIB raw compressor, "def" for the ZLIB deflate compressor, "gz" for the ZLIB gzip compressor, "lzo" for the LZO compressor, "lzma" for the LZMA compressor, or "arc" for the Arcfour cipher. "zkey" specifies the cipher key of the compressor. "capcnt" is for "cap_count". "capsiz" is for "cap_size". "psiz" is for "tune_page". "rcomp" is for "tune_comparator" and the value can be "lex" for the lexical comparator, "dec" for the decimal comparator, "lexdesc" for the lexical descending comparator, or "decdesc" for the decimal descending comparator. "pccap" is for "tune_page_cache". "apow" is for "tune_alignment". "fpow" is for "tune_fbp". "msiz" is for "tune_map". "dfunit" is for "tune_defrag". Every opened database must be closed by the PolyDB::close method when it is no longer in use. It is not allowed for two or more database objects in the same process to keep their connections to the same database file at the same time.
"""
def close(self):
"""
Close the database file.
@return: true on success, or false on failure.
"""
def accept(self, key, visitor, writable = True):
"""
Accept a visitor to a record.
@param key: the key.
@param visitor: a visitor object which implements the Visitor interface, or a function object which receives the key and the value.
@param writable: true for writable operation, or false for read-only operation.
@return: true on success, or false on failure.
@note: The operation for each record is performed atomically and other threads accessing the same record are blocked. To avoid deadlock, any explicit database operation must not be performed in this method.
"""
def accept_bulk(self, keys, visitor, writable = True):
"""
Accept a visitor to multiple records at once.
@param keys: specifies a sequence object of the keys.
@param visitor: a visitor object which implements the Visitor interface, or a function object which receives the key and the value.
@param writable: true for writable operation, or false for read-only operation.
@return: true on success, or false on failure.
@note: The operations for specified records are performed atomically and other threads accessing the same records are blocked. To avoid deadlock, any explicit database operation must not be performed in this method.
"""
def iterate(self, visitor, writable = True):
"""
Iterate to accept a visitor for each record.
@param visitor: a visitor object which implements the Visitor interface, or a function object which receives the key and the value.
@param writable: true for writable operation, or false for read-only operation.
@return: true on success, or false on failure.
@note: The whole iteration is performed atomically and other threads are blocked. To avoid deadlock, any explicit database operation must not be performed in this method.
"""
def set(self, key, value):
"""
Set the value of a record.
@param key: the key.
@param value: the value.
@return: true on success, or false on failure.
@note: If no record corresponds to the key, a new record is created. If the corresponding record exists, the value is overwritten.
"""
def add(self, key, value):
"""
Add a record.
@param key: the key.
@param value: the value.
@return: true on success, or false on failure.
@note: If no record corresponds to the key, a new record is created. If the corresponding record exists, the record is not modified and false is returned.
"""
def replace(self, key, value):
"""
Replace the value of a record.
@param key: the key.
@param value: the value.
@return: true on success, or false on failure.
@note: If no record corresponds to the key, no new record is created and false is returned. If the corresponding record exists, the value is modified.
"""
def append(self, key, value):
"""
Append the value of a record.
@param key: the key.
@param value: the value.
@return: true on success, or false on failure.
@note: If no record corresponds to the key, a new record is created. If the corresponding record exists, the given value is appended at the end of the existing value.
"""
def increment(self, key, num = 0, orig = 0):
"""
Add a number to the numeric integer value of a record.
@param key: the key.
@param num: the additional number.
@param orig: the origin number if no record corresponds to the key. If it is negative infinity and no record corresponds, this method fails. If it is positive infinity, the value is set as the additional number regardless of the current value.
@return: the result value, or None on failure.
@note: The value is serialized as an 8-byte binary integer in big-endian order, not a decimal string. If the existing value is not 8 bytes long, this method fails.
"""
def increment_double(self, key, num = 0.0, orig = 0.0):
"""
Add a number to the numeric double value of a record.
@param key: the key.
@param num: the additional number.
@param orig: the origin number if no record corresponds to the key. If it is negative infinity and no record corresponds, this method fails. If it is positive infinity, the value is set as the additional number regardless of the current value.
@return: the result value, or None on failure.
@note: The value is serialized as a 16-byte binary fixed-point number in big-endian order, not a decimal string. If the existing value is not 16 bytes long, this method fails.
"""
def cas(self, key, oval, nval):
"""
Perform compare-and-swap.
@param key: the key.
@param oval: the old value. None means that no record corresponds.
@param nval: the new value. None means that the record is removed.
@return: true on success, or false on failure.
"""
def remove(self, key):
"""
Remove a record.
@param key: the key.
@return: true on success, or false on failure.
@note: If no record corresponds to the key, false is returned.
"""
def get(self, key):
"""
Retrieve the value of a record.
@param key: the key.
@return: the value of the corresponding record, or None on failure.
"""
def get_str(self, key):
"""
Retrieve the value of a record.
@note: Equal to the original DB::get method except that the return value is string.
"""
def check(self, key):
"""
Check the existence of a record.
@param key: the key.
@return: the size of the value, or -1 on failure.
"""
def seize(self, key):
"""
Retrieve the value of a record and remove it atomically.
@param key: the key.
@return: the value of the corresponding record, or None on failure.
"""
def seize_str(self, key):
"""
Retrieve the value of a record and remove it atomically.
@note: Equal to the original DB::seize method except that the return value is string.
"""
def set_bulk(self, recs, atomic = True):
"""
Store records at once.
@param recs: a map object of the records to store.
@param atomic: true to perform all operations atomically, or false for non-atomic operations.
@return: the number of stored records, or -1 on failure.
"""
def remove_bulk(self, keys, atomic = True):
"""
Remove records at once.
@param keys: a sequence object of the keys of the records to remove.
@param atomic: true to perform all operations atomically, or false for non-atomic operations.
@return: the number of removed records, or -1 on failure.
"""
def get_bulk(self, keys, atomic = True):
"""
Retrieve records at once.
@param keys: a sequence object of the keys of the records to retrieve.
@param atomic: true to perform all operations atomically, or false for non-atomic operations.
@return: a map object of retrieved records, or None on failure.
"""
def get_bulk_str(self, keys, atomic = True):
"""
Retrieve records at once.
@note: Equal to the original DB::get_bulk method except that the return value is string map.
"""
def clear(self):
"""
Remove all records.
@return: true on success, or false on failure.
"""
def synchronize(self, hard = False, proc = None):
"""
Synchronize updated contents with the file and the device.
@param hard: true for physical synchronization with the device, or false for logical synchronization with the file system.
@param proc: a postprocessor object which implements the FileProcessor interface, or a function object which receives the same parameters. If it is None, no postprocessing is performed.
@return: true on success, or false on failure.
@note: The operation of the processor is performed atomically and other threads accessing the same record are blocked. To avoid deadlock, any explicit database operation must not be performed in this method.
"""
def occupy(self, writable = False, proc = None):
"""
Occupy database by locking and do something meanwhile.
@param writable: true to use writer lock, or false to use reader lock.
@param proc: a processor object which implements the FileProcessor interface, or a function object which receives the same parameters. If it is None, no processing is performed.
@return: true on success, or false on failure.
@note: The operation of the processor is performed atomically and other threads accessing the same record are blocked. To avoid deadlock, any explicit database operation must not be performed in this method.
"""
def copy(self, dest):
"""
Create a copy of the database file.
@param dest: the path of the destination file.
@return: true on success, or false on failure.
"""
def begin_transaction(self, hard = False):
"""
Begin transaction.
@param hard: true for physical synchronization with the device, or false for logical synchronization with the file system.
@return: true on success, or false on failure.
"""
def end_transaction(self, commit = True):
"""
End transaction.
@param commit: true to commit the transaction, or false to abort the transaction.
@return: true on success, or false on failure.
"""
def transaction(self, proc, hard = False):
"""
Perform entire transaction by a functor.
@param proc: the functor of operations during transaction. If the function returns true, the transaction is committed. If the function returns false or an exception is thrown, the transaction is aborted.
@param hard: true for physical synchronization with the device, or false for logical synchronization with the file system.
@return: true on success, or false on failure.
"""
def dump_snapshot(self, dest):
"""
Dump records into a snapshot file.
@param dest: the name of the destination file.
@return: true on success, or false on failure.
"""
def load_snapshot(self, src):
"""
Load records from a snapshot file.
@param src: the name of the source file.
@return: true on success, or false on failure.
"""
def count(self):
"""
Get the number of records.
@return: the number of records, or -1 on failure.
"""
def size(self):
"""
Get the size of the database file.
@return: the size of the database file in bytes, or -1 on failure.
"""
def path(self):
"""
Get the path of the database file.
@return: the path of the database file, or None on failure.
"""
def status(self):
"""
Get the miscellaneous status information.
@return: a dictionary object of the status information, or None on failure.
"""
def match_prefix(self, prefix, max = -1):
"""
Get keys matching a prefix string.
@param prefix: the prefix string.
@param max: the maximum number to retrieve. If it is negative, no limit is specified.
@return: a list object of matching keys, or None on failure.
"""
def match_regex(self, regex, max = -1):
"""
Get keys matching a regular expression string.
@param regex: the regular expression string.
@param max: the maximum number to retrieve. If it is negative, no limit is specified.
@return: a list object of matching keys, or None on failure.
"""
def match_similar(self, origin, range = 1, utf = False, max = -1):
"""
Get keys similar to a string in terms of the Levenshtein distance.
@param origin: the origin string.
@param range: the maximum distance of keys to adopt.
@param utf: flag to treat keys as UTF-8 strings.
@param max: the maximum number to retrieve. If it is negative, no limit is specified.
@return: a list object of matching keys, or None on failure.
"""
def merge(self, srcary, mode = MSET):
"""
Merge records from other databases.
@param srcary: an array of the source database objects.
@param mode: the merge mode. DB.MSET to overwrite the existing value, DB.MADD to keep the existing value, DB.MREPLACE to modify the existing record only, DB.MAPPEND to append the new value.
@return: true on success, or false on failure.
"""
def cursor(self):
"""
Create a cursor object.
@return: the return value is the created cursor object. Each cursor should be disabled with the Cursor#disable method when it is no longer in use.
"""
def cursor_process(self, proc) :
"""
Process a cursor by a functor.
@param proc: the functor of operations for the cursor. The cursor is disabled implicitly after the block.
@return: always None.
"""
def shift(self):
"""
Remove the first record.
@return: a pair of the key and the value of the removed record, or None on failure.
"""
def shift_str(self):
"""
Remove the first record.
@note: Equal to the original DB::shift method except that the return value is string.
"""
def tune_exception_rule(self, codes):
"""
Set the rule about throwing exception.
@param codes: a sequence of error codes. If a method causes an error whose code is one of the specified codes, the error is raised as an exception.
@return: true on success, or false on failure.
"""
def __repr__(self):
"""
Get the representing expression.
@return: the representing expression.
"""
def __str__(self):
"""
Get the string expression.
@return: the string expression.
"""
def __len__(self):
"""
Alias of the count method.
"""
def __getitem__(self, key):
"""
Alias of the get method.
"""
def __setitem__(self, key, value):
"""
Alias of the set method.
"""
def __iter__(self):
"""
Alias of the cursor method.
"""
def process(proc, path = "*", mode = OWRITER | OCREATE, opts = 0):
"""
Process a database by a functor. (static method)
@param proc: the functor to process the database; the database object is passed to it as the parameter.
@param path: the same as for the open method.
@param mode: the same as for the open method.
@param opts: the optional features by bitwise-or: DB.GCONCURRENT for the concurrent mode.
@return: None on success, or an error object on failure.
"""
# END OF FILE
kyotocabinet-python-1.23/kyotocabinet.cc
/*************************************************************************************************
* Python binding
* Copyright (C) 2009-2010 FAL Labs
* This file is part of Kyoto Cabinet.
* This program is free software: you can redistribute it and/or modify it under the terms of
* the GNU General Public License as published by the Free Software Foundation, either version
* 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
* without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
* See the GNU General Public License for more details.
* You should have received a copy of the GNU General Public License along with this program.
* If not, see <https://www.gnu.org/licenses/>.
*************************************************************************************************/
#include
namespace kc = kyotocabinet;
extern "C" {
#undef _POSIX_C_SOURCE
#undef _XOPEN_SOURCE
#include
#include
/* precedent type declaration */
class SoftString;
class CursorBurrow;
class SoftCursor;
class SoftVisitor;
class SoftFileProcessor;
struct Error_data;
struct Visitor_data;
struct FileProcessor_data;
struct Cursor_data;
struct DB_data;
class NativeFunction;
typedef std::map<std::string, std::string> StringMap;
typedef std::vector<std::string> StringVector;
/* function prototypes */
PyMODINIT_FUNC PyInit_kyotocabinet(void);
static bool setconstuint32(PyObject* pyobj, const char* name, uint32_t value);
static void throwruntime(const char* message);
static void throwinvarg();
static PyObject* newstring(const char* str);
static PyObject* newbytes(const char* ptr, size_t size);
static int64_t pyatoi(PyObject* pyobj);
static double pyatof(PyObject* pyobj);
static PyObject* maptopymap(const StringMap* map);
static PyObject* vectortopylist(const StringVector* vec);
static void threadyield();
static bool define_module();
static PyObject* kc_conv_bytes(PyObject* pyself, PyObject* pyargs);
static PyObject* kc_atoi(PyObject* pyself, PyObject* pyargs);
static PyObject* kc_atoix(PyObject* pyself, PyObject* pyargs);
static PyObject* kc_atof(PyObject* pyself, PyObject* pyargs);
static PyObject* kc_hash_murmur(PyObject* pyself, PyObject* pyargs);
static PyObject* kc_hash_fnv(PyObject* pyself, PyObject* pyargs);
static PyObject* kc_levdist(PyObject* pyself, PyObject* pyargs);
static bool define_err();
static bool err_define_child(const char* name, uint32_t code);
static PyObject* err_new(PyTypeObject* pytype, PyObject* pyargs, PyObject* pykwds);
static void err_dealloc(Error_data* data);
static int err_init(Error_data* data, PyObject* pyargs, PyObject* pykwds);
static PyObject* err_repr(Error_data* data);
static PyObject* err_str(Error_data* data);
static PyObject* err_richcmp(Error_data* data, PyObject* right, int op);
static PyObject* err_set(Error_data* data, PyObject* pyargs);
static PyObject* err_code(Error_data* data);
static PyObject* err_name(Error_data* data);
static PyObject* err_message(Error_data* data);
static bool define_vis();
static PyObject* vis_new(PyTypeObject* pytype, PyObject* pyargs, PyObject* pykwds);
static void vis_dealloc(Visitor_data* data);
static int vis_init(Visitor_data* data, PyObject* pyargs, PyObject* pykwds);
static PyObject* vis_visit_full(Visitor_data* data, PyObject* pyargs);
static PyObject* vis_visit_empty(Visitor_data* data, PyObject* pyargs);
static bool define_fproc();
static PyObject* fproc_new(PyTypeObject* pytype, PyObject* pyargs, PyObject* pykwds);
static void fproc_dealloc(FileProcessor_data* data);
static int fproc_init(FileProcessor_data* data, PyObject* pyargs, PyObject* pykwds);
static PyObject* fproc_process(FileProcessor_data* data, PyObject* pyargs);
static bool define_cur();
static PyObject* cur_new(PyTypeObject* pytype, PyObject* pyargs, PyObject* pykwds);
static void cur_dealloc(Cursor_data* data);
static int cur_init(Cursor_data* data, PyObject* pyargs, PyObject* pykwds);
static PyObject* cur_repr(Cursor_data* data);
static PyObject* cur_str(Cursor_data* data);
static PyObject* cur_disable(Cursor_data* data);
static PyObject* cur_accept(Cursor_data* data, PyObject* pyargs);
static PyObject* cur_set_value(Cursor_data* data, PyObject* pyargs);
static PyObject* cur_remove(Cursor_data* data);
static PyObject* cur_get_key(Cursor_data* data, PyObject* pyargs);
static PyObject* cur_get_key_str(Cursor_data* data, PyObject* pyargs);
static PyObject* cur_get_value(Cursor_data* data, PyObject* pyargs);
static PyObject* cur_get_value_str(Cursor_data* data, PyObject* pyargs);
static PyObject* cur_get(Cursor_data* data, PyObject* pyargs);
static PyObject* cur_get_str(Cursor_data* data, PyObject* pyargs);
static PyObject* cur_seize(Cursor_data* data);
static PyObject* cur_seize_str(Cursor_data* data);
static PyObject* cur_jump(Cursor_data* data, PyObject* pyargs);
static PyObject* cur_jump_back(Cursor_data* data, PyObject* pyargs);
static PyObject* cur_step(Cursor_data* data);
static PyObject* cur_step_back(Cursor_data* data);
static PyObject* cur_db(Cursor_data* data);
static PyObject* cur_error(Cursor_data* data);
static PyObject* cur_op_iter(Cursor_data* data);
static PyObject* cur_op_iternext(Cursor_data* data);
static bool define_db();
static PyObject* db_new(PyTypeObject* pytype, PyObject* pyargs, PyObject* pykwds);
static void db_dealloc(DB_data* data);
static bool db_raise(DB_data* data);
static int db_init(DB_data* data, PyObject* pyargs, PyObject* pykwds);
static PyObject* db_repr(DB_data* data);
static PyObject* db_str(DB_data* data);
static PyObject* db_error(DB_data* data);
static PyObject* db_open(DB_data* data, PyObject* pyargs);
static PyObject* db_close(DB_data* data);
static PyObject* db_accept(DB_data* data, PyObject* pyargs);
static PyObject* db_accept_bulk(DB_data* data, PyObject* pyargs);
static PyObject* db_iterate(DB_data* data, PyObject* pyargs);
static PyObject* db_set(DB_data* data, PyObject* pyargs);
static PyObject* db_add(DB_data* data, PyObject* pyargs);
static PyObject* db_replace(DB_data* data, PyObject* pyargs);
static PyObject* db_append(DB_data* data, PyObject* pyargs);
static PyObject* db_increment(DB_data* data, PyObject* pyargs);
static PyObject* db_increment_double(DB_data* data, PyObject* pyargs);
static PyObject* db_cas(DB_data* data, PyObject* pyargs);
static PyObject* db_remove(DB_data* data, PyObject* pyargs);
static PyObject* db_get(DB_data* data, PyObject* pyargs);
static PyObject* db_get_str(DB_data* data, PyObject* pyargs);
static PyObject* db_check(DB_data* data, PyObject* pyargs);
static PyObject* db_seize(DB_data* data, PyObject* pyargs);
static PyObject* db_seize_str(DB_data* data, PyObject* pyargs);
static PyObject* db_set_bulk(DB_data* data, PyObject* pyargs);
static PyObject* db_remove_bulk(DB_data* data, PyObject* pyargs);
static PyObject* db_get_bulk(DB_data* data, PyObject* pyargs);
static PyObject* db_get_bulk_str(DB_data* data, PyObject* pyargs);
static PyObject* db_clear(DB_data* data);
static PyObject* db_synchronize(DB_data* data, PyObject* pyargs);
static PyObject* db_occupy(DB_data* data, PyObject* pyargs);
static PyObject* db_copy(DB_data* data, PyObject* pyargs);
static PyObject* db_begin_transaction(DB_data* data, PyObject* pyargs);
static PyObject* db_end_transaction(DB_data* data, PyObject* pyargs);
static PyObject* db_transaction(DB_data* data, PyObject* pyargs);
static PyObject* db_dump_snapshot(DB_data* data, PyObject* pyargs);
static PyObject* db_load_snapshot(DB_data* data, PyObject* pyargs);
static PyObject* db_count(DB_data* data);
static PyObject* db_size(DB_data* data);
static PyObject* db_path(DB_data* data);
static PyObject* db_status(DB_data* data);
static PyObject* db_match_prefix(DB_data* data, PyObject* pyargs);
static PyObject* db_match_regex(DB_data* data, PyObject* pyargs);
static PyObject* db_match_similar(DB_data* data, PyObject* pyargs);
static PyObject* db_merge(DB_data* data, PyObject* pyargs);
static PyObject* db_cursor(DB_data* data);
static PyObject* db_cursor_process(DB_data* data, PyObject* pyargs);
static PyObject* db_shift(DB_data* data);
static PyObject* db_shift_str(DB_data* data);
static char* db_shift_impl(kc::PolyDB* db, size_t* ksp, const char** vbp, size_t* vsp);
static PyObject* db_tune_exception_rule(DB_data* data, PyObject* pyargs);
static Py_ssize_t db_op_len(DB_data* data);
static PyObject* db_op_getitem(DB_data* data, PyObject* pykey);
static int db_op_setitem(DB_data* data, PyObject* pykey, PyObject* pyvalue);
static PyObject* db_op_iter(DB_data* data);
static PyObject* db_process(PyObject* cls, PyObject* pyargs);
/* global variables */
PyObject* mod_kc;
PyObject* mod_th;
PyObject* mod_time;
PyObject* cls_err;
PyObject* cls_err_children[(int)kc::PolyDB::Error::MISC+1];
PyObject* cls_vis;
PyObject* obj_vis_nop;
PyObject* obj_vis_remove;
PyObject* cls_fproc;
PyObject* cls_cur;
PyObject* cls_db;
/**
* Generic options.
*/
enum GenericOption {
GEXCEPTIONAL = 1 << 0,
GCONCURRENT = 1 << 1
};
/**
* Wrapper to treat a Python string as a C++ string.
*/
class SoftString {
public:
explicit SoftString(PyObject* pyobj) :
pyobj_(pyobj), pystr_(NULL), pybytes_(NULL), ptr_(NULL), size_(0) {
Py_INCREF(pyobj_);
if (PyUnicode_Check(pyobj_)) {
pybytes_ = PyUnicode_AsUTF8String(pyobj_);
if (pybytes_) {
ptr_ = PyBytes_AS_STRING(pybytes_);
size_ = PyBytes_GET_SIZE(pybytes_);
} else {
PyErr_Clear();
ptr_ = "";
size_ = 0;
}
} else if (PyBytes_Check(pyobj_)) {
ptr_ = PyBytes_AS_STRING(pyobj_);
size_ = PyBytes_GET_SIZE(pyobj_);
} else if (PyByteArray_Check(pyobj_)) {
ptr_ = PyByteArray_AS_STRING(pyobj_);
size_ = PyByteArray_GET_SIZE(pyobj_);
} else if (pyobj_ == Py_None) {
ptr_ = "";
size_ = 0;
} else {
pystr_ = PyObject_Str(pyobj_);
if (pystr_) {
pybytes_ = PyUnicode_AsUTF8String(pystr_);
if (pybytes_) {
ptr_ = PyBytes_AS_STRING(pybytes_);
size_ = PyBytes_GET_SIZE(pybytes_);
} else {
PyErr_Clear();
ptr_ = "";
size_ = 0;
}
} else {
ptr_ = "(unknown)";
size_ = std::strlen(ptr_);
}
}
}
~SoftString() {
if (pybytes_) Py_DECREF(pybytes_);
if (pystr_) Py_DECREF(pystr_);
Py_DECREF(pyobj_);
}
const char* ptr() {
return ptr_;
}
const size_t size() {
return size_;
}
private:
PyObject* pyobj_;
PyObject* pystr_;
PyObject* pybytes_;
const char* ptr_;
size_t size_;
};
/**
* Burrow of cursors no longer in use.
*/
class CursorBurrow {
private:
typedef std::vector<kc::PolyDB::Cursor*> CursorList;
public:
explicit CursorBurrow() : dcurs_() {}
~CursorBurrow() {
sweap();
}
void sweap() {
if (dcurs_.size() > 0) {
CursorList::iterator dit = dcurs_.begin();
CursorList::iterator ditend = dcurs_.end();
while (dit != ditend) {
kc::PolyDB::Cursor* cur = *dit;
delete cur;
dit++;
}
dcurs_.clear();
}
}
void deposit(kc::PolyDB::Cursor* cur) {
dcurs_.push_back(cur);
}
private:
CursorList dcurs_;
} g_curbur;
/**
* Wrapper of a cursor.
*/
class SoftCursor {
public:
explicit SoftCursor(kc::PolyDB* db) : cur_(NULL) {
cur_ = db->cursor();
}
~SoftCursor() {
if (cur_) g_curbur.deposit(cur_);
}
kc::PolyDB::Cursor* cur() {
return cur_;
}
void disable() {
delete cur_;
cur_ = NULL;
}
private:
kc::PolyDB::Cursor* cur_;
};
/**
* Wrapper of a visitor.
*/
class SoftVisitor : public kc::PolyDB::Visitor {
public:
explicit SoftVisitor(PyObject* pyvisitor, bool writable) :
pyvisitor_(pyvisitor), writable_(writable), pyrv_(NULL), rv_(NULL),
pyextype_(NULL), pyexvalue_(NULL), pyextrace_(NULL) {
Py_INCREF(pyvisitor_);
}
~SoftVisitor() {
cleanup();
Py_DECREF(pyvisitor_);
}
bool exception(PyObject** typep, PyObject** valuep, PyObject** tracep) {
if (!pyextype_) return false;
*typep = pyextype_;
*valuep = pyexvalue_;
*tracep = pyextrace_;
return true;
}
private:
const char* visit_full(const char* kbuf, size_t ksiz,
const char* vbuf, size_t vsiz, size_t* sp) {
cleanup();
PyObject* pyrv;
if (PyCallable_Check(pyvisitor_)) {
pyrv = PyObject_CallFunction(pyvisitor_, (char*)"(y#y#)", kbuf, ksiz, vbuf, vsiz);
} else {
pyrv = PyObject_CallMethod(pyvisitor_, (char*)"visit_full",
(char*)"(y#y#)", kbuf, ksiz, vbuf, vsiz);
}
if (!pyrv) {
if (PyErr_Occurred()) PyErr_Fetch(&pyextype_, &pyexvalue_, &pyextrace_);
return NOP;
}
if (pyrv == Py_None || pyrv == obj_vis_nop) {
Py_DECREF(pyrv);
return NOP;
}
if (!writable_) {
Py_DECREF(pyrv);
throwruntime("confliction with the read-only parameter");
if (PyErr_Occurred()) PyErr_Fetch(&pyextype_, &pyexvalue_, &pyextrace_);
return NOP;
}
if (pyrv == obj_vis_remove) {
Py_DECREF(pyrv);
return REMOVE;
}
pyrv_ = pyrv;
rv_ = new SoftString(pyrv);
*sp = rv_->size();
return rv_->ptr();
}
const char* visit_empty(const char* kbuf, size_t ksiz, size_t* sp) {
cleanup();
PyObject* pyrv;
if (PyCallable_Check(pyvisitor_)) {
pyrv = PyObject_CallFunction(pyvisitor_, (char*)"(y#O)", kbuf, ksiz, Py_None);
} else {
pyrv = PyObject_CallMethod(pyvisitor_, (char*)"visit_empty",
(char*)"(y#)", kbuf, ksiz);
}
if (!pyrv) {
if (PyErr_Occurred()) PyErr_Fetch(&pyextype_, &pyexvalue_, &pyextrace_);
return NOP;
}
if (pyrv == Py_None || pyrv == obj_vis_nop) {
Py_DECREF(pyrv);
return NOP;
}
if (!writable_) {
Py_DECREF(pyrv);
throwruntime("confliction with the read-only parameter");
if (PyErr_Occurred()) PyErr_Fetch(&pyextype_, &pyexvalue_, &pyextrace_);
return NOP;
}
if (pyrv == obj_vis_remove) {
Py_DECREF(pyrv);
return REMOVE;
}
pyrv_ = pyrv;
rv_ = new SoftString(pyrv);
*sp = rv_->size();
return rv_->ptr();
}
void cleanup() {
if (pyextrace_) {
Py_DECREF(pyextrace_);
pyextrace_ = NULL;
}
if (pyexvalue_) {
Py_DECREF(pyexvalue_);
pyexvalue_ = NULL;
}
if (pyextype_) {
Py_DECREF(pyextype_);
pyextype_ = NULL;
}
delete rv_;
rv_ = NULL;
if (pyrv_) {
Py_DECREF(pyrv_);
pyrv_ = NULL;
}
}
PyObject* pyvisitor_;
bool writable_;
PyObject* pyrv_;
SoftString* rv_;
PyObject* pyextype_;
PyObject* pyexvalue_;
PyObject* pyextrace_;
};
/**
* Wrapper of a file processor.
*/
class SoftFileProcessor : public kc::PolyDB::FileProcessor {
public:
explicit SoftFileProcessor(PyObject* pyproc) :
pyproc_(pyproc), pyextype_(NULL), pyexvalue_(NULL), pyextrace_(NULL) {
Py_INCREF(pyproc_);
}
~SoftFileProcessor() {
if (pyextrace_) Py_DECREF(pyextrace_);
if (pyexvalue_) Py_DECREF(pyexvalue_);
if (pyextype_) Py_DECREF(pyextype_);
Py_DECREF(pyproc_);
}
bool exception(PyObject** typep, PyObject** valuep, PyObject** tracep) {
if (!pyextype_) return false;
*typep = pyextype_;
*valuep = pyexvalue_;
*tracep = pyextrace_;
return true;
}
private:
bool process(const std::string& path, int64_t count, int64_t size) {
PyObject* pyrv;
if (PyCallable_Check(pyproc_)) {
pyrv = PyObject_CallFunction(pyproc_, (char*)"(sLL)",
path.c_str(), (long long)count, (long long)size);
} else {
pyrv = PyObject_CallMethod(pyproc_, (char*)"process", (char*)"(sLL)",
path.c_str(), (long long)count, (long long)size);
}
if (!pyrv) {
if (PyErr_Occurred()) PyErr_Fetch(&pyextype_, &pyexvalue_, &pyextrace_);
return false;
}
bool rv = PyObject_IsTrue(pyrv);
Py_DECREF(pyrv);
return rv;
}
PyObject* pyproc_;
PyObject* pyextype_;
PyObject* pyexvalue_;
PyObject* pyextrace_;
};
/**
* Internal data of an error object.
*/
struct Error_data {
PyException_HEAD
PyObject* pycode;
PyObject* pymessage;
};
/**
* Internal data of a visitor object.
*/
struct Visitor_data {
PyObject_HEAD
};
/**
* Internal data of a file processor object.
*/
struct FileProcessor_data {
PyObject_HEAD
};
/**
* Internal data of a cursor object.
*/
struct Cursor_data {
PyObject_HEAD
SoftCursor* cur;
PyObject* pydb;
};
/**
* Internal data of a database object.
*/
struct DB_data {
PyObject_HEAD
kc::PolyDB* db;
uint32_t exbits;
PyObject* pylock;
};
/**
* Locking device of the database.
*/
class NativeFunction {
public:
NativeFunction(DB_data* data) : data_(data), thstate_(NULL) {
PyObject* pylock = data_->pylock;
if (pylock == Py_None) {
thstate_ = PyEval_SaveThread();
} else {
PyObject* pyrv = PyObject_CallMethod(pylock, (char*)"acquire", NULL);
if (pyrv) Py_DECREF(pyrv);
}
}
void cleanup() {
PyObject* pylock = data_->pylock;
if (pylock == Py_None) {
if (thstate_) PyEval_RestoreThread(thstate_);
} else {
PyObject* pyrv = PyObject_CallMethod(pylock, (char*)"release", NULL);
if (pyrv) Py_DECREF(pyrv);
}
}
private:
DB_data* data_;
PyThreadState* thstate_;
};
/**
* Entry point of the library.
*/
PyMODINIT_FUNC PyInit_kyotocabinet(void) {
if (!define_module()) return NULL;
if (!define_err()) return NULL;
if (!define_vis()) return NULL;
if (!define_fproc()) return NULL;
if (!define_cur()) return NULL;
if (!define_db()) return NULL;
return mod_kc;
}
/**
* Set a constant of unsigned integer.
*/
static bool setconstuint32(PyObject* pyobj, const char* name, uint32_t value) {
PyObject* pyname = PyUnicode_FromString(name);
PyObject* pyvalue = PyLong_FromUnsignedLong(value);
return PyObject_GenericSetAttr(pyobj, pyname, pyvalue) == 0;
}
/**
* Throw a runtime error.
*/
static void throwruntime(const char* message) {
PyErr_SetString(PyExc_RuntimeError, message);
}
/**
* Throw the invalid argument error.
*/
static void throwinvarg() {
PyErr_SetString(PyExc_TypeError, "invalid arguments");
}
/**
* Create a new string.
*/
static PyObject* newstring(const char* str) {
return PyUnicode_DecodeUTF8(str, std::strlen(str), "ignore");
}
/**
* Create a new byte array.
*/
static PyObject* newbytes(const char* ptr, size_t size) {
return PyBytes_FromStringAndSize(ptr, size);
}
/**
* Convert a numeric parameter to an integer.
*/
static int64_t pyatoi(PyObject* pyobj) {
if (PyLong_Check(pyobj)) {
return PyLong_AsLong(pyobj);
} else if (PyFloat_Check(pyobj)) {
double dnum = PyFloat_AsDouble(pyobj);
if (kc::chknan(dnum)) {
return kc::INT64MIN;
} else if (kc::chkinf(dnum)) {
return dnum < 0 ? kc::INT64MIN : kc::INT64MAX;
}
return dnum;
} else if (PyUnicode_Check(pyobj) || PyBytes_Check(pyobj)) {
SoftString numstr(pyobj);
const char* str = numstr.ptr();
double dnum = kc::atof(str);
if (kc::chknan(dnum)) {
return kc::INT64MIN;
} else if (kc::chkinf(dnum)) {
return dnum < 0 ? kc::INT64MIN : kc::INT64MAX;
}
return dnum;
} else if (pyobj != Py_None) {
int64_t inum = 0;
PyObject* pylong = PyNumber_Long(pyobj);
if (pylong) {
inum = PyLong_AsLong(pyobj);
Py_DECREF(pylong);
}
return inum;
}
return 0;
}
/**
* Convert a numeric parameter to a real number.
*/
static double pyatof(PyObject* pyobj) {
if (PyLong_Check(pyobj)) {
return PyLong_AsLong(pyobj);
} else if (PyFloat_Check(pyobj)) {
return PyFloat_AsDouble(pyobj);
} else if (PyUnicode_Check(pyobj) || PyBytes_Check(pyobj)) {
SoftString numstr(pyobj);
const char* str = numstr.ptr();
return kc::atof(str);
} else if (pyobj != Py_None) {
double dnum = 0;
PyObject* pyfloat = PyNumber_Float(pyobj);
if (pyfloat) {
dnum = PyFloat_AsDouble(pyfloat);
Py_DECREF(pyfloat);
}
return dnum;
}
return 0;
}
/**
* Convert an internal map to a Python map.
*/
static PyObject* maptopymap(const StringMap* map) {
PyObject* pymap = PyDict_New();
StringMap::const_iterator it = map->begin();
StringMap::const_iterator itend = map->end();
while (it != itend) {
PyObject* pyvalue = newstring(it->second.c_str());
PyDict_SetItemString(pymap, it->first.c_str(), pyvalue);
Py_DECREF(pyvalue);
it++;
}
return pymap;
}
/**
* Convert an internal vector to a Python list.
*/
static PyObject* vectortopylist(const StringVector* vec) {
size_t num = vec->size();
PyObject* pylist = PyList_New(num);
for (size_t i = 0; i < num; i++) {
PyObject* pystr = newstring((*vec)[i].c_str());
PyList_SET_ITEM(pylist, i, pystr);
}
return pylist;
}
/**
* Pass the current execution state.
*/
static void threadyield() {
PyObject* pyrv = PyObject_CallMethod(mod_time, (char*)"sleep", (char*)"(I)", 0);
if (pyrv) Py_DECREF(pyrv);
}
/**
* Define objects of the module.
*/
static bool define_module() {
static PyModuleDef module_def = { PyModuleDef_HEAD_INIT };
size_t zoff = offsetof(PyModuleDef, m_name);
std::memset((char*)&module_def + zoff, 0, sizeof(module_def) - zoff);
module_def.m_name = "kyotocabinet";
module_def.m_doc = "a straightforward implementation of DBM";
module_def.m_size = -1;
static PyMethodDef method_table[] = {
{ "conv_bytes", (PyCFunction)kc_conv_bytes, METH_VARARGS,
"Convert any object to a byte array." },
{ "atoi", (PyCFunction)kc_atoi, METH_VARARGS,
"Convert a string to an integer." },
{ "atoix", (PyCFunction)kc_atoix, METH_VARARGS,
"Convert a string with a metric prefix to an integer." },
{ "atof", (PyCFunction)kc_atof, METH_VARARGS,
"Convert a string to a real number." },
{ "hash_murmur", (PyCFunction)kc_hash_murmur, METH_VARARGS,
"Get the hash value of a string by MurMur hashing." },
{ "hash_fnv", (PyCFunction)kc_hash_fnv, METH_VARARGS,
"Get the hash value of a string by FNV hashing." },
{ "levdist", (PyCFunction)kc_levdist, METH_VARARGS,
"Calculate the levenshtein distance of two strings." },
{ NULL, NULL, 0, NULL }
};
module_def.m_methods = method_table;
mod_kc = PyModule_Create(&module_def);
if (PyModule_AddStringConstant(mod_kc, "VERSION", kc::VERSION) != 0) return false;
mod_th = PyImport_ImportModule("threading");
mod_time = PyImport_ImportModule("time");
if (!mod_th) return false;
return true;
}
/**
* Implementation of conv_bytes.
*/
static PyObject* kc_conv_bytes(PyObject* pyself, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return NULL;
}
PyObject* pyobj = PyTuple_GetItem(pyargs, 0);
SoftString str(pyobj);
return PyBytes_FromStringAndSize(str.ptr(), str.size());
}
/**
* Implementation of atoi.
*/
static PyObject* kc_atoi(PyObject* pyself, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return NULL;
}
PyObject* pystr = PyTuple_GetItem(pyargs, 0);
SoftString str(pystr);
return PyLong_FromLongLong(kc::atoi(str.ptr()));
}
/**
* Implementation of atoix.
*/
static PyObject* kc_atoix(PyObject* pyself, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return NULL;
}
PyObject* pystr = PyTuple_GetItem(pyargs, 0);
SoftString str(pystr);
return PyLong_FromLongLong(kc::atoix(str.ptr()));
}
/**
* Implementation of atof.
*/
static PyObject* kc_atof(PyObject* pyself, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return NULL;
}
PyObject* pystr = PyTuple_GetItem(pyargs, 0);
SoftString str(pystr);
return PyFloat_FromDouble(kc::atof(str.ptr()));
}
/**
* Implementation of hash_murmur.
*/
static PyObject* kc_hash_murmur(PyObject* pyself, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return NULL;
}
PyObject* pystr = PyTuple_GetItem(pyargs, 0);
SoftString str(pystr);
return PyLong_FromUnsignedLongLong(kc::hashmurmur(str.ptr(), str.size()));
}
/**
* Implementation of hash_fnv.
*/
static PyObject* kc_hash_fnv(PyObject* pyself, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return NULL;
}
PyObject* pystr = PyTuple_GetItem(pyargs, 0);
SoftString str(pystr);
return PyLong_FromUnsignedLongLong(kc::hashfnv(str.ptr(), str.size()));
}
/**
* Implementation of levdist.
*/
static PyObject* kc_levdist(PyObject* pyself, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc < 2) {
throwinvarg();
return NULL;
}
PyObject* pya = PyTuple_GetItem(pyargs, 0);
PyObject* pyb = PyTuple_GetItem(pyargs, 1);
PyObject* pyutf = Py_None;
if (argc > 2) pyutf = PyTuple_GetItem(pyargs, 2);
SoftString astr(pya);
const char* abuf = astr.ptr();
size_t asiz = astr.size();
SoftString bstr(pyb);
const char* bbuf = bstr.ptr();
size_t bsiz = bstr.size();
bool utf = PyObject_IsTrue(pyutf);
size_t dist;
if (utf) {
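// In UTF mode both strings are decoded into UCS code point arrays, kept on the stack unless they are too long.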
uint32_t astack[128];
uint32_t* aary = asiz > sizeof(astack) / sizeof(*astack) ? new uint32_t[asiz] : astack;
size_t anum;
kc::strutftoucs(abuf, asiz, aary, &anum);
uint32_t bstack[128];
uint32_t* bary = bsiz > sizeof(bstack) / sizeof(*bstack) ? new uint32_t[bsiz] : bstack;
size_t bnum;
kc::strutftoucs(bbuf, bsiz, bary, &bnum);
dist = kc::strucsdist(aary, anum, bary, bnum);
if (bary != bstack) delete[] bary;
if (aary != astack) delete[] aary;
} else {
dist = kc::memdist(abuf, asiz, bbuf, bsiz);
}
return PyLong_FromUnsignedLongLong(dist);
}
/**
* Define objects of the Error class.
*/
static bool define_err() {
static PyTypeObject type_err = { PyVarObject_HEAD_INIT(NULL, 0) };
size_t zoff = offsetof(PyTypeObject, tp_name);
std::memset((char*)&type_err + zoff, 0, sizeof(type_err) - zoff);
type_err.tp_name = "kyotocabinet.Error";
type_err.tp_basicsize = sizeof(Error_data);
type_err.tp_itemsize = 0;
type_err.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE;
type_err.tp_doc = "Error data.";
type_err.tp_new = err_new;
type_err.tp_dealloc = (destructor)err_dealloc;
type_err.tp_init = (initproc)err_init;
type_err.tp_repr = (unaryfunc)err_repr;
type_err.tp_str = (unaryfunc)err_str;
type_err.tp_richcompare = (richcmpfunc)err_richcmp;
static PyMethodDef err_methods[] = {
{ "set", (PyCFunction)err_set, METH_VARARGS,
"Set the error information." },
{ "code", (PyCFunction)err_code, METH_NOARGS,
"Get the error code." },
{ "name", (PyCFunction)err_name, METH_NOARGS,
"Get the readable string of the code." },
{ "message", (PyCFunction)err_message, METH_NOARGS,
"Get the supplement message." },
{ NULL, NULL, 0, NULL }
};
type_err.tp_methods = err_methods;
type_err.tp_base = (PyTypeObject*)PyExc_RuntimeError;
if (PyType_Ready(&type_err) != 0) return false;
cls_err = (PyObject*)&type_err;
for (size_t i = 0; i < sizeof(cls_err_children) / sizeof(*cls_err_children); i++) {
cls_err_children[i] = NULL;
}
if (!err_define_child("SUCCESS", kc::PolyDB::Error::SUCCESS)) return false;
if (!err_define_child("NOIMPL", kc::PolyDB::Error::NOIMPL)) return false;
if (!err_define_child("INVALID", kc::PolyDB::Error::INVALID)) return false;
if (!err_define_child("NOREPOS", kc::PolyDB::Error::NOREPOS)) return false;
if (!err_define_child("NOPERM", kc::PolyDB::Error::NOPERM)) return false;
if (!err_define_child("BROKEN", kc::PolyDB::Error::BROKEN)) return false;
if (!err_define_child("DUPREC", kc::PolyDB::Error::DUPREC)) return false;
if (!err_define_child("NOREC", kc::PolyDB::Error::NOREC)) return false;
if (!err_define_child("LOGIC", kc::PolyDB::Error::LOGIC)) return false;
if (!err_define_child("SYSTEM", kc::PolyDB::Error::SYSTEM)) return false;
if (!err_define_child("MISC", kc::PolyDB::Error::MISC)) return false;
Py_INCREF(cls_err);
if (PyModule_AddObject(mod_kc, "Error", cls_err) != 0) return false;
return true;
}
/**
* Define the constant and the subclass of an error code.
*/
static bool err_define_child(const char* name, uint32_t code) {
if (!setconstuint32(cls_err, name, code)) return false;
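// Besides the integer constant (e.g. Error.BROKEN), register an exception subclass under the "X"-prefixed name (e.g. Error.XBROKEN).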
char xname[kc::NUMBUFSIZ];
std::sprintf(xname, "X%s", name);
char fname[kc::NUMBUFSIZ*2];
std::sprintf(fname, "kyotocabinet.Error.%s", xname);
PyObject* pyxname = PyUnicode_FromString(xname);
PyObject* pyvalue = PyErr_NewException(fname, cls_err, NULL);
cls_err_children[code] = pyvalue;
return PyObject_GenericSetAttr(cls_err, pyxname, pyvalue) == 0;
}
/**
* Implementation of new.
*/
static PyObject* err_new(PyTypeObject* pytype, PyObject* pyargs, PyObject* pykwds) {
Error_data* data = (Error_data*)pytype->tp_alloc(pytype, 0);
if (!data) return NULL;
data->pycode = PyLong_FromUnsignedLong(kc::PolyDB::Error::SUCCESS);
data->pymessage = PyUnicode_FromString("error");
return (PyObject*)data;
}
/**
* Implementation of dealloc.
*/
static void err_dealloc(Error_data* data) {
Py_DECREF(data->pymessage);
Py_DECREF(data->pycode);
Py_CLEAR(data->dict);
Py_CLEAR(data->args);
Py_CLEAR(data->traceback);
Py_CLEAR(data->cause);
Py_CLEAR(data->context);
Py_TYPE(data)->tp_free((PyObject*)data);
}
/**
* Implementation of init.
*/
static int err_init(Error_data* data, PyObject* pyargs, PyObject* pykwds) {
int32_t argc = PyTuple_Size(pyargs);
if (argc > 2) {
throwinvarg();
return -1;
}
if (argc > 1) {
PyObject* pycode = PyTuple_GetItem(pyargs, 0);
PyObject* pymessage = PyTuple_GetItem(pyargs, 1);
if (PyLong_Check(pycode) && PyUnicode_Check(pymessage)) {
Py_DECREF(data->pycode);
Py_DECREF(data->pymessage);
Py_INCREF(pycode);
data->pycode = pycode;
Py_INCREF(pymessage);
data->pymessage = pymessage;
}
} else if (argc > 0) {
PyObject* pyexpr = PyTuple_GetItem(pyargs, 0);
if (PyUnicode_Check(pyexpr)) {
pyexpr = PyUnicode_AsUTF8String(pyexpr);
const char* expr = PyBytes_AS_STRING(pyexpr);
uint32_t code = kc::atoi(expr);
const char* rp = std::strchr(expr, ':');
if (rp) expr = rp + 1;
while (*expr == ' ') {
expr++;
}
Py_DECREF(data->pycode);
Py_DECREF(data->pymessage);
data->pycode = PyLong_FromLongLong(code);
data->pymessage = PyUnicode_FromString(expr);
Py_DECREF(pyexpr);
}
}
return 0;
}
/**
* Implementation of repr.
*/
static PyObject* err_repr(Error_data* data) {
uint32_t code = (uint32_t)PyLong_AsLong(data->pycode);
const char* name = kc::PolyDB::Error::codename((kc::PolyDB::Error::Code)code);
return PyUnicode_FromFormat("", name, data->pymessage);
}
/**
* Implementation of str.
*/
static PyObject* err_str(Error_data* data) {
uint32_t code = (uint32_t)PyLong_AsLong(data->pycode);
const char* name = kc::PolyDB::Error::codename((kc::PolyDB::Error::Code)code);
return PyUnicode_FromFormat("%s: %U", name, data->pymessage);
}
/**
* Implementation of richcmp.
*/
static PyObject* err_richcmp(Error_data* data, PyObject* pyright, int op) {
bool rv;
uint32_t code = (uint32_t)PyLong_AsLong(data->pycode);
uint32_t rcode;
if (PyObject_IsInstance(pyright, cls_err)) {
Error_data* rdata = (Error_data*)pyright;
rcode = (uint32_t)PyLong_AsLong(rdata->pycode);
} else if (PyLong_Check(pyright)) {
rcode = (uint32_t)PyLong_AsLong(pyright);
} else {
rcode = kc::INT32MAX;
}
switch (op) {
case Py_LT: rv = code < rcode; break;
case Py_LE: rv = code <= rcode; break;
case Py_EQ: rv = code == rcode; break;
case Py_NE: rv = code != rcode; break;
case Py_GT: rv = code > rcode; break;
case Py_GE: rv = code >= rcode; break;
default: rv = false; break;
}
if (rv) Py_RETURN_TRUE;
Py_RETURN_FALSE;
}
/**
* Implementation of set.
*/
static PyObject* err_set(Error_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 2) {
throwinvarg();
return NULL;
}
PyObject* pycode = PyTuple_GetItem(pyargs, 0);
PyObject* pymessage = PyTuple_GetItem(pyargs, 1);
if (!PyLong_Check(pycode) && !PyUnicode_Check(pymessage)) {
throwinvarg();
return NULL;
}
Py_DECREF(data->pycode);
Py_DECREF(data->pymessage);
Py_INCREF(pycode);
data->pycode = pycode;
Py_INCREF(pymessage);
data->pymessage = pymessage;
Py_RETURN_NONE;
}
/**
* Implementation of code.
*/
static PyObject* err_code(Error_data* data) {
Py_INCREF(data->pycode);
return data->pycode;
}
/**
* Implementation of name.
*/
static PyObject* err_name(Error_data* data) {
uint32_t code = PyLong_AsLong(data->pycode);
const char* name = kc::PolyDB::Error::codename((kc::PolyDB::Error::Code)code);
return newstring(name);
}
/**
* Implementation of message.
*/
static PyObject* err_message(Error_data* data) {
Py_INCREF(data->pymessage);
return data->pymessage;
}
/**
* Define objects of the Visitor class.
*/
static bool define_vis() {
static PyTypeObject type_vis = { PyVarObject_HEAD_INIT(NULL, 0) };
size_t zoff = offsetof(PyTypeObject, tp_name);
std::memset((char*)&type_vis + zoff, 0, sizeof(type_vis) - zoff);
type_vis.tp_name = "kyotocabinet.Visitor";
type_vis.tp_basicsize = sizeof(Visitor_data);
type_vis.tp_itemsize = 0;
type_vis.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE;
type_vis.tp_doc = "Interface to access a record.";
type_vis.tp_new = vis_new;
type_vis.tp_dealloc = (destructor)vis_dealloc;
type_vis.tp_init = (initproc)vis_init;
static PyMethodDef vis_methods[] = {
{ "visit_full", (PyCFunction)vis_visit_full, METH_VARARGS,
"Visit a record.", },
{ "visit_empty", (PyCFunction)vis_visit_empty, METH_VARARGS,
"Visit a empty record space." },
{ NULL, NULL, 0, NULL }
};
type_vis.tp_methods = vis_methods;
if (PyType_Ready(&type_vis) != 0) return false;
cls_vis = (PyObject*)&type_vis;
PyObject* pyname = PyUnicode_FromString("NOP");
obj_vis_nop = PyUnicode_FromString("[NOP]");
if (PyObject_GenericSetAttr(cls_vis, pyname, obj_vis_nop) != 0) return false;
pyname = PyUnicode_FromString("REMOVE");
obj_vis_remove = PyUnicode_FromString("[REMOVE]");
if (PyObject_GenericSetAttr(cls_vis, pyname, obj_vis_remove) != 0) return false;
Py_INCREF(cls_vis);
if (PyModule_AddObject(mod_kc, "Visitor", cls_vis) != 0) return false;
return true;
}
/**
* Implementation of new.
*/
static PyObject* vis_new(PyTypeObject* pytype, PyObject* pyargs, PyObject* pykwds) {
Visitor_data* data = (Visitor_data*)pytype->tp_alloc(pytype, 0);
if (!data) return NULL;
return (PyObject*)data;
}
/**
* Implementation of dealloc.
*/
static void vis_dealloc(Visitor_data* data) {
Py_TYPE(data)->tp_free((PyObject*)data);
}
/**
* Implementation of init.
*/
static int vis_init(Visitor_data* data, PyObject* pyargs, PyObject* pykwds) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 0) {
throwinvarg();
return -1;
}
return 0;
}
/**
* Implementation of visit_full.
*/
static PyObject* vis_visit_full(Visitor_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 2) {
throwinvarg();
return NULL;
}
Py_INCREF(obj_vis_nop);
return obj_vis_nop;
}
/**
* Implementation of visit_empty.
*/
static PyObject* vis_visit_empty(Visitor_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return NULL;
}
Py_INCREF(obj_vis_nop);
return obj_vis_nop;
}
/**
* Define objects of the FileProcessor class.
*/
static bool define_fproc() {
static PyTypeObject type_fproc = { PyVarObject_HEAD_INIT(NULL, 0) };
size_t zoff = offsetof(PyTypeObject, tp_name);
std::memset((char*)&type_fproc + zoff, 0, sizeof(type_fproc) - zoff);
type_fproc.tp_name = "kyotocabinet.FileProcessor";
type_fproc.tp_basicsize = sizeof(FileProcessor_data);
type_fproc.tp_itemsize = 0;
type_fproc.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE;
type_fproc.tp_doc = "Interface to process the database file.";
type_fproc.tp_new = fproc_new;
type_fproc.tp_dealloc = (destructor)fproc_dealloc;
type_fproc.tp_init = (initproc)fproc_init;
static PyMethodDef fproc_methods[] = {
{ "process", (PyCFunction)fproc_process, METH_VARARGS,
"Process the database file.", },
{ NULL, NULL, 0, NULL }
};
type_fproc.tp_methods = fproc_methods;
if (PyType_Ready(&type_fproc) != 0) return false;
cls_fproc = (PyObject*)&type_fproc;
Py_INCREF(cls_fproc);
if (PyModule_AddObject(mod_kc, "FileProcessor", cls_fproc) != 0) return false;
return true;
}
/**
* Implementation of new.
*/
static PyObject* fproc_new(PyTypeObject* pytype, PyObject* pyargs, PyObject* pykwds) {
FileProcessor_data* data = (FileProcessor_data*)pytype->tp_alloc(pytype, 0);
if (!data) return NULL;
return (PyObject*)data;
}
/**
* Implementation of dealloc.
*/
static void fproc_dealloc(FileProcessor_data* data) {
Py_TYPE(data)->tp_free((PyObject*)data);
}
/**
* Implementation of init.
*/
static int fproc_init(FileProcessor_data* data, PyObject* pyargs, PyObject* pykwds) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 0) {
throwinvarg();
return -1;
}
return 0;
}
/**
* Implementation of process.
*/
static PyObject* fproc_process(FileProcessor_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 3) {
throwinvarg();
return NULL;
}
Py_RETURN_TRUE;
}
/**
* Define objects of the Cursor class.
*/
static bool define_cur() {
static PyTypeObject type_cur = { PyVarObject_HEAD_INIT(NULL, 0) };
size_t zoff = offsetof(PyTypeObject, tp_name);
std::memset((char*)&type_cur + zoff, 0, sizeof(type_cur) - zoff);
type_cur.tp_name = "kyotocabinet.Cursor";
type_cur.tp_basicsize = sizeof(Cursor_data);
type_cur.tp_itemsize = 0;
type_cur.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE;
type_cur.tp_doc = "Interface of cursor to indicate a record.";
type_cur.tp_new = cur_new;
type_cur.tp_dealloc = (destructor)cur_dealloc;
type_cur.tp_init = (initproc)cur_init;
type_cur.tp_repr = (unaryfunc)cur_repr;
type_cur.tp_str = (unaryfunc)cur_str;
static PyMethodDef cur_methods[] = {
{ "disable", (PyCFunction)cur_disable, METH_NOARGS,
"Disable the cursor." },
{ "accept", (PyCFunction)cur_accept, METH_VARARGS,
"Accept a visitor to the current record." },
{ "set_value", (PyCFunction)cur_set_value, METH_VARARGS,
"Set the value of the current record." },
{ "remove", (PyCFunction)cur_remove, METH_NOARGS,
"Remove the current record." },
{ "get_key", (PyCFunction)cur_get_key, METH_VARARGS,
"Get the key of the current record." },
{ "get_key_str", (PyCFunction)cur_get_key_str, METH_VARARGS,
"Get the key of the current record." },
{ "get_value", (PyCFunction)cur_get_value, METH_VARARGS,
"Get the value of the current record." },
{ "get_value_str", (PyCFunction)cur_get_value_str, METH_VARARGS,
"Get the value of the current record." },
{ "get", (PyCFunction)cur_get, METH_VARARGS,
"Get a pair of the key and the value of the current record." },
{ "get_str", (PyCFunction)cur_get_str, METH_VARARGS,
"Get a pair of the key and the value of the current record." },
{ "seize", (PyCFunction)cur_seize, METH_NOARGS,
"Get a pair of the key and the value of the current record and remove it atomically." },
{ "seize_str", (PyCFunction)cur_seize_str, METH_NOARGS,
"Get a pair of the key and the value of the current record and remove it atomically." },
{ "jump", (PyCFunction)cur_jump, METH_VARARGS,
"Jump the cursor to a record for forward scan." },
{ "jump_back", (PyCFunction)cur_jump_back, METH_VARARGS,
"Jump the cursor to a record for backward scan." },
{ "step", (PyCFunction)cur_step, METH_NOARGS,
"Step the cursor to the next record." },
{ "step_back", (PyCFunction)cur_step_back, METH_NOARGS,
"Step the cursor to the previous record." },
{ "db", (PyCFunction)cur_db, METH_NOARGS,
"Get the database object." },
{ "error", (PyCFunction)cur_error, METH_NOARGS,
"Get the last happened error." },
{ NULL, NULL, 0, NULL }
};
type_cur.tp_methods = cur_methods;
type_cur.tp_iter = (getiterfunc)cur_op_iter;
type_cur.tp_iternext = (iternextfunc)cur_op_iternext;
if (PyType_Ready(&type_cur) != 0) return false;
cls_cur = (PyObject*)&type_cur;
Py_INCREF(cls_cur);
if (PyModule_AddObject(mod_kc, "Cursor", cls_cur) != 0) return false;
return true;
}
/**
* Implementation of new.
*/
static PyObject* cur_new(PyTypeObject* pytype, PyObject* pyargs, PyObject* pykwds) {
Cursor_data* data = (Cursor_data*)pytype->tp_alloc(pytype, 0);
if (!data) return NULL;
Py_INCREF(Py_None);
data->cur = NULL;
data->pydb = Py_None;
return (PyObject*)data;
}
/**
* Implementation of dealloc.
*/
static void cur_dealloc(Cursor_data* data) {
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
Py_DECREF(pydb);
delete cur;
Py_TYPE(data)->tp_free((PyObject*)data);
}
/**
* Implementation of init.
*/
static int cur_init(Cursor_data* data, PyObject* pyargs, PyObject* pykwds) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return -1;
}
PyObject* pydb = PyTuple_GetItem(pyargs, 0);
if (!PyObject_IsInstance(pydb, cls_db)) {
throwinvarg();
return -1;
}
DB_data* dbdata = (DB_data*)pydb;
kc::PolyDB* db = dbdata->db;
NativeFunction nf((DB_data*)pydb);
g_curbur.sweap();
data->cur = new SoftCursor(db);
nf.cleanup();
Py_INCREF(pydb);
data->pydb = pydb;
return 0;
}
/**
* Implementation of repr.
*/
static PyObject* cur_repr(Cursor_data* data) {
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) return newstring("<kyotocabinet.Cursor: (disabled)>");
NativeFunction nf((DB_data*)pydb);
kc::PolyDB* db = icur->db();
std::string path = db->path();
if (path.size() < 1) path = "(None)";
std::string str;
kc::strprintf(&str, "get_key(&ksiz);
if (kbuf) {
str.append(kbuf, ksiz);
delete[] kbuf;
} else {
str.append("(None)");
}
str.append(">");
nf.cleanup();
return PyUnicode_FromString(str.c_str());
}
/**
* Implementation of str.
*/
static PyObject* cur_str(Cursor_data* data) {
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) return newstring("(disabled)");
NativeFunction nf((DB_data*)pydb);
kc::PolyDB* db = icur->db();
std::string path = db->path();
if (path.size() < 1) path = "(None)";
std::string str;
kc::strprintf(&str, "%s: ", path.c_str());
size_t ksiz;
char* kbuf = icur->get_key(&ksiz);
if (kbuf) {
str.append(kbuf, ksiz);
delete[] kbuf;
} else {
str.append("(None)");
}
nf.cleanup();
return PyUnicode_FromString(str.c_str());
}
/**
* Implementation of disable.
*/
static PyObject* cur_disable(Cursor_data* data) {
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) Py_RETURN_NONE;
NativeFunction nf((DB_data*)pydb);
cur->disable();
nf.cleanup();
Py_RETURN_NONE;
}
/**
* Implementation of accept.
*/
static PyObject* cur_accept(Cursor_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc < 1) {
throwinvarg();
return NULL;
}
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) Py_RETURN_FALSE;
if (((DB_data*)pydb)->pylock == Py_None) {
icur->db()->set_error(kc::PolyDB::Error::INVALID, "unsupported method");
if (db_raise((DB_data*)pydb)) return NULL;
Py_RETURN_NONE;
}
PyObject* pyvisitor = PyTuple_GetItem(pyargs, 0);
PyObject* pywritable = Py_None;
if (argc > 1) pywritable = PyTuple_GetItem(pyargs, 1);
PyObject* pystep = Py_None;
if (argc > 2) pystep = PyTuple_GetItem(pyargs, 2);
bool writable = pywritable == Py_None || PyObject_IsTrue(pywritable);
bool step = PyObject_IsTrue(pystep);
bool rv;
if (PyObject_IsInstance(pyvisitor, cls_vis) || PyCallable_Check(pyvisitor)) {
SoftVisitor visitor(pyvisitor, writable);
NativeFunction nf((DB_data*)pydb);
rv = icur->accept(&visitor, writable, step);
nf.cleanup();
PyObject* pyextype, *pyexvalue, *pyextrace;
if (visitor.exception(&pyextype, &pyexvalue, &pyextrace)) {
PyErr_SetObject(pyextype, pyexvalue);
return NULL;
}
} else {
throwinvarg();
return NULL;
}
if (rv) Py_RETURN_TRUE;
if (db_raise((DB_data*)pydb)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of set_value.
*/
static PyObject* cur_set_value(Cursor_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc < 1 || argc > 2) {
throwinvarg();
return NULL;
}
PyObject* pyvalue = PyTuple_GetItem(pyargs, 0);
PyObject* pystep = Py_None;
if (argc > 1) pystep = PyTuple_GetItem(pyargs, 1);
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) Py_RETURN_FALSE;
SoftString value(pyvalue);
bool step = PyObject_IsTrue(pystep);
NativeFunction nf((DB_data*)pydb);
bool rv = icur->set_value(value.ptr(), value.size(), step);
nf.cleanup();
if (rv) Py_RETURN_TRUE;
if (db_raise((DB_data*)pydb)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of remove.
*/
static PyObject* cur_remove(Cursor_data* data) {
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) Py_RETURN_FALSE;
NativeFunction nf((DB_data*)pydb);
bool rv = icur->remove();
nf.cleanup();
if (rv) Py_RETURN_TRUE;
if (db_raise((DB_data*)pydb)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of get_key.
*/
static PyObject* cur_get_key(Cursor_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc > 1) {
throwinvarg();
return NULL;
}
PyObject* pystep = Py_None;
if (argc > 0) pystep = PyTuple_GetItem(pyargs, 0);
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) Py_RETURN_NONE;
bool step = PyObject_IsTrue(pystep);
NativeFunction nf((DB_data*)pydb);
size_t ksiz;
char* kbuf = icur->get_key(&ksiz, step);
nf.cleanup();
PyObject* pyrv;
if (kbuf) {
pyrv = newbytes(kbuf, ksiz);
delete[] kbuf;
} else {
if (db_raise((DB_data*)pydb)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
}
return pyrv;
}
/**
* Implementation of get_key_str.
*/
static PyObject* cur_get_key_str(Cursor_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc > 1) {
throwinvarg();
return NULL;
}
PyObject* pystep = Py_None;
if (argc > 0) pystep = PyTuple_GetItem(pyargs, 0);
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) Py_RETURN_NONE;
bool step = PyObject_IsTrue(pystep);
NativeFunction nf((DB_data*)pydb);
size_t ksiz;
char* kbuf = icur->get_key(&ksiz, step);
nf.cleanup();
PyObject* pyrv;
if (kbuf) {
pyrv = newstring(kbuf);
delete[] kbuf;
} else {
if (db_raise((DB_data*)pydb)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
}
return pyrv;
}
/**
* Implementation of get_value.
*/
static PyObject* cur_get_value(Cursor_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc > 1) {
throwinvarg();
return NULL;
}
PyObject* pystep = Py_None;
if (argc > 0) pystep = PyTuple_GetItem(pyargs, 0);
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) Py_RETURN_NONE;
bool step = PyObject_IsTrue(pystep);
NativeFunction nf((DB_data*)pydb);
size_t vsiz;
char* vbuf = icur->get_value(&vsiz, step);
nf.cleanup();
PyObject* pyrv;
if (vbuf) {
pyrv = newbytes(vbuf, vsiz);
delete[] vbuf;
} else {
if (db_raise((DB_data*)pydb)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
}
return pyrv;
}
/**
* Implementation of get_value_str.
*/
static PyObject* cur_get_value_str(Cursor_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc > 1) {
throwinvarg();
return NULL;
}
PyObject* pystep = Py_None;
if (argc > 0) pystep = PyTuple_GetItem(pyargs, 0);
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) Py_RETURN_NONE;
bool step = PyObject_IsTrue(pystep);
NativeFunction nf((DB_data*)pydb);
size_t vsiz;
char* vbuf = icur->get_value(&vsiz, step);
nf.cleanup();
PyObject* pyrv;
if (vbuf) {
pyrv = newstring(vbuf);
delete[] vbuf;
} else {
if (db_raise((DB_data*)pydb)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
}
return pyrv;
}
/**
* Implementation of get.
*/
static PyObject* cur_get(Cursor_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc > 1) {
throwinvarg();
return NULL;
}
PyObject* pystep = Py_None;
if (argc > 0) pystep = PyTuple_GetItem(pyargs, 0);
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) Py_RETURN_NONE;
bool step = PyObject_IsTrue(pystep);
NativeFunction nf((DB_data*)pydb);
const char* vbuf;
size_t ksiz, vsiz;
char* kbuf = icur->get(&ksiz, &vbuf, &vsiz, step);
nf.cleanup();
PyObject* pyrv;
if (kbuf) {
pyrv = PyTuple_New(2);
PyObject* pykey = newbytes(kbuf, ksiz);
PyObject* pyvalue = newbytes(vbuf, vsiz);
PyTuple_SetItem(pyrv, 0, pykey);
PyTuple_SetItem(pyrv, 1, pyvalue);
delete[] kbuf;
} else {
if (db_raise((DB_data*)pydb)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
}
return pyrv;
}
/**
* Implementation of get_str.
*/
static PyObject* cur_get_str(Cursor_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc > 1) {
throwinvarg();
return NULL;
}
PyObject* pystep = Py_None;
if (argc > 0) pystep = PyTuple_GetItem(pyargs, 0);
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) Py_RETURN_NONE;
bool step = PyObject_IsTrue(pystep);
NativeFunction nf((DB_data*)pydb);
const char* vbuf;
size_t ksiz, vsiz;
char* kbuf = icur->get(&ksiz, &vbuf, &vsiz, step);
nf.cleanup();
PyObject* pyrv;
if (kbuf) {
pyrv = PyTuple_New(2);
PyObject* pykey = newstring(kbuf);
PyObject* pyvalue = newstring(vbuf);
PyTuple_SetItem(pyrv, 0, pykey);
PyTuple_SetItem(pyrv, 1, pyvalue);
delete[] kbuf;
} else {
if (db_raise((DB_data*)pydb)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
}
return pyrv;
}
/**
* Implementation of seize.
*/
static PyObject* cur_seize(Cursor_data* data) {
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) Py_RETURN_NONE;
NativeFunction nf((DB_data*)pydb);
const char* vbuf;
size_t ksiz, vsiz;
char* kbuf = icur->seize(&ksiz, &vbuf, &vsiz);
nf.cleanup();
PyObject* pyrv;
if (kbuf) {
pyrv = PyTuple_New(2);
PyObject* pykey = newbytes(kbuf, ksiz);
PyObject* pyvalue = newbytes(vbuf, vsiz);
PyTuple_SetItem(pyrv, 0, pykey);
PyTuple_SetItem(pyrv, 1, pyvalue);
delete[] kbuf;
} else {
if (db_raise((DB_data*)pydb)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
}
return pyrv;
}
/**
* Implementation of seize_str.
*/
static PyObject* cur_seize_str(Cursor_data* data) {
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) Py_RETURN_NONE;
NativeFunction nf((DB_data*)pydb);
const char* vbuf;
size_t ksiz, vsiz;
char* kbuf = icur->seize(&ksiz, &vbuf, &vsiz);
nf.cleanup();
PyObject* pyrv;
if (kbuf) {
pyrv = PyTuple_New(2);
PyObject* pykey = newstring(kbuf);
PyObject* pyvalue = newstring(vbuf);
PyTuple_SetItem(pyrv, 0, pykey);
PyTuple_SetItem(pyrv, 1, pyvalue);
delete[] kbuf;
} else {
if (db_raise((DB_data*)pydb)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
}
return pyrv;
}
/**
* Implementation of jump.
*/
static PyObject* cur_jump(Cursor_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc > 1) {
throwinvarg();
return NULL;
}
PyObject* pykey = Py_None;
if (argc > 0) pykey = PyTuple_GetItem(pyargs, 0);
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) Py_RETURN_FALSE;
bool rv;
if (pykey == Py_None) {
NativeFunction nf((DB_data*)pydb);
rv = icur->jump();
nf.cleanup();
} else {
SoftString key(pykey);
NativeFunction nf((DB_data*)pydb);
rv = icur->jump(key.ptr(), key.size());
nf.cleanup();
}
if (rv) Py_RETURN_TRUE;
if (db_raise((DB_data*)pydb)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of jump_back.
*/
static PyObject* cur_jump_back(Cursor_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc > 1) {
throwinvarg();
return NULL;
}
PyObject* pykey = Py_None;
if (argc > 0) pykey = PyTuple_GetItem(pyargs, 0);
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) Py_RETURN_FALSE;
bool rv;
if (pykey == Py_None) {
NativeFunction nf((DB_data*)pydb);
rv = icur->jump_back();
nf.cleanup();
} else {
SoftString key(pykey);
NativeFunction nf((DB_data*)pydb);
rv = icur->jump_back(key.ptr(), key.size());
nf.cleanup();
}
if (rv) Py_RETURN_TRUE;
if (db_raise((DB_data*)pydb)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of step.
*/
static PyObject* cur_step(Cursor_data* data) {
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) Py_RETURN_FALSE;
NativeFunction nf((DB_data*)pydb);
bool rv = icur->step();
nf.cleanup();
if (rv) Py_RETURN_TRUE;
if (db_raise((DB_data*)pydb)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of step_back.
*/
static PyObject* cur_step_back(Cursor_data* data) {
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) Py_RETURN_FALSE;
NativeFunction nf((DB_data*)pydb);
bool rv = icur->step_back();
nf.cleanup();
if (rv) Py_RETURN_TRUE;
if (db_raise((DB_data*)pydb)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of db.
*/
static PyObject* cur_db(Cursor_data* data) {
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) Py_RETURN_FALSE;
Py_INCREF(data->pydb);
return pydb;
}
/**
* Implementation of error.
*/
static PyObject* cur_error(Cursor_data* data) {
SoftCursor* cur = data->cur;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) Py_RETURN_NONE;
kc::PolyDB::Error err = icur->error();
PyObject* pyerr = PyObject_CallMethod(mod_kc, (char*)"Error",
(char*)"(IU)", err.code(), err.message());
return pyerr;
}
/**
* Implementation of __iter__.
*/
static PyObject* cur_op_iter(Cursor_data* data) {
Py_INCREF((PyObject*)data);
return (PyObject*)data;
}
/**
* Implementation of __next__.
*/
static PyObject* cur_op_iternext(Cursor_data* data) {
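// Iteration over a cursor yields the key of each record and steps to the next one; returning NULL ends the iteration.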
SoftCursor* cur = data->cur;
PyObject* pydb = data->pydb;
kc::PolyDB::Cursor* icur = cur->cur();
if (!icur) return NULL;
NativeFunction nf((DB_data*)pydb);
size_t ksiz;
char* kbuf = icur->get_key(&ksiz, true);
nf.cleanup();
PyObject* pyrv;
if (kbuf) {
pyrv = newbytes(kbuf, ksiz);
delete[] kbuf;
} else {
pyrv = NULL;
}
return pyrv;
}
/**
* Define objects of the DB class.
*/
static bool define_db() {
static PyTypeObject type_db = { PyVarObject_HEAD_INIT(NULL, 0) };
size_t zoff = offsetof(PyTypeObject, tp_name);
std::memset((char*)&type_db + zoff, 0, sizeof(type_db) - zoff);
type_db.tp_name = "kyotocabinet.DB";
type_db.tp_basicsize = sizeof(DB_data);
type_db.tp_itemsize = 0;
type_db.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE;
type_db.tp_doc = "Interface of database abstraction.";
type_db.tp_new = db_new;
type_db.tp_dealloc = (destructor)db_dealloc;
type_db.tp_init = (initproc)db_init;
type_db.tp_repr = (unaryfunc)db_repr;
type_db.tp_str = (unaryfunc)db_str;
static PyMethodDef db_methods[] = {
{ "error", (PyCFunction)db_error, METH_NOARGS,
"Get the last happened error." },
{ "open", (PyCFunction)db_open, METH_VARARGS,
"Open a database file." },
{ "close", (PyCFunction)db_close, METH_NOARGS,
"Close the database file." },
{ "accept", (PyCFunction)db_accept, METH_VARARGS,
"Accept a visitor to a record." },
{ "accept_bulk", (PyCFunction)db_accept_bulk, METH_VARARGS,
"Accept a visitor to multiple records at once." },
{ "iterate", (PyCFunction)db_iterate, METH_VARARGS,
"Iterate to accept a visitor for each record." },
{ "set", (PyCFunction)db_set, METH_VARARGS,
"Set the value of a record." },
{ "add", (PyCFunction)db_add, METH_VARARGS,
"Add a record." },
{ "replace", (PyCFunction)db_replace, METH_VARARGS,
"Replace the value of a record." },
{ "append", (PyCFunction)db_append, METH_VARARGS,
"Append the value of a record." },
{ "increment", (PyCFunction)db_increment, METH_VARARGS,
"Add a number to the numeric integer value of a record." },
{ "increment_double", (PyCFunction)db_increment_double, METH_VARARGS,
"Add a number to the numeric double value of a record." },
{ "cas", (PyCFunction)db_cas, METH_VARARGS,
"Perform compare-and-swap." },
{ "remove", (PyCFunction)db_remove, METH_VARARGS,
"Remove a record." },
{ "get", (PyCFunction)db_get, METH_VARARGS,
"Retrieve the value of a record." },
{ "get_str", (PyCFunction)db_get_str, METH_VARARGS,
"Retrieve the value of a record." },
{ "check", (PyCFunction)db_check, METH_VARARGS,
"Check the existence of a record." },
{ "seize", (PyCFunction)db_seize, METH_VARARGS,
"Retrieve the value of a record and remove it atomically." },
{ "get_seize", (PyCFunction)db_seize_str, METH_VARARGS,
"Retrieve the value of a record and remove it atomically." },
{ "set_bulk", (PyCFunction)db_set_bulk, METH_VARARGS,
"Store records at once." },
{ "remove_bulk", (PyCFunction)db_remove_bulk, METH_VARARGS,
"Remove records at once." },
{ "get_bulk", (PyCFunction)db_get_bulk, METH_VARARGS,
"Retrieve records at once." },
{ "get_bulk_str", (PyCFunction)db_get_bulk_str, METH_VARARGS,
"Retrieve records at once." },
{ "clear", (PyCFunction)db_clear, METH_NOARGS,
"Remove all records." },
{ "synchronize", (PyCFunction)db_synchronize, METH_VARARGS,
"Synchronize updated contents with the file and the device." },
{ "occupy", (PyCFunction)db_occupy, METH_VARARGS,
"Occupy database by locking and do something meanwhile." },
{ "copy", (PyCFunction)db_copy, METH_VARARGS,
"Create a copy of the database file." },
{ "begin_transaction", (PyCFunction)db_begin_transaction, METH_VARARGS,
"Begin transaction." },
{ "end_transaction", (PyCFunction)db_end_transaction, METH_VARARGS,
"End transaction." },
{ "transaction", (PyCFunction)db_transaction, METH_VARARGS,
"Perform entire transaction by a functor." },
{ "dump_snapshot", (PyCFunction)db_dump_snapshot, METH_VARARGS,
"Dump records into a snapshot file." },
{ "load_snapshot", (PyCFunction)db_load_snapshot, METH_VARARGS,
"Load records from a snapshot file." },
{ "count", (PyCFunction)db_count, METH_NOARGS,
"Get the number of records." },
{ "size", (PyCFunction)db_size, METH_NOARGS,
"Get the size of the database file." },
{ "path", (PyCFunction)db_path, METH_NOARGS,
"Get the path of the database file." },
{ "status", (PyCFunction)db_status, METH_NOARGS,
"Get the miscellaneous status information." },
{ "match_prefix", (PyCFunction)db_match_prefix, METH_VARARGS,
"Get keys matching a prefix string." },
{ "match_regex", (PyCFunction)db_match_regex, METH_VARARGS,
"Get keys matching a regular expression string." },
{ "match_similar", (PyCFunction)db_match_similar, METH_VARARGS,
"Get keys similar to a string in terms of the levenshtein distance." },
{ "merge", (PyCFunction)db_merge, METH_VARARGS,
"Merge records from other databases." },
{ "cursor", (PyCFunction)db_cursor, METH_NOARGS,
"Create a cursor object." },
{ "cursor_process", (PyCFunction)db_cursor_process, METH_VARARGS,
"Process a cursor by the block parameter." },
{ "shift", (PyCFunction)db_shift, METH_NOARGS,
"Remove the first record." },
{ "shift_str", (PyCFunction)db_shift_str, METH_NOARGS,
"Remove the first record." },
{ "tune_exception_rule", (PyCFunction)db_tune_exception_rule, METH_VARARGS,
"Set the rule about throwing exception." },
{ "process", (PyCFunction)db_process, METH_VARARGS | METH_CLASS,
"Process a database by a functor" },
{ NULL, NULL, 0, NULL }
};
type_db.tp_methods = db_methods;
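// The mapping protocol lets len(db), db[key] and db[key] = value delegate to the record count, retrieval and storing operations.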
static PyMappingMethods type_db_map;
std::memset(&type_db_map, 0, sizeof(type_db_map));
type_db_map.mp_length = (lenfunc)db_op_len;
type_db_map.mp_subscript = (binaryfunc)db_op_getitem;
type_db_map.mp_ass_subscript = (objobjargproc)db_op_setitem;
type_db.tp_as_mapping = &type_db_map;
type_db.tp_iter = (getiterfunc)db_op_iter;
if (PyType_Ready(&type_db) != 0) return false;
cls_db = (PyObject*)&type_db;
if (!setconstuint32(cls_db, "GEXCEPTIONAL", GEXCEPTIONAL)) return false;
if (!setconstuint32(cls_db, "GCONCURRENT", GCONCURRENT)) return false;
if (!setconstuint32(cls_db, "OREADER", kc::PolyDB::OREADER)) return false;
if (!setconstuint32(cls_db, "OWRITER", kc::PolyDB::OWRITER)) return false;
if (!setconstuint32(cls_db, "OCREATE", kc::PolyDB::OCREATE)) return false;
if (!setconstuint32(cls_db, "OTRUNCATE", kc::PolyDB::OTRUNCATE)) return false;
if (!setconstuint32(cls_db, "OAUTOTRAN", kc::PolyDB::OAUTOTRAN)) return false;
if (!setconstuint32(cls_db, "OAUTOSYNC", kc::PolyDB::OAUTOSYNC)) return false;
if (!setconstuint32(cls_db, "ONOLOCK", kc::PolyDB::ONOLOCK)) return false;
if (!setconstuint32(cls_db, "OTRYLOCK", kc::PolyDB::OTRYLOCK)) return false;
if (!setconstuint32(cls_db, "ONOREPAIR", kc::PolyDB::ONOREPAIR)) return false;
if (!setconstuint32(cls_db, "MSET", kc::PolyDB::MSET)) return false;
if (!setconstuint32(cls_db, "MADD", kc::PolyDB::MADD)) return false;
if (!setconstuint32(cls_db, "MREPLACE", kc::PolyDB::MREPLACE)) return false;
if (!setconstuint32(cls_db, "MAPPEND", kc::PolyDB::MAPPEND)) return false;
Py_INCREF(cls_db);
if (PyModule_AddObject(mod_kc, "DB", cls_db) != 0) return false;
return true;
}
/**
* Implementation of new.
*/
static PyObject* db_new(PyTypeObject* pytype, PyObject* pyargs, PyObject* pykwds) {
DB_data* data = (DB_data*)pytype->tp_alloc(pytype, 0);
if (!data) return NULL;
data->db = NULL;
data->exbits = 0;
data->pylock = NULL;
return (PyObject*)data;
}
/**
* Implementation of dealloc.
*/
static void db_dealloc(DB_data* data) {
kc::PolyDB* db = data->db;
PyObject* pylock = data->pylock;
Py_DECREF(pylock);
delete db;
Py_TYPE(data)->tp_free((PyObject*)data);
}
/**
* Raise the exception of an error code.
*/
static bool db_raise(DB_data* data) {
if (data->exbits == 0) return false;
kc::PolyDB::Error err = data->db->error();
uint32_t code = err.code();
if (data->exbits & (1 << code)) {
PyErr_Format(cls_err_children[code], "%u: %s", code, err.message());
return true;
}
return false;
}
/**
* Implementation of init.
*/
static int db_init(DB_data* data, PyObject* pyargs, PyObject* pykwds) {
int32_t argc = PyTuple_Size(pyargs);
PyObject* pyopts = Py_None;
if (argc > 0) pyopts = PyTuple_GetItem(pyargs, 0);
data->db = new kc::PolyDB();
uint32_t opts = PyLong_Check(pyopts) ? (uint32_t)PyLong_AsLong(pyopts) : 0;
if (opts & GEXCEPTIONAL) {
uint32_t exbits = 0;
exbits |= 1 << kc::PolyDB::Error::NOIMPL;
exbits |= 1 << kc::PolyDB::Error::INVALID;
exbits |= 1 << kc::PolyDB::Error::NOREPOS;
exbits |= 1 << kc::PolyDB::Error::NOPERM;
exbits |= 1 << kc::PolyDB::Error::BROKEN;
exbits |= 1 << kc::PolyDB::Error::SYSTEM;
exbits |= 1 << kc::PolyDB::Error::MISC;
data->exbits = exbits;
} else {
data->exbits = 0;
}
if (opts & GCONCURRENT) {
Py_INCREF(Py_None);
data->pylock = Py_None;
} else {
data->pylock = PyObject_CallMethod(mod_th, (char*)"Lock", NULL);
}
return 0;
}
/**
* Implementation of repr.
*/
static PyObject* db_repr(DB_data* data) {
kc::PolyDB* db = data->db;
std::string path = db->path();
if (path.size() < 1) path = "(None)";
std::string str;
NativeFunction nf(data);
kc::strprintf(&str, "",
path.c_str(), (long long)db->count(), (long long)db->size());
nf.cleanup();
return PyUnicode_FromString(str.c_str());
}
/**
* Implementation of str.
*/
static PyObject* db_str(DB_data* data) {
kc::PolyDB* db = data->db;
std::string path = db->path();
if (path.size() < 1) path = "(None)";
std::string str;
NativeFunction nf(data);
kc::strprintf(&str, "%s: %lld: %lld",
path.c_str(), (long long)db->count(), (long long)db->size());
nf.cleanup();
return PyUnicode_FromString(str.c_str());
}
/**
* Implementation of error.
*/
static PyObject* db_error(DB_data* data) {
kc::PolyDB* db = data->db;
kc::PolyDB::Error err = db->error();
PyObject* pyerr = PyObject_CallMethod(mod_kc, (char*)"Error",
(char*)"(IU)", err.code(), err.message());
return pyerr;
}
/**
* Implementation of open.
*/
static PyObject* db_open(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc > 2) {
throwinvarg();
return NULL;
}
PyObject* pypath = Py_None;
if (argc > 0) pypath = PyTuple_GetItem(pyargs, 0);
PyObject* pymode = Py_None;
if (argc > 1) pymode = PyTuple_GetItem(pyargs, 1);
kc::PolyDB* db = data->db;
SoftString path(pypath);
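// An empty path falls back to ":", a special name of the polymorphic database that selects an on-memory database.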
const char* tpath = path.size() > 0 ? path.ptr() : ":";
uint32_t mode = PyLong_Check(pymode) ? (uint32_t)PyLong_AsLong(pymode) :
kc::PolyDB::OWRITER | kc::PolyDB::OCREATE;
NativeFunction nf(data);
bool rv = db->open(tpath, mode);
nf.cleanup();
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of close.
*/
static PyObject* db_close(DB_data* data) {
kc::PolyDB* db = data->db;
NativeFunction nf(data);
g_curbur.sweap();
bool rv = db->close();
nf.cleanup();
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of accept.
*/
static PyObject* db_accept(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc < 2 || argc > 3) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
if (data->pylock == Py_None) {
db->set_error(kc::PolyDB::Error::INVALID, "unsupported method");
if (db_raise(data)) return NULL;
Py_RETURN_NONE;
}
PyObject* pykey = PyTuple_GetItem(pyargs, 0);
SoftString key(pykey);
PyObject* pyvisitor = PyTuple_GetItem(pyargs, 1);
PyObject* pywritable = Py_None;
if (argc > 2) pywritable = PyTuple_GetItem(pyargs, 2);
bool writable = pywritable == Py_None || PyObject_IsTrue(pywritable);
bool rv;
if (PyObject_IsInstance(pyvisitor, cls_vis) || PyCallable_Check(pyvisitor)) {
SoftVisitor visitor(pyvisitor, writable);
NativeFunction nf(data);
rv = db->accept(key.ptr(), key.size(), &visitor, writable);
nf.cleanup();
PyObject* pyextype, *pyexvalue, *pyextrace;
if (visitor.exception(&pyextype, &pyexvalue, &pyextrace)) {
PyErr_SetObject(pyextype, pyexvalue);
return NULL;
}
} else {
throwinvarg();
return NULL;
}
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of accept_bulk.
*/
static PyObject* db_accept_bulk(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc < 2 || argc > 3) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
if (data->pylock == Py_None) {
db->set_error(kc::PolyDB::Error::INVALID, "unsupported method");
if (db_raise(data)) return NULL;
Py_RETURN_NONE;
}
PyObject* pykeys = PyTuple_GetItem(pyargs, 0);
if (!PySequence_Check(pykeys)) {
throwinvarg();
return NULL;
}
StringVector keys;
int32_t knum = PySequence_Length(pykeys);
for (int32_t i = 0; i < knum; i++) {
PyObject* pykey = PySequence_GetItem(pykeys, i);
SoftString key(pykey);
keys.push_back(std::string(key.ptr(), key.size()));
Py_DECREF(pykey);
}
PyObject* pyvisitor = PyTuple_GetItem(pyargs, 1);
PyObject* pywritable = Py_None;
if (argc > 2) pywritable = PyTuple_GetItem(pyargs, 2);
bool writable = pywritable == Py_None || PyObject_IsTrue(pywritable);
bool rv;
if (PyObject_IsInstance(pyvisitor, cls_vis) || PyCallable_Check(pyvisitor)) {
SoftVisitor visitor(pyvisitor, writable);
NativeFunction nf(data);
rv = db->accept_bulk(keys, &visitor, writable);
nf.cleanup();
PyObject* pyextype, *pyexvalue, *pyextrace;
if (visitor.exception(&pyextype, &pyexvalue, &pyextrace)) {
PyErr_SetObject(pyextype, pyexvalue);
return NULL;
}
} else {
throwinvarg();
return NULL;
}
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of iterate.
*/
static PyObject* db_iterate(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc < 1 || argc > 2) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
if (data->pylock == Py_None) {
db->set_error(kc::PolyDB::Error::INVALID, "unsupported method");
if (db_raise(data)) return NULL;
Py_RETURN_NONE;
}
PyObject* pyvisitor = PyTuple_GetItem(pyargs, 0);
PyObject* pywritable = Py_None;
if (argc > 1) pywritable = PyTuple_GetItem(pyargs, 1);
bool writable = pywritable == Py_None || PyObject_IsTrue(pywritable);
bool rv;
if (PyObject_IsInstance(pyvisitor, cls_vis) || PyCallable_Check(pyvisitor)) {
SoftVisitor visitor(pyvisitor, writable);
NativeFunction nf(data);
rv = db->iterate(&visitor, writable);
nf.cleanup();
PyObject* pyextype, *pyexvalue, *pyextrace;
if (visitor.exception(&pyextype, &pyexvalue, &pyextrace)) {
PyErr_SetObject(pyextype, pyexvalue);
return NULL;
}
} else {
throwinvarg();
return NULL;
}
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of set.
*/
static PyObject* db_set(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 2) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pykey = PyTuple_GetItem(pyargs, 0);
PyObject* pyvalue = PyTuple_GetItem(pyargs, 1);
SoftString key(pykey);
SoftString value(pyvalue);
NativeFunction nf(data);
bool rv = db->set(key.ptr(), key.size(), value.ptr(), value.size());
nf.cleanup();
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of add.
*/
static PyObject* db_add(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 2) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pykey = PyTuple_GetItem(pyargs, 0);
PyObject* pyvalue = PyTuple_GetItem(pyargs, 1);
SoftString key(pykey);
SoftString value(pyvalue);
NativeFunction nf(data);
bool rv = db->add(key.ptr(), key.size(), value.ptr(), value.size());
nf.cleanup();
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of replace.
*/
static PyObject* db_replace(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 2) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pykey = PyTuple_GetItem(pyargs, 0);
PyObject* pyvalue = PyTuple_GetItem(pyargs, 1);
SoftString key(pykey);
SoftString value(pyvalue);
NativeFunction nf(data);
bool rv = db->replace(key.ptr(), key.size(), value.ptr(), value.size());
nf.cleanup();
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of append.
*/
static PyObject* db_append(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 2) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pykey = PyTuple_GetItem(pyargs, 0);
PyObject* pyvalue = PyTuple_GetItem(pyargs, 1);
SoftString key(pykey);
SoftString value(pyvalue);
NativeFunction nf(data);
bool rv = db->append(key.ptr(), key.size(), value.ptr(), value.size());
nf.cleanup();
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of increment.
*/
static PyObject* db_increment(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc < 1 || argc > 3) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pykey = PyTuple_GetItem(pyargs, 0);
SoftString key(pykey);
PyObject* pynum = Py_None;
if (argc > 1) pynum = PyTuple_GetItem(pyargs, 1);
int64_t num = pynum == Py_None ? 0 : pyatoi(pynum);
PyObject* pyorig = Py_None;
if (argc > 2) pyorig = PyTuple_GetItem(pyargs, 2);
int64_t orig = pyorig == Py_None ? 0 : pyatoi(pyorig);
PyObject* pyrv;
NativeFunction nf(data);
num = db->increment(key.ptr(), key.size(), num, orig);
nf.cleanup();
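// kc::INT64MIN is the failure sentinel of increment(), so it is mapped to None.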
if (num == kc::INT64MIN) {
if (db_raise(data)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
} else {
pyrv = PyLong_FromLongLong(num);
}
return pyrv;
}
/**
* Implementation of increment_double.
*/
static PyObject* db_increment_double(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc < 1 || argc > 3) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pykey = PyTuple_GetItem(pyargs, 0);
SoftString key(pykey);
PyObject* pynum = Py_None;
if (argc > 1) pynum = PyTuple_GetItem(pyargs, 1);
double num = pynum == Py_None ? 0 : pyatof(pynum);
PyObject* pyorig = Py_None;
if (argc > 2) pyorig = PyTuple_GetItem(pyargs, 2);
double orig = pyorig == Py_None ? 0 : pyatof(pyorig);
PyObject* pyrv;
NativeFunction nf(data);
num = db->increment_double(key.ptr(), key.size(), num, orig);
nf.cleanup();
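// A NaN result signals failure of increment_double(), so it is mapped to None.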
if (kc::chknan(num)) {
if (db_raise(data)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
} else {
pyrv = PyFloat_FromDouble(num);
}
return pyrv;
}
/**
* Implementation of cas.
*/
static PyObject* db_cas(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 3) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pykey = PyTuple_GetItem(pyargs, 0);
SoftString key(pykey);
PyObject* pyoval = PyTuple_GetItem(pyargs, 1);
SoftString oval(pyoval);
const char* ovbuf = NULL;
size_t ovsiz = 0;
if (pyoval != Py_None) {
ovbuf = oval.ptr();
ovsiz = oval.size();
}
PyObject* pynval = PyTuple_GetItem(pyargs, 2);
SoftString nval(pynval);
const char* nvbuf = NULL;
size_t nvsiz = 0;
if (pynval != Py_None) {
nvbuf = nval.ptr();
nvsiz = nval.size();
}
NativeFunction nf(data);
bool rv = db->cas(key.ptr(), key.size(), ovbuf, ovsiz, nvbuf, nvsiz);
nf.cleanup();
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of remove.
*/
static PyObject* db_remove(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pykey = PyTuple_GetItem(pyargs, 0);
SoftString key(pykey);
NativeFunction nf(data);
bool rv = db->remove(key.ptr(), key.size());
nf.cleanup();
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of get.
*/
static PyObject* db_get(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pykey = PyTuple_GetItem(pyargs, 0);
SoftString key(pykey);
NativeFunction nf(data);
size_t vsiz;
char* vbuf = db->get(key.ptr(), key.size(), &vsiz);
nf.cleanup();
PyObject* pyrv;
if (vbuf) {
pyrv = newbytes(vbuf, vsiz);
delete[] vbuf;
} else {
if (db_raise(data)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
}
return pyrv;
}
/**
* Implementation of get_str.
*/
static PyObject* db_get_str(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pykey = PyTuple_GetItem(pyargs, 0);
SoftString key(pykey);
NativeFunction nf(data);
size_t vsiz;
char* vbuf = db->get(key.ptr(), key.size(), &vsiz);
nf.cleanup();
PyObject* pyrv;
if (vbuf) {
pyrv = newstring(vbuf);
delete[] vbuf;
} else {
if (db_raise(data)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
}
return pyrv;
}
/**
* Implementation of check.
*/
static PyObject* db_check(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pykey = PyTuple_GetItem(pyargs, 0);
SoftString key(pykey);
NativeFunction nf(data);
int32_t vsiz = db->check(key.ptr(), key.size());
nf.cleanup();
if (vsiz < 0 && db_raise(data)) return NULL;
return PyLong_FromLongLong(vsiz);
}
/**
* Implementation of seize.
*/
static PyObject* db_seize(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pykey = PyTuple_GetItem(pyargs, 0);
SoftString key(pykey);
NativeFunction nf(data);
size_t vsiz;
char* vbuf = db->seize(key.ptr(), key.size(), &vsiz);
nf.cleanup();
PyObject* pyrv;
if (vbuf) {
pyrv = newbytes(vbuf, vsiz);
delete[] vbuf;
} else {
if (db_raise(data)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
}
return pyrv;
}
/**
* Implementation of seize_str.
*/
static PyObject* db_seize_str(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pykey = PyTuple_GetItem(pyargs, 0);
SoftString key(pykey);
NativeFunction nf(data);
size_t vsiz;
char* vbuf = db->seize(key.ptr(), key.size(), &vsiz);
nf.cleanup();
PyObject* pyrv;
if (vbuf) {
pyrv = newstring(vbuf);
delete[] vbuf;
} else {
if (db_raise(data)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
}
return pyrv;
}
/**
* Implementation of set_bulk.
*/
static PyObject* db_set_bulk(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc < 1 || argc > 2) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pyrecs = PyTuple_GetItem(pyargs, 0);
if (!PyMapping_Check(pyrecs)) {
throwinvarg();
return NULL;
}
StringMap recs;
PyObject* pyitems = PyMapping_Items(pyrecs);
int32_t rnum = PySequence_Length(pyitems);
for (int32_t i = 0; i < rnum; i++) {
PyObject* pyitem = PySequence_GetItem(pyitems, i);
if (PyTuple_Size(pyitem) == 2) {
PyObject* pykey = PyTuple_GetItem(pyitem, 0);
PyObject* pyvalue = PyTuple_GetItem(pyitem, 1);
SoftString key(pykey);
SoftString value(pyvalue);
recs[std::string(key.ptr(), key.size())] = std::string(value.ptr(), value.size());
}
Py_DECREF(pyitem);
}
Py_DECREF(pyitems);
PyObject* pyatomic = Py_True;
if (argc > 1) pyatomic = PyTuple_GetItem(pyargs, 1);
bool atomic = PyObject_IsTrue(pyatomic);
NativeFunction nf(data);
int64_t rv = db->set_bulk(recs, atomic);
nf.cleanup();
if (rv < 0 && db_raise(data)) return NULL;
return PyLong_FromLongLong(rv);
}
/**
* Implementation of remove_bulk.
*/
static PyObject* db_remove_bulk(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc < 1 || argc > 2) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pykeys = PyTuple_GetItem(pyargs, 0);
if (!PySequence_Check(pykeys)) {
throwinvarg();
return NULL;
}
StringVector keys;
int32_t knum = PySequence_Length(pykeys);
for (int32_t i = 0; i < knum; i++) {
PyObject* pykey = PySequence_GetItem(pykeys, i);
SoftString key(pykey);
keys.push_back(std::string(key.ptr(), key.size()));
Py_DECREF(pykey);
}
PyObject* pyatomic = Py_True;
if (argc > 1) pyatomic = PyTuple_GetItem(pyargs, 1);
bool atomic = PyObject_IsTrue(pyatomic);
NativeFunction nf(data);
int64_t rv = db->remove_bulk(keys, atomic);
nf.cleanup();
if (rv < 0 && db_raise(data)) return NULL;
return PyLong_FromLongLong(rv);
}
/**
* Implementation of get_bulk.
*/
static PyObject* db_get_bulk(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc < 1 || argc > 2) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pykeys = PyTuple_GetItem(pyargs, 0);
if (!PySequence_Check(pykeys)) {
throwinvarg();
return NULL;
}
StringVector keys;
int32_t knum = PySequence_Length(pykeys);
for (int32_t i = 0; i < knum; i++) {
PyObject* pykey = PySequence_GetItem(pykeys, i);
SoftString key(pykey);
keys.push_back(std::string(key.ptr(), key.size()));
Py_DECREF(pykey);
}
PyObject* pyatomic = Py_True;
if (argc > 1) pyatomic = PyTuple_GetItem(pyargs, 1);
bool atomic = PyObject_IsTrue(pyatomic);
NativeFunction nf(data);
StringMap recs;
int64_t rv = db->get_bulk(keys, &recs, atomic);
nf.cleanup();
if (rv < 0) {
if (db_raise(data)) return NULL;
Py_RETURN_NONE;
}
PyObject* pyrecs = PyDict_New();
StringMap::const_iterator it = recs.begin();
StringMap::const_iterator itend = recs.end();
while (it != itend) {
PyObject* pykey = newbytes(it->first.data(), it->first.size());
PyObject* pyvalue = newbytes(it->second.data(), it->second.size());
PyDict_SetItem(pyrecs, pykey, pyvalue);
Py_DECREF(pyvalue);
Py_DECREF(pykey);
it++;
}
return pyrecs;
}
/**
* Implementation of get_bulk_str.
*/
static PyObject* db_get_bulk_str(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc < 1 || argc > 2) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pykeys = PyTuple_GetItem(pyargs, 0);
if (!PySequence_Check(pykeys)) {
throwinvarg();
return NULL;
}
StringVector keys;
int32_t knum = PySequence_Length(pykeys);
for (int32_t i = 0; i < knum; i++) {
PyObject* pykey = PySequence_GetItem(pykeys, i);
SoftString key(pykey);
keys.push_back(std::string(key.ptr(), key.size()));
Py_DECREF(pykey);
}
PyObject* pyatomic = Py_True;
if (argc > 1) pyatomic = PyTuple_GetItem(pyargs, 1);
bool atomic = PyObject_IsTrue(pyatomic);
NativeFunction nf(data);
StringMap recs;
int64_t rv = db->get_bulk(keys, &recs, atomic);
nf.cleanup();
if (rv < 0) {
if (db_raise(data)) return NULL;
Py_RETURN_NONE;
}
return maptopymap(&recs);
}
/**
* Implementation of clear.
*/
static PyObject* db_clear(DB_data* data) {
kc::PolyDB* db = data->db;
NativeFunction nf(data);
bool rv = db->clear();
nf.cleanup();
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of synchronize.
*/
static PyObject* db_synchronize(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc > 2) {
throwinvarg();
return NULL;
}
PyObject* pyhard = Py_None;
if (argc > 0) pyhard = PyTuple_GetItem(pyargs, 0);
PyObject* pyproc = Py_None;
if (argc > 1) pyproc = PyTuple_GetItem(pyargs, 1);
kc::PolyDB* db = data->db;
bool hard = PyObject_IsTrue(pyhard);
bool rv;
if (PyObject_IsInstance(pyproc, cls_fproc) || PyCallable_Check(pyproc)) {
if (data->pylock == Py_None) {
db->set_error(kc::PolyDB::Error::INVALID, "unsupported method");
if (db_raise(data)) return NULL;
Py_RETURN_NONE;
}
SoftFileProcessor proc(pyproc);
NativeFunction nf(data);
rv = db->synchronize(hard, &proc);
nf.cleanup();
PyObject* pyextype, *pyexvalue, *pyextrace;
if (proc.exception(&pyextype, &pyexvalue, &pyextrace)) {
PyErr_SetObject(pyextype, pyexvalue);
return NULL;
}
} else {
NativeFunction nf(data);
rv = db->synchronize(hard, NULL);
nf.cleanup();
}
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of occupy.
*/
static PyObject* db_occupy(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc > 2) {
throwinvarg();
return NULL;
}
PyObject* pywritable = Py_None;
if (argc > 0) pywritable = PyTuple_GetItem(pyargs, 0);
PyObject* pyproc = Py_None;
if (argc > 1) pyproc = PyTuple_GetItem(pyargs, 1);
kc::PolyDB* db = data->db;
bool writable = PyObject_IsTrue(pywritable);
bool rv;
if (PyObject_IsInstance(pyproc, cls_fproc) || PyCallable_Check(pyproc)) {
if (data->pylock == Py_None) {
db->set_error(kc::PolyDB::Error::INVALID, "unsupported method");
if (db_raise(data)) return NULL;
Py_RETURN_NONE;
}
SoftFileProcessor proc(pyproc);
NativeFunction nf(data);
rv = db->occupy(writable, &proc);
nf.cleanup();
PyObject* pyextype, *pyexvalue, *pyextrace;
if (proc.exception(&pyextype, &pyexvalue, &pyextrace)) {
PyErr_SetObject(pyextype, pyexvalue);
return NULL;
}
} else {
NativeFunction nf(data);
rv = db->occupy(writable, NULL);
nf.cleanup();
}
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of copy.
*/
static PyObject* db_copy(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return NULL;
}
PyObject* pydest = PyTuple_GetItem(pyargs, 0);
kc::PolyDB* db = data->db;
SoftString dest(pydest);
NativeFunction nf(data);
bool rv = db->copy(dest.ptr());
nf.cleanup();
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of begin_transaction.
*/
static PyObject* db_begin_transaction(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc > 1) {
throwinvarg();
return NULL;
}
PyObject* pyhard = Py_None;
if (argc > 0) pyhard = PyTuple_GetItem(pyargs, 0);
kc::PolyDB* db = data->db;
bool hard = PyObject_IsTrue(pyhard);
bool err = false;
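// keep retrying while begin_transaction_try fails with a LOGICAL error (typically because another transaction is in progress), yielding the thread between attempts; any other error aborts the loop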
while (true) {
NativeFunction nf(data);
bool rv = db->begin_transaction_try(hard);
nf.cleanup();
if (rv) break;
if (db->error() != kc::PolyDB::Error::LOGIC) {
err = true;
break;
}
threadyield();
}
if (err) {
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
Py_RETURN_TRUE;
}
/**
* Implementation of end_transaction.
*/
static PyObject* db_end_transaction(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc > 1) {
throwinvarg();
return NULL;
}
PyObject* pycommit = Py_None;
if (argc > 0) pycommit = PyTuple_GetItem(pyargs, 0);
kc::PolyDB* db = data->db;
bool commit = pycommit == Py_None || PyObject_IsTrue(pycommit);
NativeFunction nf(data);
bool rv = db->end_transaction(commit);
nf.cleanup();
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of transaction.
*/
static PyObject* db_transaction(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc < 1 || argc > 2) {
throwinvarg();
return NULL;
}
PyObject* pyproc = PyTuple_GetItem(pyargs, 0);
PyObject* pyhard = Py_None;
if (argc > 1) pyhard = PyTuple_GetItem(pyargs, 1);
PyObject* pyrv = PyObject_CallMethod((PyObject*)data, (char*)"begin_transaction",
(char*)"(O)", pyhard);
if (!pyrv) return NULL;
if (!PyObject_IsTrue(pyrv)) {
Py_DECREF(pyrv);
Py_RETURN_FALSE;
}
Py_DECREF(pyrv);
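// invoke the user callback; a truthy return value commits the transaction, anything else (or an exception) aborts it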
pyrv = PyObject_CallFunction(pyproc, NULL);
bool commit = false;
if (pyrv) commit = PyObject_IsTrue(pyrv);
Py_XDECREF(pyrv);
pyrv = PyObject_CallMethod((PyObject*)data, (char*)"end_transaction",
(char*)"(O)", commit ? Py_True : Py_False);
if (!pyrv) return NULL;
if (!PyObject_IsTrue(pyrv)) {
Py_DECREF(pyrv);
Py_RETURN_FALSE;
}
Py_DECREF(pyrv);
Py_RETURN_TRUE;
}
/**
* Implementation of dump_snapshot.
*/
static PyObject* db_dump_snapshot(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return NULL;
}
PyObject* pydest = PyTuple_GetItem(pyargs, 0);
kc::PolyDB* db = data->db;
SoftString dest(pydest);
NativeFunction nf(data);
bool rv = db->dump_snapshot(dest.ptr());
nf.cleanup();
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of load_snapshot.
*/
static PyObject* db_load_snapshot(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return NULL;
}
PyObject* pysrc = PyTuple_GetItem(pyargs, 0);
kc::PolyDB* db = data->db;
SoftString src(pysrc);
NativeFunction nf(data);
bool rv = db->load_snapshot(src.ptr());
nf.cleanup();
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of count.
*/
static PyObject* db_count(DB_data* data) {
kc::PolyDB* db = data->db;
NativeFunction nf(data);
int64_t count = db->count();
nf.cleanup();
if (count < 0 && db_raise(data)) return NULL;
return PyLong_FromLongLong(count);
}
/**
* Implementation of size.
*/
static PyObject* db_size(DB_data* data) {
kc::PolyDB* db = data->db;
NativeFunction nf(data);
int64_t size = db->size();
nf.cleanup();
if (size < 0 && db_raise(data)) return NULL;
return PyLong_FromLongLong(size);
}
/**
* Implementation of path.
*/
static PyObject* db_path(DB_data* data) {
kc::PolyDB* db = data->db;
NativeFunction nf(data);
const std::string& path = db->path();
nf.cleanup();
if (path.size() < 1) {
if (db_raise(data)) return NULL;
Py_RETURN_NONE;
}
return PyUnicode_FromString(path.c_str());
}
/**
* Implementation of status.
*/
static PyObject* db_status(DB_data* data) {
kc::PolyDB* db = data->db;
StringMap status;
NativeFunction nf(data);
bool rv = db->status(&status);
nf.cleanup();
if (rv) return maptopymap(&status);
if (db_raise(data)) return NULL;
Py_RETURN_NONE;
}
/**
* Implementation of match_prefix.
*/
static PyObject* db_match_prefix(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc < 1 || argc > 2) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pyprefix = PyTuple_GetItem(pyargs, 0);
SoftString prefix(pyprefix);
PyObject* pymax = Py_None;
if (argc > 1) pymax = PyTuple_GetItem(pyargs, 1);
int64_t max = pymax == Py_None ? -1 : pyatoi(pymax);
PyObject* pyrv;
NativeFunction nf(data);
StringVector keys;
max = db->match_prefix(std::string(prefix.ptr(), prefix.size()), &keys, max);
nf.cleanup();
if (max >= 0) {
pyrv = vectortopylist(&keys);
} else {
if (db_raise(data)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
}
return pyrv;
}
/**
* Implementation of match_regex.
*/
static PyObject* db_match_regex(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc < 1 || argc > 2) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pyregex = PyTuple_GetItem(pyargs, 0);
SoftString regex(pyregex);
PyObject* pymax = Py_None;
if (argc > 1) pymax = PyTuple_GetItem(pyargs, 1);
int64_t max = pymax == Py_None ? -1 : pyatoi(pymax);
PyObject* pyrv;
NativeFunction nf(data);
StringVector keys;
max = db->match_regex(std::string(regex.ptr(), regex.size()), &keys, max);
nf.cleanup();
if (max >= 0) {
pyrv = vectortopylist(&keys);
} else {
if (db_raise(data)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
}
return pyrv;
}
/**
* Implementation of match_similar.
*/
static PyObject* db_match_similar(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc < 1 || argc > 4) {
throwinvarg();
return NULL;
}
kc::PolyDB* db = data->db;
PyObject* pyorigin = PyTuple_GetItem(pyargs, 0);
SoftString origin(pyorigin);
PyObject* pyrange = Py_None;
if (argc > 1) pyrange = PyTuple_GetItem(pyargs, 1);
int64_t range = pyrange == Py_None ? 1 : pyatoi(pyrange);
PyObject* pyutf = Py_None;
if (argc > 2) pyutf = PyTuple_GetItem(pyargs, 2);
bool utf = PyObject_IsTrue(pyutf);
PyObject* pymax = Py_None;
if (argc > 3) pymax = PyTuple_GetItem(pyargs, 3);
int64_t max = pymax == Py_None ? -1 : pyatoi(pymax);
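// range is the maximum allowed edit distance, utf makes the distance UTF-8 aware, and a negative max means no limit on the number of matched keys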
PyObject* pyrv;
NativeFunction nf(data);
StringVector keys;
max = db->match_similar(std::string(origin.ptr(), origin.size()), range, utf, &keys, max);
nf.cleanup();
if (max >= 0) {
pyrv = vectortopylist(&keys);
} else {
if (db_raise(data)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
}
return pyrv;
}
/**
* Implementation of merge.
*/
static PyObject* db_merge(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc < 1 || argc > 2) {
throwinvarg();
return NULL;
}
PyObject* pysrcary = PyTuple_GetItem(pyargs, 0);
if (!PySequence_Check(pysrcary)) {
throwinvarg();
return NULL;
}
PyObject* pymode = Py_None;
if (argc > 1) pymode = PyTuple_GetItem(pyargs, 1);
uint32_t mode = PyLong_Check(pymode) ? (uint32_t)PyLong_AsLong(pymode) : kc::PolyDB::MSET;
kc::PolyDB* db = data->db;
int32_t num = PySequence_Length(pysrcary);
if (num < 1) Py_RETURN_TRUE;
kc::BasicDB** srcary = new kc::BasicDB*[num];
size_t srcnum = 0;
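// collect the native database handles of the source DB objects; sequence items that are not DB instances are skipped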
for (int32_t i = 0; i < num; i++) {
PyObject* pysrcdb = PySequence_GetItem(pysrcary, i);
if (PyObject_IsInstance(pysrcdb, cls_db)) {
DB_data* srcdbdata = (DB_data*)pysrcdb;
srcary[srcnum++] = srcdbdata->db;
}
Py_DECREF(pysrcdb);
}
NativeFunction nf(data);
bool rv = db->merge(srcary, srcnum, (kc::PolyDB::MergeMode)mode);
nf.cleanup();
delete[] srcary;
if (rv) Py_RETURN_TRUE;
if (db_raise(data)) return NULL;
Py_RETURN_FALSE;
}
/**
* Implementation of cursor.
*/
static PyObject* db_cursor(DB_data* data) {
PyObject* pycur = PyObject_CallMethod(mod_kc, (char*)"Cursor",
(char*)"(O)", (PyObject*)data);
return pycur;
}
/**
* Implementation of cursor_process.
*/
static PyObject* db_cursor_process(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return NULL;
}
PyObject* pyproc = PyTuple_GetItem(pyargs, 0);
if (!PyCallable_Check(pyproc)) {
throwinvarg();
return NULL;
}
PyObject* pycur = PyObject_CallMethod(mod_kc, (char*)"Cursor",
(char*)"(O)", (PyObject*)data);
if (!pycur) return NULL;
PyObject* pyrv = PyObject_CallFunction(pyproc, (char*)"(O)", pycur);
if (!pyrv) {
Py_DECREF(pycur);
return NULL;
}
Py_DECREF(pyrv);
pyrv = PyObject_CallMethod(pycur, (char*)"disable", NULL);
if (!pyrv) {
Py_DECREF(pycur);
return NULL;
}
Py_DECREF(pyrv);
Py_DECREF(pycur);
Py_RETURN_NONE;
}
/**
* Implementation of shift.
*/
static PyObject* db_shift(DB_data* data) {
kc::PolyDB* db = data->db;
NativeFunction nf(data);
char* kbuf;
const char* vbuf;
size_t ksiz, vsiz;
kbuf = db_shift_impl(db, &ksiz, &vbuf, &vsiz);
nf.cleanup();
PyObject* pyrv;
if (kbuf) {
pyrv = PyTuple_New(2);
PyObject* pykey = newbytes(kbuf, ksiz);
PyObject* pyvalue = newbytes(vbuf, vsiz);
PyTuple_SetItem(pyrv, 0, pykey);
PyTuple_SetItem(pyrv, 1, pyvalue);
delete[] kbuf;
} else {
if (db_raise(data)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
}
return pyrv;
}
/**
* Implementation of shift_str.
*/
static PyObject* db_shift_str(DB_data* data) {
kc::PolyDB* db = data->db;
NativeFunction nf(data);
char* kbuf;
const char* vbuf;
size_t ksiz, vsiz;
kbuf = db_shift_impl(db, &ksiz, &vbuf, &vsiz);
nf.cleanup();
PyObject* pyrv;
if (kbuf) {
pyrv = PyTuple_New(2);
PyObject* pykey = newstring(kbuf);
PyObject* pyvalue = newstring(vbuf);
PyTuple_SetItem(pyrv, 0, pykey);
PyTuple_SetItem(pyrv, 1, pyvalue);
delete[] kbuf;
} else {
if (db_raise(data)) return NULL;
Py_INCREF(Py_None);
pyrv = Py_None;
}
return pyrv;
}
/**
* Common implementation of shift and shift_str.
*/
static char* db_shift_impl(kc::PolyDB* db, size_t* ksp, const char** vbp, size_t* vsp) {
kc::PolyDB::Cursor cur(db);
if (!cur.jump()) return NULL;
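// one-shot visitor: copy the first record into a single buffer (key and value back to back, each NUL-terminated) and return REMOVE so that the record is deleted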
class VisitorImpl : public kc::PolyDB::Visitor {
public:
explicit VisitorImpl() : kbuf_(NULL), ksiz_(0), vbuf_(NULL), vsiz_(0) {}
char* rv(size_t* ksp, const char** vbp, size_t* vsp) {
*ksp = ksiz_;
*vbp = vbuf_;
*vsp = vsiz_;
return kbuf_;
}
private:
const char* visit_full(const char* kbuf, size_t ksiz,
const char* vbuf, size_t vsiz, size_t* sp) {
size_t rsiz = ksiz + 1 + vsiz + 1;
kbuf_ = new char[rsiz];
std::memcpy(kbuf_, kbuf, ksiz);
kbuf_[ksiz] = '\0';
ksiz_ = ksiz;
vbuf_ = kbuf_ + ksiz + 1;
std::memcpy(vbuf_, vbuf, vsiz);
vbuf_[vsiz] = '\0';
vsiz_ = vsiz;
return REMOVE;
}
char* kbuf_;
size_t ksiz_;
char* vbuf_;
size_t vsiz_;
} visitor;
if (!cur.accept(&visitor, true, false)) {
*ksp = 0;
*vbp = NULL;
*vsp = 0;
return NULL;
}
return visitor.rv(ksp, vbp, vsp);
}
/**
* Implementation of tune_exception_rule.
*/
static PyObject* db_tune_exception_rule(DB_data* data, PyObject* pyargs) {
int32_t argc = PyTuple_Size(pyargs);
if (argc != 1) {
throwinvarg();
return NULL;
}
PyObject* pycodes = PyTuple_GetItem(pyargs, 0);
if (!PySequence_Check(pycodes)) Py_RETURN_FALSE;
uint32_t exbits = 0;
int32_t num = PySequence_Length(pycodes);
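// fold each valid error code into a bit mask stored on the DB object; the mask decides which error codes are raised as exceptions (see db_raise)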
for (int32_t i = 0; i < num; i++) {
PyObject* pycode = PySequence_GetItem(pycodes, i);
if (PyLong_Check(pycode)) {
uint32_t code = PyLong_AsLong(pycode);
if (code <= kc::PolyDB::Error::MISC) exbits |= 1 << code;
}
Py_DECREF(pycode);
}
data->exbits = exbits;
Py_RETURN_TRUE;
}
/**
* Implementation of __len__.
*/
static Py_ssize_t db_op_len(DB_data* data) {
kc::PolyDB* db = data->db;
NativeFunction nf(data);
int64_t count = db->count();
nf.cleanup();
return count;
}
/**
* Implementation of __getitem__.
*/
static PyObject* db_op_getitem(DB_data* data, PyObject* pykey) {
kc::PolyDB* db = data->db;
SoftString key(pykey);
NativeFunction nf(data);
size_t vsiz;
char* vbuf = db->get(key.ptr(), key.size(), &vsiz);
nf.cleanup();
PyObject* pyrv;
if (vbuf) {
pyrv = newbytes(vbuf, vsiz);
delete[] vbuf;
} else {
Py_INCREF(Py_None);
pyrv = Py_None;
}
return pyrv;
}
/**
* Implementation of __setitem__.
*/
static int db_op_setitem(DB_data* data, PyObject* pykey, PyObject* pyvalue) {
kc::PolyDB* db = data->db;
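// pyvalue is NULL when the item is being deleted (del db[key]); otherwise the pair is stored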
if (pyvalue) {
SoftString key(pykey);
SoftString value(pyvalue);
NativeFunction nf(data);
bool rv = db->set(key.ptr(), key.size(), value.ptr(), value.size());
nf.cleanup();
if (rv) return 0;
throwruntime("DB::set failed");
return -1;
} else {
SoftString key(pykey);
NativeFunction nf(data);
bool rv = db->remove(key.ptr(), key.size());
nf.cleanup();
if (rv) return 0;
throwruntime("DB::remove failed");
return -1;
}
}
/**
* Implementation of __iter__.
*/
static PyObject* db_op_iter(DB_data* data) {
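// create a cursor, jump it to the first record, and return it as the iterator object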
PyObject* pycur = PyObject_CallMethod(mod_kc, (char*)"Cursor",
(char*)"(O)", (PyObject*)data);
PyObject* pyrv = PyObject_CallMethod(pycur, (char*)"jump", NULL);
if (pyrv) Py_DECREF(pyrv);
return pycur;
}
/**
* Implementation of process.
*/
static PyObject* db_process(PyObject* cls, PyObject* pyargs) {
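// class method: open a database at the given path and mode, pass it to the callable, then close it; returns None on success or the Error object of the failing open/close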
int32_t argc = PyTuple_Size(pyargs);
if (argc < 1 || argc > 4) {
throwinvarg();
return NULL;
}
PyObject* pyproc = PyTuple_GetItem(pyargs, 0);
if (!PyCallable_Check(pyproc)) {
throwinvarg();
return NULL;
}
PyObject* pypath = Py_None;
if (argc > 1) pypath = PyTuple_GetItem(pyargs, 1);
PyObject* pymode = Py_None;
if (argc > 2) pymode = PyTuple_GetItem(pyargs, 2);
PyObject* pyopts = Py_None;
if (argc > 3) pyopts = PyTuple_GetItem(pyargs, 3);
PyObject* pydb = PyObject_CallMethod(mod_kc, (char*)"DB", (char*)"(O)", pyopts);
if (!pydb) return NULL;
PyObject* pyrv = PyObject_CallMethod(pydb, (char*)"open", (char*)"(OO)", pypath, pymode);
if (!PyObject_IsTrue(pyrv)) {
Py_DECREF(pyrv);
PyObject* pyerr = PyObject_CallMethod(pydb, (char*)"error", NULL);
Py_DECREF(pydb);
return pyerr;
}
pyrv = PyObject_CallFunction(pyproc, (char*)"(O)", pydb);
if (!pyrv) {
Py_DECREF(pydb);
return NULL;
}
Py_DECREF(pyrv);
pyrv = PyObject_CallMethod(pydb, (char*)"close", NULL);
if (!pyrv) {
Py_DECREF(pydb);
return NULL;
}
if (!PyObject_IsTrue(pyrv)) {
Py_DECREF(pyrv);
PyObject* pyerr = PyObject_CallMethod(pydb, (char*)"error", NULL);
Py_DECREF(pydb);
return pyerr;
}
Py_DECREF(pyrv);
Py_DECREF(pydb);
Py_RETURN_NONE;
}
}
// END OF FILE
kyotocabinet-python-1.23/example/kcdbex2.py:
from kyotocabinet import *
import sys

# create the database object
db = DB()

# open the database
if not db.open("casket.kch", DB.OREADER):
    print("open error: " + str(db.error()), file=sys.stderr)

# define the visitor
class VisitorImpl(Visitor):
    # call back function for an existing record
    def visit_full(self, key, value):
        print("{}:{}".format(key.decode(), value.decode()))
        return self.NOP
    # call back function for an empty record space
    def visit_empty(self, key):
        print("{} is missing".format(key.decode()), file=sys.stderr)
        return self.NOP
visitor = VisitorImpl()

# retrieve a record with visitor
if not db.accept("foo", visitor, False) or \
   not db.accept("dummy", visitor, False):
    print("accept error: " + str(db.error()), file=sys.stderr)

# traverse records with visitor
if not db.iterate(visitor, False):
    print("iterate error: " + str(db.error()), file=sys.stderr)

# close the database
if not db.close():
    print("close error: " + str(db.error()), file=sys.stderr)
kyotocabinet-python-1.23/example/kcdbex3.py:
from kyotocabinet import *
import sys

# define the functor
def dbproc(db):

    # store records
    db[b'foo'] = b'step'   # bytes is fundamental
    db['bar'] = 'hop'      # string is also ok
    db[3] = 'jump'         # number is also ok

    # retrieve a record value
    print("{}".format(db['foo'].decode()))

    # update records in transaction
    def tranproc():
        db['foo'] = 2.71828
        return True
    db.transaction(tranproc)

    # multiply a record value
    def mulproc(key, value):
        return float(value) * 2
    db.accept('foo', mulproc)

    # traverse records by iterator
    for key in db:
        print("{}:{}".format(key.decode(), db[key].decode()))

    # upcase values by iterator
    def upproc(key, value):
        return value.upper()
    db.iterate(upproc)

    # traverse records by cursor
    def curproc(cur):
        cur.jump()
        def printproc(key, value):
            print("{}:{}".format(key.decode(), value.decode()))
            return Visitor.NOP
        while cur.accept(printproc):
            cur.step()
    db.cursor_process(curproc)

# process the database by the functor
DB.process(dbproc, 'casket.kch')
kyotocabinet-python-1.23/example/memsize.py:
from kyotocabinet import *
import sys
import os
import re
import time
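# return the resident set size (VmRSS) of this process in megabytes, read from /proc/self/status (Linux only)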
def memoryusage():
    for line in open("/proc/self/status"):
        line = line.rstrip()
        if line.startswith("VmRSS:"):
            line = re.sub(r".*:\s*(\d+).*", r"\1", line)
            return float(line) / 1024
    return -1

musage = memoryusage()
rnum = 1000000
if len(sys.argv) > 1:
    rnum = int(sys.argv[1])
if len(sys.argv) > 2:
    hash = DB()
    if not hash.open(sys.argv[2], DB.OWRITER | DB.OCREATE | DB.OTRUNCATE):
        raise RuntimeError(hash.error())
else:
    hash = {}
stime = time.time()
for i in range(0, rnum):
    key = "{:08d}".format(i)
    value = "{:08d}".format(i)
    hash[key] = value
etime = time.time()
print("Count: {}".format(len(hash)))
print("Time: {:.3f} sec.".format(etime - stime))
print("Usage: {:.3f} MB".format(memoryusage() - musage))
kyotocabinet-python-1.23/example/kcdbex1.py:
from kyotocabinet import *
import sys

# create the database object
db = DB()

# open the database
if not db.open("casket.kch", DB.OWRITER | DB.OCREATE):
    print("open error: " + str(db.error()), file=sys.stderr)

# store records
if not db.set("foo", "hop") or \
   not db.set("bar", "step") or \
   not db.set("baz", "jump"):
    print("set error: " + str(db.error()), file=sys.stderr)

# retrieve records
value = db.get_str("foo")
if value:
    print(value)
else:
    print("get error: " + str(db.error()), file=sys.stderr)

# traverse records
cur = db.cursor()
cur.jump()
while True:
    rec = cur.get_str(True)
    if not rec: break
    print(rec[0] + ":" + rec[1])
cur.disable()

# close the database
if not db.close():
    print("close error: " + str(db.error()), file=sys.stderr)
kyotocabinet-python-1.23/README:
================================================================
Kyoto Cabinet: a straightforward implementation of DBM
Copyright (C) 2009-2010 FAL Labs
================================================================
Please read the following documents with a WWW browser.
How to install Kyoto Cabinet is explained in the API document.
README - this file
COPYING - license
doc/index.html - index of documents
Kyoto Cabinet is released under the terms of the GNU General Public
License version 3. See the file `COPYING' for details.
Kyoto Cabinet was written by FAL Labs. You can contact the author
by e-mail to `info@fallabs.com'.
Thanks.
== END OF FILE ==