Virtual File System (VFS) — APSW 3.8.11.1-r1 documentation

Virtual File System (VFS)

SQLite 3.6 introduced VFS functionality, which defines the interface between the SQLite core and the underlying operating system. The majority of the functionality deals with files. APSW exposes this functionality, letting you provide your own routines. You can also inherit from an existing vfs, making it easy to augment or override specific routines. For example you could obfuscate your database by XORing the data, implemented by augmenting the read and write methods. The method names are exactly the same as SQLite uses, making it easier to read the SQLite documentation, trouble tickets, web searches or mailing lists. The SQLite convention results in names like xAccess, xCurrentTime and xWrite.
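The XOR obfuscation idea can be sketched with a small helper. This is only a sketch: the `xor_obfuscate` name and the single-byte key are illustrative choices, not part of APSW.

```python
def xor_obfuscate(data, key=0xA5):
    # XOR is its own inverse, so the same routine both obfuscates and
    # deobfuscates; you would apply it in your xRead/xWrite overrides
    return bytes(b ^ key for b in data)
```

Because the transform is symmetric, reading back obfuscated data through the same helper recovers the original bytes.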

You specify which VFS to use as a parameter to the Connection constructor.

db=apsw.Connection("file", vfs="myvfs")

The easiest way to get started is to make a class that inherits from the default vfs, then override the methods whose behaviour you want to change. If you only want to change how file operations are done, you still have to override VFS.xOpen() to return a file instance that has your overridden VFSFile methods. The example demonstrates obfuscating the database file contents.

Note

All strings supplied and returned to VFS/VFSFile routines are treated as Unicode.

Exceptions and errors

To return an error from any routine you should raise an exception. The exception will be translated into the appropriate SQLite error code. To return a specific SQLite error code use exceptionfor(). If the exception does not map to any specific error code then SQLITE_ERROR, which corresponds to SQLError, is returned to SQLite.

The SQLite code that deals with VFS errors behaves in varying ways. Some routines have no way to return an error (eg xDlOpen just returns zero/NULL on being unable to load a library, xSleep has no error return parameter), others are unified (eg almost any error in xWrite will be returned to the user as disk full error). Sometimes errors are ignored as they are harmless such as when a journal can’t be deleted after a commit (the journal is marked as obsolete before being deleted). Simple operations such as opening a database can result in many different VFS function calls such as hot journals being detected, locking, and read/writes for playback/rollback.

To avoid confusion with exceptions being raised in the VFS and exceptions from normal code to open Connections or execute SQL queries, VFS exceptions are not raised in the normal way. (If they were, only one could be raised and it would obscure whatever exceptions the Connection open or SQL query execute wanted to raise.) Instead the VFS.excepthook() or VFSFile.excepthook() method is called with a tuple of exception type, exception value and exception traceback. The default implementation of excepthook calls sys.excepthook() which under Python 2 shows the stack trace and under Python 3 merely prints the exception value. (If sys.excepthook fails then PyErr_Display() is called.)

In normal VFS usage there will be no exceptions raised, or specific expected ones which APSW clears after noting them and returning the appropriate value back to SQLite. The exception hooking behaviour helps you find issues in your code or unexpected behaviour of the external environment. Remember that augmented stack traces are available which significantly increase detail about the exceptions.

As an example, let's say you have a divide by zero error in your xWrite routine. The sequence below shows what happens, with time going down and a note of who performs each step.

1. (Python query code) cursor.execute("update table set foo=3")
2. (SQLite and APSW C code) SQLite starts executing the query
3. (Python VFS code) Your VFS routines are called
4. (Python VFS code) Your xWrite divides by zero
5. (Python VFS code) VFSFile.excepthook() is called with the ZeroDivisionError exception
6. (SQLite and APSW C code) SQLITE_ERROR (the closest matching SQLite error code) is returned to SQLite by APSW
7. (SQLite and APSW C code) SQLite error handling and recovery operates, which calls more VFS routines. Any exceptions in these routines will result in VFSFile.excepthook() being called with them.
8. (SQLite and APSW C code) SQLite returns SQLITE_FULL to APSW
9. (Python query code) APSW raises apsw.FullError

VFS class

class VFS(name[, base=None, makedefault=False, maxpathname=1024])

Provides operating system access. You can get an overview in the SQLite documentation. To create a VFS your Python class must inherit from VFS.

Parameters:
  • name – The name to register this vfs under. If the name already exists then this vfs will replace the prior one of the same name. Use apsw.vfsnames() to get a list of registered vfs names.
  • base – If you would like to inherit behaviour from an already registered vfs then give its name. To inherit from the default vfs, use a zero length string "" as the name.
  • makedefault – If true then this vfs will be registered as the default, and will be used by any opens that don’t specify a vfs.
  • maxpathname – The maximum length of database name in bytes when represented in UTF-8. If a pathname longer than this value is passed in then SQLite will not be able to open it.
Raises ValueError:
 If base is not None and the named vfs is not currently registered.
Calls:
VFS.excepthook(etype, evalue, etraceback)

Called when there has been an exception in a VFS routine. The default implementation calls sys.excepthook and if that fails then PyErr_Display. The three arguments correspond to what sys.exc_info() would return.

Parameters:
  • etype – The exception type
  • evalue – The exception value
  • etraceback – The exception traceback. Note this includes all frames all the way up to the thread being started.
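An excepthook override could, for instance, capture errors for later inspection instead of printing them. This is a sketch; the class name and the `vfs_errors` attribute are illustrative, not part of APSW.

```python
import traceback


class RecordingHook:
    """Sketch of an excepthook override that collects VFS exceptions."""

    def __init__(self):
        self.vfs_errors = []

    def excepthook(self, etype, evalue, etraceback):
        # format_exception accepts the same triple that sys.exc_info() returns
        self.vfs_errors.append(
            "".join(traceback.format_exception(etype, evalue, etraceback)))
```

A VFS subclass mixing this in would record each failure string rather than writing to stderr.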
VFS.unregister()

Unregisters the VFS making it unavailable to future database opens. You do not need to call this as the VFS is automatically unregistered when it has no more references or open databases using it. It is however useful to call if you have made your VFS the default and wish to immediately make it unavailable. It is safe to call this routine multiple times.

Calls: sqlite3_vfs_unregister

VFS.xAccess(pathname, flags) → bool

SQLite wants to check access permissions. Return True or False accordingly.

Parameters:
  • pathname – File or directory to check
  • flags – One of the access flags
VFS.xCurrentTime() → float

Return the Julian Day Number as a floating point number where the integer portion is the day and the fractional part is the time. Do not adjust for timezone (ie use UTC).

VFS.xDelete(filename, syncdir)

Delete the named file.

Note

SQLite has 3 different behaviours depending on version for how to handle missing files.

SQLite < 3.7.8: raise an IOError if the file does not exist.
SQLite >= 3.7.8 and < 3.7.15: do not raise an exception.
SQLite >= 3.7.15: raise an IOError exception with extendedresult SQLITE_IOERR_DELETE_NOENT.
Parameters:
  • filename – File to delete
  • syncdir – If True then the directory should be synced ensuring that the file deletion has been recorded on the disk platters. ie if there was an immediate power failure after this call returns, on a reboot the file would still be deleted.
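On POSIX systems the directory sync can be done by fsyncing the containing directory. This is a sketch; directory fsync is POSIX-specific and is not available on Windows.

```python
import os


def xDelete(filename, syncdir):
    os.remove(filename)
    if syncdir:
        # fsync the containing directory so the deletion itself is durable
        dirname = os.path.dirname(os.path.abspath(filename))
        fd = os.open(dirname, os.O_RDONLY)
        try:
            os.fsync(fd)
        finally:
            os.close(fd)
```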
VFS.xDlClose(handle)

Close and unload the library corresponding to the handle you returned from xDlOpen(). You can use ctypes to do this:

import sys
import _ctypes   # note the leading underscore

def xDlClose(handle):
   if sys.platform.startswith("win"):
      _ctypes.FreeLibrary(handle)   # Windows
   else:
      _ctypes.dlclose(handle)       # Linux/Mac/Unix
VFS.xDlError() → string

Return an error string describing the last error of xDlOpen() or xDlSym() (ie they returned zero/NULL). If you do not supply this routine then SQLite provides a generic message. To implement this method, catch exceptions in xDlOpen() or xDlSym(), turn them into strings, save them, and return them in this routine. If you have an error in this routine or return None then SQLite’s generic message will be used.
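The save-and-report pattern described above might look like the following sketch. The `_lasterror` attribute and the `_do_open` helper are illustrative names, not APSW API.

```python
class DlErrorMixin:
    """Sketch: remember the last dynamic-loading failure for xDlError."""

    _lasterror = None

    def load_library(self, name):
        # stand-in for the body of xDlOpen: catch, record, return 0 (NULL)
        try:
            return self._do_open(name)   # hypothetical helper
        except Exception as exc:
            self._lasterror = str(exc)
            return 0

    def xDlError(self):
        return self._lasterror
```

The same recording would be done in xDlSym, with xDlError simply returning whatever was last saved.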

VFS.xDlOpen(filename) → number

Load the shared library. You should return a number which will be treated as a void pointer at the C level. On error you should return 0 (NULL). The number is passed as is to xDlSym()/xDlClose() so it can represent anything that is convenient for you (eg an index into an array). You can use ctypes to load a library:

def xDlOpen(name):
   return ctypes.cdll.LoadLibrary(name)._handle
VFS.xDlSym(handle, symbol) → address

Returns the address of the named symbol which will be called by SQLite. On error you should return 0 (NULL). You can use ctypes:

import sys
import ctypes
import _ctypes   # note the leading underscore

def xDlSym(ptr, name):
   if sys.platform.startswith("win"):
      return ctypes.windll.kernel32.GetProcAddress(ptr, name)  # Windows
   return _ctypes.dlsym(ptr, name)  # Linux/Unix/Mac etc
Parameters:
  • handle – The value returned from an earlier xDlOpen() call
  • symbol – A string
Return type:

An int/long with the symbol address

VFS.xFullPathname(name) → string

Return the absolute pathname for name. You can use os.path.abspath to do this.
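As the text notes, os.path.abspath is usually all that is needed (sketch):

```python
import os


def xFullPathname(name):
    # resolve relative names against the current working directory
    return os.path.abspath(name)
```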

VFS.xGetLastError() → string

This method should return text describing the last error that happened in this thread. If not implemented, SQLite's generic message is used instead. However, the method is never actually called by SQLite.

VFS.xGetSystemCall(name) → int

Returns a pointer for the current method implementing the named system call. Return None if the call does not exist.

VFS.xNextSystemCall(name) → String or None

This method is repeatedly called to iterate over all of the system calls in the vfs. When called with None you should return the name of the first system call. In subsequent calls return the name after the one passed in. If name is the last system call then return None.

Note

Because of internal SQLite implementation semantics memory will be leaked on each call to this function. Consequently you should build up the list of call names once rather than repeatedly doing it.
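Because of that leak, capturing the names once and iterating over the cached list is the sensible approach. The sketch below shows the iteration protocol; the class name and the sample call names are illustrative.

```python
class CachedSystemCalls:
    """Sketch: answer xNextSystemCall from a list captured once."""

    def __init__(self, callnames):
        # capture the names a single time to avoid repeated (leaky) enumeration
        self._calls = list(callnames)

    def xNextSystemCall(self, name):
        if name is None:
            # first call: return the first system call name
            return self._calls[0] if self._calls else None
        try:
            i = self._calls.index(name)
        except ValueError:
            return None
        # return the name after the one passed in, or None at the end
        return self._calls[i + 1] if i + 1 < len(self._calls) else None
```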

VFS.xOpen(name, flags) → VFSFile or similar object

This method should return a new file object based on name. You can return a VFSFile from a completely different VFS.

Parameters:
  • name – File to open. Note that name may be None in which case you should open a temporary file with a name of your choosing. May be an instance of URIFilename.
  • flags – A list of two integers [inputflags, outputflags]. Each integer is one or more of the open flags bitwise OR'd together. The inputflags tell you what SQLite wants. For example SQLITE_OPEN_DELETEONCLOSE means the file should be automatically deleted when closed. The outputflags describe how you actually did open the file. For example if you opened it read only then SQLITE_OPEN_READONLY should be set.
VFS.xRandomness(numbytes) → bytes

This method is called once when SQLite needs to seed the random number generator. It is called on the default VFS only. It is not called again, even across apsw.shutdown() calls. You can return less than the number of bytes requested including None. If you return more then the surplus is ignored.

Return type: (Python 2) string or buffer; (Python 3) bytes or buffer
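os.urandom is a convenient source of seed bytes (sketch):

```python
import os


def xRandomness(numbytes):
    # returning fewer bytes than requested (or None) is permitted;
    # any surplus would be ignored by SQLite
    return os.urandom(numbytes)
```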
VFS.xSetSystemCall(name, pointer) → bool

Change a system call used by the VFS. This is useful for testing and some other scenarios such as sandboxing.

Parameters:
  • name – The string name of the system call
  • pointer – A pointer provided as an int/long. There is no reference counting or other memory tracking of the pointer. If you provide one you need to ensure it is around for the lifetime of this and any other related VFS.

Raise an exception to return an error. If the system call does not exist then raise NotFoundError.

Returns:True if the system call was set. False if the system call is not known.
VFS.xSleep(microseconds) → integer

Pause execution of the thread for at least the specified number of microseconds (millionths of a second). This routine is typically called from the busy handler.

Returns:How many microseconds you actually requested the operating system to sleep for. For example if your operating system sleep call only takes seconds then you would have to have rounded the microseconds number up to the nearest second and should return that rounded up value.
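The rounding described above, for an operating system whose sleep has one-second granularity, could be computed like this (a sketch of the arithmetic only; no actual sleeping is shown, and the helper name is illustrative):

```python
import math


def microseconds_to_report(microseconds):
    # round up to the whole seconds actually requested from the OS,
    # expressed back in microseconds as xSleep must return
    return math.ceil(microseconds / 1_000_000) * 1_000_000
```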

VFSFile class

class VFSFile(vfs, name, flags)

Wraps access to a file. You only need to derive from this class if you want the file object returned from VFS.xOpen() to inherit from an existing VFS implementation.

Note

All file sizes and offsets are 64 bit quantities even on 32 bit operating systems.

Parameters:
  • vfs – The vfs you want to inherit behaviour from. You can use an empty string "" to inherit from the default vfs.
  • name – The name of the file being opened. May be an instance of URIFilename.
  • flags – A two list [inflags, outflags] as detailed in VFS.xOpen().
Raises ValueError:
 If the named VFS is not registered.

Note

If the VFS that you inherit from supports write ahead logging then your VFSFile will also support the xShm methods necessary to implement wal.

See also

VFS.xOpen()

VFSFile.excepthook(etype, evalue, etraceback)

Called when there has been an exception in a VFSFile routine. The default implementation calls sys.excepthook and if that fails then PyErr_Display. The three arguments correspond to what sys.exc_info() would return.

Parameters:
  • etype – The exception type
  • evalue – The exception value
  • etraceback – The exception traceback. Note this includes all frames all the way up to the thread being started.
VFSFile.xCheckReservedLock()

Returns True if any database connection (in this or another process) has a lock other than SQLITE_LOCK_NONE or SQLITE_LOCK_SHARED.

VFSFile.xClose()

Close the file. Note that even if you return an error you should still close the file. It is safe to call this method multiple times.

VFSFile.xDeviceCharacteristics() → int

Return I/O capabilities (bitwise or of appropriate values). If you do not implement the function or have an error then 0 (the SQLite default) is returned.

VFSFile.xFileControl(op, ptr) → bool

Receives file control request typically issued by Connection.filecontrol(). See Connection.filecontrol() for an example of how to pass a Python object to this routine.

Parameters:
  • op – A numeric code. Codes below 100 are reserved for SQLite internal use.
  • ptr – An integer corresponding to a pointer at the C level.
Returns:

A boolean indicating if the op was understood

As of SQLite 3.6.10, this method is called by SQLite if you have inherited from an underlying VFSFile. Consequently ensure you pass any unrecognised codes through to your super class. For example:

def xFileControl(self, op, ptr):
    if op==1027:
        process_quick(ptr)
    elif op==1028:
        obj=ctypes.py_object.from_address(ptr).value
    else:
        # this ensures superclass implementation is called
        return super(MyFile, self).xFileControl(op, ptr)
    # we understood the op
    return True
VFSFile.xFileSize() → int

Return the size of the file in bytes. Remember that file sizes are 64 bit quantities even on 32 bit operating systems.

VFSFile.xLock(level)

Increase the lock to the level specified which is one of the SQLITE_LOCK family of constants. If you can’t increase the lock level because someone else has locked it, then raise BusyError.

VFSFile.xRead(amount, offset) → bytes

Read the specified amount of data starting at offset. You should make every effort to read all the data requested, or return an error. If you have the file open for non-blocking I/O or if signals happen then it is possible for the underlying operating system to do a partial read. You will need to request the remaining data. Except for empty files SQLite considers short reads to be a fatal error.

Parameters:
  • amount – Number of bytes to read
  • offset – Where to start reading. This number may be 64 bit once the database is larger than 2GB.
Return type:

(Python 2) string, buffer. (Python 3) bytes, buffer
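Retrying a partial read until the full amount arrives might look like this. It is a sketch using an ordinary Python file object; a real VFSFile would typically wrap an OS-level handle, and the helper name is illustrative.

```python
def read_exact(fileobj, amount, offset):
    # keep reading until we have everything; EOF before then is a short read
    fileobj.seek(offset)
    chunks = []
    remaining = amount
    while remaining > 0:
        chunk = fileobj.read(remaining)
        if not chunk:
            raise IOError("short read: wanted %d more bytes" % remaining)
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)
```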

VFSFile.xSectorSize() → int

Return the native underlying sector size. SQLite uses the value returned in determining the default database page size. If you do not implement the function or have an error then 4096 (the SQLite default) is returned.

VFSFile.xSync(flags)

Ensure data is on the disk platters (ie could survive a power failure immediately after the call returns) with the sync flags detailing what needs to be synced. You can sync more than what is requested.

VFSFile.xTruncate(newsize)

Set the file length to newsize (which may be more or less than the current length).

VFSFile.xUnlock(level)

Decrease the lock to the level specified which is one of the SQLITE_LOCK family of constants.

VFSFile.xWrite(data, offset)

Write the data starting at absolute offset. You must write all the data requested, or return an error. If you have the file open for non-blocking I/O or if signals happen then it is possible for the underlying operating system to do a partial write. You will need to write the remaining data.

Parameters:
  • offset – Where to start writing. This number may be 64 bit once the database is larger than 2GB.
  • data – (Python 2) string, (Python 3) bytes

URIFilename class

class URIFilename

SQLite uses a convoluted method of storing URI parameters after the filename, binding the C filename representation and the parameters together. This class encapsulates that binding. The example shows usage of this class.

Your VFS.xOpen() method will generally be passed one of these instead of a string as the filename if the URI flag was used or the main database flag is set.

You can safely pass it on to the VFSFile constructor which knows how to get the name back out.

URIFilename.filename() → str

Returns the filename.

URIFilename.uri_boolean(name, default) → bool

Returns the boolean value for parameter name or default if not present.

Calls: sqlite3_uri_boolean

URIFilename.uri_int(name, default) → int

Returns the integer value for parameter name or default if not present.

Calls: sqlite3_uri_int64

URIFilename.uri_parameter(name) → str

Returns the value of parameter name or None.

Calls: sqlite3_uri_parameter


Download — APSW 3.8.11.1-r1 documentation

Download

Source and binaries

You can download this release as binaries for Windows. Just run the executable corresponding to the Python version you are using. The Windows binaries all include the FTS and RTree extensions. (FTS3_PARENTHESIS is on.)

Download in source form for other platforms or if you want to compile yourself on Windows. See the recommended way to build or all the options available.

Some Linux distributions also have packages which may trail the SQLite and APSW releases by a year, or more. It is also possible to build RPMs and DEB packages from the source, although this involves setting up package management tools and various dependencies on your build machine.

Debian Install python-apsw
Fedora Install python-apsw
Ubuntu Install python-apsw
Ubuntu PPA PPA building has been broken for over two years because Canonical/Ubuntu add a broken flag to the PPA.
Gentoo Install dev-python/apsw
Arch Linux Install python-apsw

Verifying your download

Downloads are digitally signed so you can verify they have not been tampered with. Download and extract the zip file of signatures listed above. These instructions are for GNU Privacy Guard. (GPG is installed as standard on most Unix/Linux platforms and can be downloaded for Windows.)

Verify

To verify a file just use --verify specifying the corresponding .asc filename. This example verifies the source:

$ gpg --verify apsw-3.8.11.1-r1.zip.asc
gpg: Signature made ... date ... using DSA key ID 0DFBD904
gpg: Good signature from "Roger Binns <rogerb@rogerbinns.com>"

If you get a “good signature” then the file has not been tampered with and you are good to go.

Getting the signing key

You may not have the signing key available in which case the last line will be something like this:

gpg: Can't check signature: public key not found

You can get a copy of the key using this command:

$ gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 0DFBD904
gpg: requesting key 0DFBD904 from hkp server keyserver.ubuntu.com
gpg: /home/username/.gnupg/trustdb.gpg: trustdb created
gpg: key 0DFBD904: public key "Roger Binns <rogerb@rogerbinns.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1

Repeat the verify step.

Source code control

The source is controlled by Git - start at https://github.com/rogerbinns/apsw

easy_install/pip/pypi

APSW is not available at the Python Package Index (pypi) and hence cannot be installed using easy_install, pip or similar tools. The reason for this is that those tools do not provide a way of passing options to the setup.py included with APSW, and hence there is no way for APSW to know whether you want SQLite downloaded, a consistent version of SQLite or the latest, to use a system SQLite instead, to error if a system version is not available, etc. I could pick a sensible default but everyone else using pypi would be disadvantaged or worse get undesired behaviour (eg different versions of SQLite depending on when a machine did an install). Additionally the world of Python packaging is going through another series of changes (distutils2 aka packaging) so some solution may come out of that.

I’m happy to work with anyone who has a solution to this problem.


Copyright and License — APSW 3.8.11.1-r1 documentation


Exceptions — APSW 3.8.11.1-r1 documentation

Exceptions

apsw.Error is the base for APSW exceptions.

exception Error
Error.result

For exceptions corresponding to SQLite error codes, this attribute is the numeric error code.

Error.extendedresult

APSW runs with extended result codes turned on. This attribute includes the detailed code.

As an example, if SQLite issued a read request and the system returned less data than expected then result would have the value SQLITE_IOERR while extendedresult would have the value SQLITE_IOERR_SHORT_READ.

APSW specific exceptions

The following exceptions happen when APSW detects various problems.

exception ThreadingViolationError

You have used an object concurrently in two threads. For example you may try to use the same cursor in two different threads at the same time, or tried to close the same connection in two threads at the same time.

You can also get this exception by using a cursor as an argument to itself (eg as the input data for Cursor.executemany()). Cursors can only be used for one thing at a time.

exception ForkingViolationError

See apsw.fork_checker().

exception IncompleteExecutionError

You have tried to start a new SQL execute call before executing all the previous ones. See the execution model for more details.

exception ConnectionNotClosedError

This exception is no longer generated. It was required in earlier releases due to constraints in threading usage with SQLite.

exception ConnectionClosedError

You have called Connection.close() and then continued to use the Connection or associated cursors.

exception CursorClosedError

You have called Cursor.close() and then tried to use the cursor.

exception BindingsError

There are several causes for this exception. When using tuples, an incorrect number of bindings were supplied:

cursor.execute("select ?,?,?", (1,2))     # too few bindings
cursor.execute("select ?,?,?", (1,2,3,4)) # too many bindings

You are using named bindings, but not all bindings are named. You should either use entirely the named style or entirely numeric (unnamed) style:

cursor.execute("select * from foo where x=:name and y=?")

Note

It is not considered an error to have missing keys in a dictionary. For example this is perfectly valid:

cursor.execute("insert into foo values($a,:b,$c)", {'a': 1})

b and c are not in the dict. For missing keys, None/NULL will be used. This is so you don’t have to add lots of spurious values to the supplied dict. If your schema requires every column have a value, then SQLite will generate an error due to some values being None/NULL so that case will be caught.

exception ExecutionCompleteError

A statement is complete but you try to run it more anyway!

exception ExecTraceAbort

The execution tracer returned False so execution was aborted.

exception ExtensionLoadingError

An error happened loading an extension.

exception VFSNotImplementedError

A call cannot be made to an inherited Virtual File System (VFS) method as the VFS does not implement the method.

exception VFSFileClosedError

The VFS file is closed so the operation cannot be performed.

SQLite Exceptions

The following lists which Exception classes correspond to which SQLite error codes.

General Errors

exception SQLError

SQLITE_ERROR. This error is documented as a bad SQL query or missing database, but is also returned for a lot of other situations. It is the default error code unless there is a more specific one.

exception MismatchError

SQLITE_MISMATCH. Data type mismatch. For example a rowid or integer primary key must be an integer.

exception NotFoundError

SQLITE_NOTFOUND. Returned when various internal items were not found such as requests for non-existent system calls or file controls.

Internal Errors

exception InternalError

SQLITE_INTERNAL. (No longer used) Internal logic error in SQLite.

exception ProtocolError

SQLITE_PROTOCOL. (No longer used) Database lock protocol error.

exception MisuseError

SQLITE_MISUSE. SQLite library used incorrectly.

exception RangeError

SQLITE_RANGE. (Cannot be generated using APSW). 2nd parameter to sqlite3_bind out of range

Permissions Etc

exception PermissionsError

SQLITE_PERM. Access permission denied by the operating system, or parts of the database are readonly such as a cursor.

exception ReadOnlyError

SQLITE_READONLY. Attempt to write to a readonly database.

exception CantOpenError

SQLITE_CANTOPEN. Unable to open the database file.

exception AuthError

SQLITE_AUTH. Authorization denied.

Abort/Busy Etc

exception AbortError

SQLITE_ABORT. Callback routine requested an abort.

exception BusyError

SQLITE_BUSY. The database file is locked. Use Connection.setbusytimeout() to change how long SQLite waits for the database to be unlocked or Connection.setbusyhandler() to use your own handler.

exception LockedError

SQLITE_LOCKED. A table in the database is locked.

exception InterruptError

SQLITE_INTERRUPT. Operation terminated by sqlite3_interrupt - use Connection.interrupt().

exception SchemaChangeError

SQLITE_SCHEMA. The database schema changed. A prepared statement becomes invalid if the database schema was changed. Behind the scenes SQLite reprepares the statement. Another or the same Connection may change the schema again before the statement runs. SQLite will attempt up to 5 times before giving up and returning this error.

exception ConstraintError

SQLITE_CONSTRAINT. Abort due to constraint violation. This would happen if the schema required a column to be within a specific range. If you have multiple constraints, you can’t tell which one was the cause.

Memory/Disk

exception NoMemError

SQLITE_NOMEM. A memory allocation failed.

exception IOError

SQLITE_IOERR. Some kind of disk I/O error occurred. The extended error code will give more detail.

exception CorruptError

SQLITE_CORRUPT. The database disk image appears to be a SQLite database but the values inside are inconsistent.

exception FullError

SQLITE_FULL. The disk appears to be full.

exception TooBigError

SQLITE_TOOBIG. String or BLOB exceeds size limit. You can change the limits using Connection.limit().

exception NoLFSError

SQLITE_NOLFS. SQLite has attempted to use a feature not supported by the operating system such as large file support.

exception EmptyError

SQLITE_EMPTY. Database is completely empty.

exception FormatError

SQLITE_FORMAT. (No longer used) Auxiliary database format error.

exception NotADBError

SQLITE_NOTADB. File opened that is not a database file. SQLite has a header on database files to verify they are indeed SQLite databases.

Augmented stack traces

When an exception occurs, Python does not include frames from non-Python code (ie the C code called from Python). This can make it more difficult to work out what was going on when an exception occurred for example when there are callbacks to collations, functions or virtual tables, triggers firing etc.

This is an example showing the difference between the tracebacks you would have got with earlier versions of apsw and the augmented traceback:

import apsw

def myfunc(x):
  1/0

con=apsw.Connection(":memory:")
con.createscalarfunction("foo", myfunc)
con.createscalarfunction("fam", myfunc)
cursor=con.cursor()
cursor.execute("create table bar(x,y,z);insert into bar values(1,2,3)")
cursor.execute("select foo(1) from bar")
Original Traceback Augmented Traceback
Traceback (most recent call last):
  File "t.py", line 11, in <module>
    cursor.execute("select foo(1) from bar")
  File "t.py", line 4, in myfunc
    1/0
ZeroDivisionError: integer division or modulo by zero
Traceback (most recent call last):
  File "t.py", line 11, in <module>
    cursor.execute("select foo(1) from bar")
  File "apsw.c", line 3412, in resetcursor
  File "apsw.c", line 1597, in user-defined-scalar-foo
  File "t.py", line 4, in myfunc
    1/0
ZeroDivisionError: integer division or modulo by zero

In the original traceback you can’t even see that code in apsw was involved. The augmented traceback shows that there were indeed two function calls within apsw and gives you line numbers should you need to examine the code. Also note how you are told that the call was in user-defined-scalar-foo (ie you can tell which function was called.)

But wait, there is more!!! In order to further aid troubleshooting, the augmented stack traces make additional information available. Each frame in the traceback has local variables defined with more information. You can print out the variables using ASPN recipe 52215

In the recipe, the initial code in print_exc_plus() is far more complicated than it needs to be, and also won’t work correctly with all tracebacks (it depends on f_prev being set which isn’t always the case). Change the function to start like this:

tb = sys.exc_info()[2]
stack = []

while tb:
    stack.append(tb.tb_frame)
    tb = tb.tb_next

traceback.print_exc()
print "Locals by frame, innermost last"

Here is a far more complex example from some virtual tables code I was writing. The BestIndex method in my code had returned an incorrect value. The augmented traceback includes local variables using recipe 52215. I can see what was passed in to my method, what I returned and which item was erroneous. The original traceback is almost completely useless.

Original traceback:

Traceback (most recent call last):
  File "tests.py", line 1387, in testVtables
    cursor.execute(allconstraints)
TypeError: Bad constraint (#2) - it should be one of None, an integer or a tuple of an integer and a boolean

Augmented traceback with local variables:

Traceback (most recent call last):
  File "tests.py", line 1387, in testVtables
    cursor.execute(allconstraints)
                VTable =  __main__.VTable
                   cur =  <apsw.Cursor object at 0x988f30>
                     i =  10
                  self =  testVtables (__main__.APSW)
        allconstraints =  select rowid,* from foo where rowid>-1000 ....

  File "apsw.c", line 4050, in Cursor_execute.sqlite3_prepare
            Connection =  <apsw.Connection object at 0x978800>
             statement =  select rowid,* from foo where rowid>-1000 ....

  File "apsw.c", line 2681, in VirtualTable.xBestIndex
                  self =  <__main__.VTable instance at 0x98d8c0>
                  args =  (((-1, 4), (0, 32), (1, 8), (2, 4), (3, 64)), ((2, False),))
                result =  ([4, (3,), [2, False], [1], [0]], 997, u'\xea', False)

  File "apsw.c", line 2559, in VirtualTable.xBestIndex.result_constraint
               indices =  [4, (3,), [2, False], [1], [0]]
                  self =  <__main__.VTable instance at 0x98d8c0>
                result =  ([4, (3,), [2, False], [1], [0]], 997, u'\xea', False)
            constraint =  (3,)

TypeError: Bad constraint (#2) - it should be one of None, an integer or a tuple of an integer and a boolean


Cursors (executing SQL)

A cursor encapsulates a SQL query and the results it returns. To make a new cursor you should call cursor() on your database:

db=apsw.Connection("databasefilename")
cursor=db.cursor()

A cursor executes SQL:

cursor.execute("create table example(title, isbn)")

You can also read data back. The row is returned as a tuple of the column values:

for row in cursor.execute("select * from example"):
   print row

There are two ways of supplying data to a query. The really bad way is to compose a string:

sql="insert into example values('%s', %d)" % ("string", 8390823904)
cursor.execute(sql)

If there were any single quotes in string then you would have invalid syntax. Additionally this is how SQL injection attacks happen. Instead you should use bindings:

sql="insert into example values(?, ?)"
cursor.execute(sql, ("string", 8390823904))

# You can also use dictionaries
sql="insert into example values(:title, :isbn)"
cursor.execute(sql, {"title": "string", "isbn": 8390823904})

# You can use local variables as the dictionary
title="..."
isbn="...."
cursor.execute(sql, locals())
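The same bindings work identically in the standard library sqlite3 module, which is used in this runnable sketch only so it works without APSW installed. Note that the single quote in the title needs no escaping when bindings are used:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table example(title, isbn)")

# A title containing a single quote breaks naive string composition,
# but bindings handle it without any escaping on our part.
title = "the cat's whiskers"

cur.execute("insert into example values(?, ?)", (title, 8390823904))
cur.execute("insert into example values(:title, :isbn)",
            {"title": title, "isbn": 8390823904})

rows = cur.execute("select title from example").fetchall()
```

Both rows come back with the quote intact, and no SQL injection is possible through the bound values.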

Cursors are cheap. Use as many as you need. It is safe to use them across threads, such as calling execute() in one thread, passing the cursor to another thread that then calls Cursor.next(). The only thing you can’t do is call methods at exactly the same time on the same cursor in two different threads - eg trying to call execute() in both at the same time, or execute() in one and Cursor.next() in another. (If you do attempt this, it will be detected and ThreadingViolationError will be raised.)

Behind the scenes a Cursor maps to a SQLite statement. APSW maintains a cache so that the mapping is very fast, and the SQLite objects are reused when possible.

A unique feature of APSW is that your query can be multiple semi-colon separated statements. For example:

cursor.execute("select ... ; insert into ... ; update ... ; select ...")

Note

SQLite fetches data as it is needed. If table example had 10 million rows it would only get the next row as requested (the for loop effectively calls next() to get each row). This code would not work as expected:

for row in cursor.execute("select * from example"):
   cursor.execute("insert .....")

The nested execute() would start a new query abandoning any remaining results from the SELECT cursor. There are two ways to work around this. Use a different cursor:

for row in cursor1.execute("select * from example"):
   cursor2.execute("insert ...")

You can also get all the rows immediately by filling in a list:

rows=list( cursor.execute("select * from example") )
for row in rows:
   cursor.execute("insert ...")

This last approach is recommended since you don’t have to worry about the database changing while doing the select. You should also understand transactions and where to put the transaction boundaries.
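The recommended pattern can be sketched as follows (shown with the standard library sqlite3 module purely so it runs standalone; the table and values are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cursor = con.cursor()
cursor.execute("create table example(n)")
cursor.executemany("insert into example values(?)", [(i,) for i in range(5)])

# Materialise the select into a list first, so the later inserts
# cannot disturb the iteration over the original rows.
rows = list(cursor.execute("select n from example"))
for (n,) in rows:
    cursor.execute("insert into example values(?)", (n + 100,))

count = cursor.execute("select count(*) from example").fetchone()[0]
```

The five original rows are iterated in full and five new rows are added, giving ten in total.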

Note

Cursors on the same Connection are not isolated from each other. Anything done on one cursor is immediately visible to all other Cursors on the same connection. This still applies if you start transactions. Connections are isolated from each other with cursors on other connections not seeing changes until they are committed.

Cursor class

class Cursor

You obtain cursors by calling Connection.cursor().

Cursor.close(force=False)

It is very unlikely you will need to call this method. It exists because older versions of SQLite required all Connection/Cursor activity to be confined to the same thread. That is no longer the case. Cursors are automatically garbage collected and when there are none left will allow the connection to be garbage collected if it has no other references.

A cursor is open if there are remaining statements to execute (if your query included multiple statements), or if you called executemany() and not all of the sequenceofbindings have been used yet.

Parameters:force – If False then you will get exceptions if there is remaining work to be done in the Cursor such as more statements to execute, more data from the executemany binding sequence etc. If force is True then all remaining work and state information will be silently discarded.
Cursor.description

Based on the DB-API cursor property, this returns the same as getdescription() but with 5 Nones appended. See also APSW issue 131.

Cursor.execute(statements[, bindings]) → iterator

Executes the statements using the supplied bindings. Execution returns when the first row is available or all statements have completed.

Parameters:
  • statements – One or more SQL statements such as select * from books or begin; insert into books ...; select last_insert_rowid(); end.
  • bindings – If supplied should either be a sequence or a dictionary. Each item must be one of the supported types

If you use numbered bindings in the query then supply a sequence. Any sequence will work including lists and iterators. For example:

cursor.execute("insert into books values(?,?)", ("title", "number"))

Note

A common gotcha is wanting to insert a single string but not putting it in a tuple:

cursor.execute("insert into books values(?)", "a title")

The string is a sequence of 7 characters and so it will look like you are supplying 7 bindings when only one is needed. Use a one item tuple with a trailing comma like this:

cursor.execute("insert into books values(?)", ("a title",) )
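A runnable sketch of the gotcha and its fix, using the standard library sqlite3 module as a stand-in (it raises ProgrammingError for the mismatched binding count, where APSW would raise BindingsError):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table books(title)")

# Passing the bare string makes it look like one binding per character.
try:
    cur.execute("insert into books values(?)", "a title")
    failed = False
except sqlite3.ProgrammingError:
    failed = True

# The one item tuple supplies exactly one binding.
cur.execute("insert into books values(?)", ("a title",))
stored = cur.execute("select title from books").fetchone()[0]
```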

If you used names in the statement then supply a dictionary as the binding. It is ok to be missing entries from the dictionary - None/null will be used. For example:

cursor.execute("insert into books values(:title, :isbn, :rating)",
     {"title": "book title", "isbn": 908908908})

The return is the cursor object itself which is also an iterator. This allows you to write:

for row in cursor.execute("select * from books"):
   print row
Raises:
  • TypeError – The bindings supplied were neither a dict nor a sequence
  • BindingsError – You supplied too many or too few bindings for the statements
  • IncompleteExecutionError – There are remaining unexecuted queries from your last execute
Cursor.executemany(statements, sequenceofbindings) → iterator

This method is for when you want to execute the same statements over a sequence of bindings. Conceptually it does this:

for binding in sequenceofbindings:
    cursor.execute(statements, binding)

Example:

rows=(  (1, 7),
        (2, 23),
        (4, 92),
        (12, 12) )

cursor.executemany("insert into nums values(?,?)", rows)

The return is the cursor itself which acts as an iterator. Your statements can return data. See execute() for more information.
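A minimal runnable sketch of the example above (the standard library sqlite3 module is shown as a stand-in; unlike APSW, its executemany() cannot return data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cursor = con.cursor()
cursor.execute("create table nums(x, y)")

rows = ((1, 7),
        (2, 23),
        (4, 92),
        (12, 12))

# One statement executed once per binding tuple.
cursor.executemany("insert into nums values(?,?)", rows)

total = cursor.execute("select sum(x + y) from nums").fetchone()[0]
```

All four rows are inserted in a single call, and the sum confirms every binding tuple was used.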

Cursor.fetchall() → list

Returns all remaining result rows as a list. This method is defined in DBAPI. It is a longer way of doing list(cursor).

Cursor.fetchone() → row or None

Returns the next row of data or None if there are no more rows.

Cursor.getconnection() → Connection

Returns the Connection this cursor belongs to. An example usage is to get another cursor:

def func(cursor):
  # I don't want to alter existing cursor, so make a new one
  mycursor=cursor.getconnection().cursor()
  mycursor.execute("....")
Cursor.getdescription() → tuple

Returns a tuple describing each column in the result row. The return is identical for every row of the results. You can only call this method once you have started executing a statement and before you have finished:

# This will error
cursor.getdescription()

for row in cursor.execute("select ....."):
   # this works
   print cursor.getdescription()
   print row

The information about each column is a tuple of (column_name, declared_column_type). The type is what was declared in the CREATE TABLE statement - the value returned in the row will be whatever type you put in for that row and column. (This is known as manifest typing which is also the way that Python works. The variable a could contain an integer, and then you could put a string in it. Other static languages such as C or other SQL databases only let you put one type in - eg a could only contain an integer or a string, but never both.)

Example:

cursor.execute("create table books(title string, isbn number, wibbly wobbly zebra)")
cursor.execute("insert into books values(?,?,?)", (97, "fjfjfj", 3.7))
cursor.execute("insert into books values(?,?,?)", ("fjfjfj", 3.7, 97))

for row in cursor.execute("select * from books"):
   print cursor.getdescription()
   print row

Output:

# row 0 - description
(('title', 'string'), ('isbn', 'number'), ('wibbly', 'wobbly zebra'))
# row 0 - values
(97, 'fjfjfj', 3.7)
# row 1 - description
(('title', 'string'), ('isbn', 'number'), ('wibbly', 'wobbly zebra'))
# row 1 - values
('fjfjfj', 3.7, 97)
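Manifest typing is easy to observe with any SQLite wrapper. A sketch using the standard library sqlite3 module (its cursor.description does not report declared types the way getdescription() does, so this only inspects the types of the returned values):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table books(title string, isbn number)")

# The declared column types do not constrain what each row stores.
cur.execute("insert into books values(?,?)", (97, "fjfjfj"))
cur.execute("insert into books values(?,?)", ("fjfjfj", 3.7))

kinds = [tuple(type(v).__name__ for v in row)
         for row in cur.execute("select * from books")]
```

Each row comes back with whatever types were put in, row by row.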
Cursor.getexectrace() → callable or None

Returns the currently installed (via setexectrace()) execution tracer.

Cursor.getrowtrace() → callable or None

Returns the currently installed (via setrowtrace()) row tracer.

Cursor.setexectrace(callable)

callable is called with the cursor, statement and bindings for each execute() or executemany() on this cursor.

If callable is None then any existing execution tracer is removed.

Cursor.setrowtrace(callable)

callable is called with cursor and row being returned. You can change the data that is returned or cause the row to be skipped altogether.

If callable is None then any existing row tracer is removed.
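APSW implements row tracing in C, but the idea can be emulated with a plain generator wrapper. In this sketch (the helper name rowtrace is ours, and the standard library sqlite3 module is used so it runs standalone), the tracer transforms some rows and skips others by returning None:

```python
import sqlite3

def rowtrace(rows, tracer):
    # Apply tracer to each row; a None result skips the row,
    # mirroring what an APSW row tracer can do.
    for row in rows:
        out = tracer(row)
        if out is not None:
            yield out

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table t(n)")
cur.executemany("insert into t values(?)", [(i,) for i in range(6)])

def evens_doubled(row):
    # Keep even numbers, adding a computed column; drop odd ones.
    n = row[0]
    return (n, n * 2) if n % 2 == 0 else None

result = list(rowtrace(cur.execute("select n from t order by n"),
                       evens_doubled))
```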


Backup

A backup object encapsulates copying one database to another. You call Connection.backup() on the destination database to get the backup object. Call step() to copy some pages repeatedly dealing with errors as appropriate. Finally finish() cleans up committing or rolling back and releasing locks.

Here is an example usage using the with statement to ensure finish() is called:

# copies source.main into db
with db.backup("main", source, "main") as b:
    while not b.done:
        b.step(100)
        print b.remaining, b.pagecount, "\r",

If you are not using with then you’ll need to ensure finish() is called:

# copies source.main into db
b=db.backup("main", source, "main")
try:
    while not b.done:
        b.step(100)
        print b.remaining, b.pagecount, "\r",
finally:
    b.finish()
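The standard library sqlite3 module (Python 3.7+) exposes the same underlying SQLite backup API, which makes for a runnable sketch of the page-by-page copy. Note one difference: sqlite3's backup() is called on the source connection, whereas APSW's is called on the destination:

```python
import sqlite3

source = sqlite3.connect(":memory:")
source.execute("create table t(n)")
source.executemany("insert into t values(?)", [(i,) for i in range(100)])
source.commit()

dest = sqlite3.connect(":memory:")

def progress(status, remaining, total):
    # Called after each batch of pages, the analogue of checking
    # b.remaining and b.pagecount between step() calls.
    pass

# Copy 10 pages at a time, releasing the source lock between batches.
source.backup(dest, pages=10, progress=progress)

copied = dest.execute("select count(*) from t").fetchone()[0]
```

When backup() returns, the destination holds a complete, committed copy.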

Important details

The database is copied page by page. This means that there is not a round trip via SQL. All pages are copied including free ones.

The destination database is locked during the copy. You will get a ThreadingViolationError if you attempt to use it.

backup class

class backup

You create a backup instance by calling Connection.backup().

backup.__enter__() → self

You can use the backup object as a context manager as defined in PEP 0343. The __exit__() method ensures that backup is finished.

backup.__exit__() → False

Implements context manager in conjunction with __enter__() ensuring that the copy is finished.

backup.close([force=False])

Does the same thing as finish(). This extra api is provided to give the same api as other APSW objects such as Connection.close(), blob.close() and Cursor.close(). It is safe to call this method multiple times.

Parameters:force – If true then any exceptions are ignored.
backup.done

A boolean that is True if the copy completed in the last call to step().

backup.finish()

Completes the copy process. If all pages have been copied then the transaction is committed on the destination database, otherwise it is rolled back. This method must be called for your backup to take effect. The backup object will always be finished even if there is an exception. It is safe to call this method multiple times.

Calls: sqlite3_backup_finish

backup.pagecount

Read only. How many pages were in the source database after the last step. If you haven’t called step() or the backup object has been finished then zero is returned.

Calls: sqlite3_backup_pagecount

backup.remaining

Read only. How many pages were remaining to be copied after the last step. If you haven’t called step() or the backup object has been finished then zero is returned.

Calls: sqlite3_backup_remaining

backup.step([npages=All]) → bool

Copies npages pages from the source to destination database. The source database is locked during the copy so using smaller values allows other access to the source database. The destination database is always locked until the backup object is finished.

Parameters:npages – How many pages to copy. If the parameter is omitted or negative then all remaining pages are copied. The default page size is 1024 bytes (1kb) which can be changed before database creation using a pragma.

This method may throw a BusyError or LockedError if unable to lock the source database. You can catch those and try again.

Returns:True if this copied the last remaining outstanding pages, else false. This is the same value as done

Calls: sqlite3_backup_step
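A sketch of the catch-and-retry pattern, using the standard library sqlite3 module where busy/locked failures surface as OperationalError (the retry count and sleep interval here are arbitrary choices for illustration):

```python
import sqlite3
import time

source = sqlite3.connect(":memory:")
source.execute("create table t(n)")
source.executemany("insert into t values(?)", [(i,) for i in range(50)])
source.commit()

dest = sqlite3.connect(":memory:")

# Retry the copy if the source is momentarily locked, the same
# pattern APSW suggests for BusyError/LockedError from step().
for attempt in range(5):
    try:
        source.backup(dest, pages=5)
        break
    except sqlite3.OperationalError:
        time.sleep(0.1)

ok = dest.execute("select count(*) from t").fetchone()[0]
```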


pysqlite differences

pysqlite and APSW approached the problem of providing access to SQLite from Python from fundamentally different directions.

APSW only wraps version 3 of SQLite and provides access in whatever way is normal for SQLite. It makes no effort to hide how SQLite is different from other databases.

pysqlite tries to provide a DBAPI compliant wrapper for SQLite and in doing so needs to make it have the same behaviour as other databases. Consequently it does hide some of SQLite’s nuances.

Note

I suggest using APSW when you want to directly use SQLite and its functionality or are using your own code to deal with database independence rather than DBAPI. Use pysqlite and DBAPI if your needs are simple, and you don’t want to use SQLite features.

What APSW does better

APSW has the following enhancements/differences over pysqlite 2 (wrapping SQLite 3):

  • APSW stays up to date with SQLite. As features are added and functionality changed in SQLite, APSW tracks them.

  • APSW gives all functionality of SQLite including virtual tables, Virtual File System (VFS), BLOB I/O, backups and file control.

  • You can use the same Connection across threads with APSW without needing any additional level of locking. pysqlite requires that the Connection and any cursors are used in the same thread. You can disable its checking, but unless you are very careful with your own mutexes you will have a crash or a deadlock.

  • APSW is a single file for the extension, apsw.pyd on Windows and apsw.so on Unix/Mac (Note PEP 3149). There are no other files needed and the build instructions show you how to include SQLite statically in this file. You can put this file anywhere your Python session can reach. pysqlite is one binary file and several .py files, all of which need to be available.

  • Nothing happens behind your back. By default pysqlite tries to manage transactions by parsing your SQL for you, but you can turn it off. This can result in very unexpected behaviour with pysqlite.

  • When using a Connection as a context manager APSW uses SQLite’s ability to have nested transactions. pysqlite only deals with one transaction at a time and cannot nest them. (Savepoints were introduced in SQLite 3.6.8 - another illustration of the benefits of keeping up to date with SQLite.)

  • APSW always handles Unicode correctly (this was one of the major reasons for writing it in the first place). pysqlite has since fixed many of its issues but you are still stuck with some.

  • You can use semi-colons at the end of commands and you can have multiple commands in the execute string in APSW. There are no restrictions on the type of commands used. For example this will work fine in APSW but is not allowed in pysqlite:

    import apsw
    con=apsw.Connection(":memory:")
    cur=con.cursor()
    for row in cur.execute("create table foo(x,y,z);insert into foo values (?,?,?);"
                           "insert into foo values(?,?,?);select * from foo;drop table foo;"
                           "create table bar(x,y);insert into bar values(?,?);"
                           "insert into bar values(?,?);select * from bar;",
                           (1,2,3,4,5,6,7,8,9,10)):
                               print row
    

    And the output as you would expect:

    (1, 2, 3)
    (4, 5, 6)
    (7, 8)
    (9, 10)
    
  • Cursor.executemany() also works with statements that return data such as selects, and you can have multiple statements. pysqlite’s executescript() method doesn’t allow any form of data being returned (it silently ignores any returned data).

  • pysqlite swallows exceptions in your callbacks making it far harder to debug problems. That also prevents you from raising exceptions in your callbacks to be handled in your code that called SQLite. pysqlite does let you turn on printing of tracebacks, but that is a poor substitute. apsw does the right thing as demonstrated by this example.

    Source:

    def badfunc(t):
        return 1/0
    
    # pysqlite
    from pysqlite2 import dbapi2 as sqlite
    
    con = sqlite.connect(":memory:")
    con.create_function("badfunc", 1, badfunc)
    cur = con.cursor()
    cur.execute("select badfunc(3)")
    
    # apsw
    import apsw
    con = apsw.Connection(":memory:")
    con.createscalarfunction("badfunc", badfunc, 1)
    cur = con.cursor()
    cur.execute("select badfunc(3)")
    

    Exceptions:

    # pysqlite
    
    Traceback (most recent call last):
      File "func.py", line 8, in ?
        cur.execute("select badfunc(3)")
    pysqlite2.dbapi2.OperationalError: user-defined function raised exception
    
    # apsw
    
    Traceback (most recent call last):
      File "t.py", line 8, in ?
        cur.execute("select badfunc(3)")
      File "apsw.c", line 3660, in resetcursor
      File "apsw.c", line 1871, in user-defined-scalar-badfunc
      File "t.py", line 3, in badfunc
        return 1/0
    
  • APSW has significantly enhanced debuggability. More details are available than just what is printed out when exceptions happen like above. See augmented stack traces

  • APSW has execution and row tracers. pysqlite has no equivalent to execution tracers and does have data adaptors which aren’t the same thing as a row tracer (for example you can’t skip rows or add a new column to each row returned). pysqlite does have a row factory but you can easily emulate that with the row tracer and Cursor.getdescription().

  • APSW has an apswtrace utility script that traces execution and results in your code without having to modify it in any way. It also outputs summary reports making it easy to see what your most time consuming queries are, which are most popular etc.

  • APSW has an exception corresponding to each SQLite error code and provides the extended error code. pysqlite combines several SQLite error codes into corresponding DBAPI exceptions. This is a good example of the difference in approach of the two wrappers.

  • The APSW test suite is larger and tests more functionality. Code coverage by the test suite is 99.6%. pysqlite is good at 81% for C code although there are several places that coverage can be improved. I haven’t measured code coverage for pysqlite’s Python code. The consequences of this are that APSW catches issues earlier and gives far better diagnostics. As an example try returning an unsupported type from a registered scalar function.

  • APSW is faster than pysqlite in my testing. Try the speedtest benchmark.

What pysqlite does better

  • pysqlite has an adaptor system that lets you pretend SQLite stores and returns more types than it really supports. Note that the database won’t be useful in a non-pysqlite context (eg PHP code looking at the same database isn’t going to recognise your Point class). You can implement something similar in APSW by intercepting Cursor.execute() calls that suitably mangles the bindings going to SQLite and does something similar to the rows the iterator returns.

  • pysqlite lets you work with a database that contains invalid Unicode data by setting a text factory that deals with the text data.

    APSW does not let you put non-Unicode data into the database in the first place and it will be considered invalid by other tools reading the data (eg Java, PHP). If you somehow do manage to get non-Unicode data as a SQLite string, you can cast it to a blob:

    for row in cursor.execute("select CAST(column as BLOB) from table"):
       # row[0] is buffer (py2) or bytes (py3) here
       deal_with_binary_data(row[0])
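A runnable sketch of the cast using the standard library sqlite3 module (the table and column names are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table t(c)")
cur.execute("insert into t values(?)", ("hello",))

# Casting the text column to BLOB makes SQLite hand back raw bytes,
# bypassing any text decoding.
row = cur.execute("select CAST(c as BLOB) from t").fetchone()
blob = row[0]
```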
    

apsw-3.8.11.1-r1/doc/_sources/connection.txt

.. Automatically generated by code2rst.py
   code2rst.py src/connection.c doc/connection.rst
   Edit src/connection.c not this file!

.. currentmodule:: apsw

.. _connections:

Connections to a database
*************************

A :class:`Connection` encapsulates access to a database.  You then use :class:`cursors ` to issue queries against the database.

You can have multiple :class:`Connections ` open against the same database in the same process, across threads and in other processes.

Connection class
================

.. index:: sqlite3_open_v2

.. class:: Connection(filename, flags=SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, vfs=None, statementcachesize=100)

  This object wraps a `sqlite3 pointer `_.

  Opens the named database.  You can use ``:memory:`` to get a private temporary in-memory database that is not shared with any other connections.

  :param flags: One or more of the `open flags `_ orred together
  :param vfs: The name of the `vfs `_ to use.  If :const:`None` then the default vfs will be used.
  :param statementcachesize: Use zero to disable the statement cache, or a number larger than the total distinct SQL statements you execute frequently.

  .. seealso::

    * :attr:`apsw.connection_hooks`
    * :ref:`statementcache`
    * :ref:`vfs`

  Calls: `sqlite3_open_v2 `__

.. method:: Connection.__enter__() -> context

  You can use the database as a `context manager `_ as defined in :pep:`0343`.  When you use *with* a transaction is started.  If the block finishes with an exception then the transaction is rolled back, otherwise it is committed.  For example::

    with connection:
        connection.cursor().execute("....")

    with connection:
        # nested is supported
        call_function(connection)
        connection.cursor().execute("...")
        with connection as db:
            # You can also use 'as'
            call_function2(db)
            db.cursor().execute("...")

  Behind the scenes the `savepoint `_ functionality introduced in SQLite 3.6.8 is used.

.. method:: Connection.__exit__() -> False

  Implements context manager in conjunction with :meth:`~Connection.__enter__`.  Any exception that happened in the *with* block is raised after committing or rolling back the savepoint.

.. index:: sqlite3_backup_init

.. method:: Connection.backup(databasename, sourceconnection, sourcedatabasename) -> backup

  Opens a :ref:`backup object `.  All data will be copied from source database to this database.

  :param databasename: Name of the database.  This will be ``main`` for the main connection and the name you specified for `attached `_ databases.
  :param sourceconnection: The :class:`Connection` to copy a database from.
  :param sourcedatabasename: Name of the database in the source (eg ``main``).
  :rtype: :class:`backup`

  .. seealso::

    * :ref:`Backup`

  Calls: `sqlite3_backup_init `__

.. index:: sqlite3_blob_open

.. method:: Connection.blobopen(database, table, column, rowid, writeable) -> blob

  Opens a blob for :ref:`incremental I/O `.

  :param database: Name of the database.  This will be ``main`` for the main connection and the name you specified for `attached `_ databases.
  :param table: The name of the table
  :param column: The name of the column
  :param rowid: The id that uniquely identifies the row.
  :param writeable: If True then you can read and write the blob.  If False then you can only read it.
  :rtype: :class:`blob`

  .. seealso::

    * :ref:`Blob I/O example `
    * `SQLite row ids `_

  Calls: `sqlite3_blob_open `__

.. index:: sqlite3_changes

.. method:: Connection.changes() -> int

  Returns the number of database rows that were changed (or inserted or deleted) by the most recently completed INSERT, UPDATE, or DELETE statement.

  Calls: `sqlite3_changes `__

.. index:: sqlite3_close

.. method:: Connection.close([force=False])

  Closes the database.  If there are any outstanding :class:`cursors `, :class:`blobs ` or :class:`backups ` then they are closed too.  It is normally not necessary to call this method as the database is automatically closed when there are no more references.  It is ok to call the method multiple times.

  If your user defined functions or collations have direct or indirect references to the Connection then it won't be automatically garbage collected because of circular referencing that can't be automatically broken.  Calling *close* will free all those objects and what they reference.

  SQLite is designed to survive power failures at even the most awkward moments.  Consequently it doesn't matter if it is closed when the process is exited, or even if the exit is graceful or abrupt.  In the worst case of having a transaction in progress, that transaction will be rolled back by the next program to open the database, reverting the database to a known good state.

  If *force* is *True* then any exceptions are ignored.

  Calls: `sqlite3_close `__

.. index:: sqlite3_collation_needed

.. method:: Connection.collationneeded(callable)

  *callable* will be called if a statement requires a `collation `_ that hasn't been registered.  Your callable will be passed two parameters.  The first is the connection object.  The second is the name of the collation.  If you have the collation code available then call :meth:`Connection.createcollation`.

  This is useful for creating collations on demand.  For example you may include the `locale `_ in the collation name, but since there are thousands of locales in popular use it would not be useful to :meth:`preregister ` them all.  Using :meth:`~Connection.collationneeded` tells you when you need to register them.

  .. seealso::

    * :meth:`~Connection.createcollation`

  Calls: `sqlite3_collation_needed `__

.. index:: sqlite3_db_config

.. method:: Connection.config(op[, *args])

  :param op: A `configuration operation `__
  :param args: Zero or more arguments as appropriate for *op*

  Calls: `sqlite3_db_config `__

.. index:: sqlite3_create_function_v2

.. method:: Connection.createaggregatefunction(name, factory[, numargs=-1])

  Registers an aggregate function.  Aggregate functions operate on all the relevant rows such as counting how many there are.

  :param name: The string name of the function.  It should be less than 255 characters
  :param callable: The function that will be called
  :param numargs: How many arguments the function takes, with -1 meaning any number

  When a query starts, the *factory* will be called and must return a tuple of 3 items:

    a context object
      This can be of any type

    a step function
      This function is called once for each row.  The first parameter will be the context object and the remaining parameters will be from the SQL statement.  Any value returned will be ignored.

    a final function
      This function is called at the very end with the context object as a parameter.  The value returned is set as the return for the function.  The final function is always called even if an exception was raised by the step function.  This allows you to ensure any resources are cleaned up.

  .. note::

    You can register the same named function but with different callables and *numargs*.  See :meth:`~Connection.createscalarfunction` for an example.

  .. seealso::

    * :ref:`Example `
    * :meth:`~Connection.createscalarfunction`

  Calls: `sqlite3_create_function_v2 `__

.. index:: sqlite3_create_collation_v2

.. method:: Connection.createcollation(name, callback)

  You can control how SQLite sorts (termed `collation `_) when giving the ``COLLATE`` term to a `SELECT `_.  For example your collation could take into account locale or do numeric sorting.

  The *callback* will be called with two items.  It should return -1 if the first is less then the second, 0 if they are equal, and 1 if first is greater::

    def mycollation(one, two):
        if one < two:
            return -1
        if one == two:
            return 0
        if one > two:
            return 1

  .. seealso::

    * :ref:`Example `

  Calls: `sqlite3_create_collation_v2 `__

.. index:: sqlite3_create_module_v2

.. method:: Connection.createmodule(name, datasource)

  Registers a virtual table.  See :ref:`virtualtables` for details.

  .. seealso::

    * :ref:`Example `

  Calls: `sqlite3_create_module_v2 `__

.. index:: sqlite3_create_function_v2

.. method:: Connection.createscalarfunction(name, callable[, numargs=-1, deterministic=False])

  Registers a scalar function.  Scalar functions operate on one set of parameters once.

  :param name: The string name of the function.  It should be less than 255 characters
  :param callable: The function that will be called
  :param numargs: How many arguments the function takes, with -1 meaning any number
  :param deterministic: When True this means the function always returns the same result for the same input arguments.  SQLite's query planner can perform additional optimisations for deterministic functions.  For example a random() function is not deterministic while one that returns the length of a string is.

  .. note::

    You can register the same named function but with different *callable* and *numargs*.  For example::

      connection.createscalarfunction("toip", ipv4convert, 4)
      connection.createscalarfunction("toip", ipv6convert, 16)
      connection.createscalarfunction("toip", strconvert, -1)

    The one with the correct *numargs* will be called and only if that doesn't exist then the one with negative *numargs* will be called.

  .. seealso::

    * :ref:`Example `
    * :meth:`~Connection.createaggregatefunction`

  Calls: `sqlite3_create_function_v2 `__

.. method:: Connection.cursor() -> Cursor

  Creates a new :class:`Cursor` object on this database.

  :rtype: :class:`Cursor`

.. index:: sqlite3_db_filename

.. method:: Connection.db_filename(name) -> String

  Returns the full filename of the named (attached) database.  The main database is named "main".

  Calls: `sqlite3_db_filename `__

.. index:: sqlite3_enable_load_extension

.. method:: Connection.enableloadextension(enable)

  Enables/disables `extension loading `_ which is disabled by default.

  :param enable: If True then extension loading is enabled, else it is disabled.

  .. seealso::

    * :meth:`~Connection.loadextension`

  Calls: `sqlite3_enable_load_extension `__

.. index:: sqlite3_file_control

.. method:: Connection.filecontrol(dbname, op, pointer) -> bool

  Calls the :meth:`~VFSFile.xFileControl` method on the :ref:`VFS` implementing :class:`file access ` for the database.

  :param dbname: The name of the database to affect (eg "main", "temp", attached name)
  :param op: A `numeric code `_ with values less than 100 reserved for SQLite internal use.
  :param pointer: A number which is treated as a ``void pointer`` at the C level.
  :returns: True or False indicating if the VFS understood the op.

  If you want data returned back then the *pointer* needs to point to something mutable.  Here is an example using `ctypes `_ of passing a Python dictionary to :meth:`~VFSFile.xFileControl` which can then modify the dictionary to set return values::

    obj={"foo": 1, 2: 3}              # object we want to pass
    objwrap=ctypes.py_object(obj)     # objwrap must live before and after the call else
                                      # it gets garbage collected
    connection.filecontrol(
        "main",                       # which db
        123,                          # our op code
        ctypes.addressof(objwrap))    # get pointer

  The :meth:`~VFSFile.xFileControl` method then looks like this::

    def xFileControl(self, op, pointer):
        if op==123:                   # our op code
            obj=ctypes.py_object.from_address(pointer).value
            # play with obj - you can use id() to verify it is the same
            print obj["foo"]
            obj["result"]="it worked"
            return True
        else:
            # pass to parent/superclass
            return super(MyFile, self).xFileControl(op, pointer)

  This is how you set the chunk size by which the database grows.  Do not combine it into one line as the c_int would be garbage collected before the filecontrol call is made::

    chunksize=ctypes.c_int(32768)
    connection.filecontrol("main", apsw.SQLITE_FCNTL_CHUNK_SIZE, ctypes.addressof(chunksize))

  Calls: `sqlite3_file_control `__

.. index:: sqlite3_db_filename

.. attribute:: Connection.filename

  The filename of the database.

  Calls: `sqlite3_db_filename `__

.. index:: sqlite3_get_autocommit

.. method:: Connection.getautocommit() -> bool

  Returns if the Connection is in auto commit mode (ie not in a transaction).

  Calls: `sqlite3_get_autocommit `__

.. method:: Connection.getexectrace() -> callable or None

  Returns the currently installed (via :meth:`~Connection.setexectrace`) execution tracer.

  .. seealso::

    * :ref:`tracing`

.. method:: Connection.getrowtrace() -> callable or None

  Returns the currently installed (via :meth:`~Connection.setrowtrace`) row tracer.

  .. seealso::

    * :ref:`tracing`

.. index:: sqlite3_interrupt

.. method:: Connection.interrupt()

  Causes any pending operations on the database to abort at the earliest opportunity.  You can call this from any thread.  For example you may have a long running query when the user presses the stop button in your user interface.  :exc:`InterruptError` will be raised in the query that got interrupted.

  Calls: `sqlite3_interrupt `__

.. index:: sqlite3_last_insert_rowid

.. method:: Connection.last_insert_rowid() -> int

  Returns the integer key of the most recent insert in the database.

  Calls: `sqlite3_last_insert_rowid `__

.. index:: sqlite3_limit

.. method:: Connection.limit(id[, newval]) -> int

  If called with one parameter then the current limit for that *id* is returned.  If called with two then the limit is set to *newval*.

  :param id: One of the `runtime limit ids `_
  :param newval: The new limit.  This is a 32 bit signed integer even on 64 bit platforms.
  :returns: The limit in place on entry to the call.

  .. seealso::

    * :ref:`Example `

  Calls: `sqlite3_limit `__

.. index:: sqlite3_load_extension

.. method:: Connection.loadextension(filename[, entrypoint])

  Loads *filename* as an `extension `_

  :param filename: The file to load.  This must be Unicode or Unicode compatible
  :param entrypoint: The initialization method to call.  If this parameter is not supplied then the SQLite default of ``sqlite3_extension_init`` is used.
  :raises ExtensionLoadingError: If the extension could not be loaded.  The exception string includes more details.

  .. seealso::

    * :meth:`~Connection.enableloadextension`

  Calls: `sqlite3_load_extension `__

.. attribute:: Connection.open_flags

  The integer flags used to open the database.

.. attribute:: Connection.open_vfs

  The string name of the vfs used to open the database.

.. index:: sqlite3_overload_function

.. method:: Connection.overloadfunction(name, nargs)

  Registers a placeholder function so that a virtual table can provide an implementation via :meth:`VTTable.FindFunction`.

  :param name: Function name
  :param nargs: How many arguments the function takes

  Due to :cvstrac:`3507` underlying errors will not be returned.

  Calls: `sqlite3_overload_function `__

..
index:: sqlite3_db_readonly .. method:: Connection.readonly(name) -> bool True or False if the named (attached) database was opened readonly or file permissions don't allow writing. The main database is named "main". An exception is raised if the database doesn't exist. Calls: `sqlite3_db_readonly `__ .. index:: sqlite3_set_authorizer .. method:: Connection.setauthorizer(callable) While `preparing `_ statements, SQLite will call any defined authorizer to see if a particular action is ok to be part of the statement. Typical usage would be if you are running user supplied SQL and want to prevent harmful operations. You should also set the :class:`statementcachesize ` to zero. The authorizer callback has 5 parameters: * An `operation code `_ * A string (or None) dependent on the operation `(listed as 3rd) `_ * A string (or None) dependent on the operation `(listed as 4th) `_ * A string name of the database (or None) * Name of the innermost trigger or view doing the access (or None) The authorizer callback should return one of :const:`SQLITE_OK`, :const:`SQLITE_DENY` or :const:`SQLITE_IGNORE`. (:const:`SQLITE_DENY` is returned if there is an error in your Python code). .. seealso:: * :ref:`Example ` * :ref:`statementcache` Calls: `sqlite3_set_authorizer `__ .. index:: sqlite3_busy_handler .. method:: Connection.setbusyhandler(callable) Sets the busy handler to *callable*, which will be called with one integer argument which is the number of prior calls to the busy callback for the same lock. If the busy callback returns something that evaluates to False, then SQLite returns :const:`SQLITE_BUSY` to the calling code. If the callback returns something that evaluates to True, then SQLite tries to open the table again and the cycle repeats. If you previously called :meth:`~Connection.setbusytimeout` then calling this overrides that. .. seealso:: * :meth:`Connection.setbusytimeout` * :ref:`Busy handling ` Calls: `sqlite3_busy_handler `__ .. index:: sqlite3_busy_timeout ..
method:: Connection.setbusytimeout(milliseconds) If the database is locked such as when another connection is making changes, SQLite will keep retrying. This sets the maximum amount of time SQLite will keep retrying before giving up. If the database is still busy then :class:`apsw.BusyError` will be raised. :param milliseconds: Maximum thousandths of a second to wait. If you previously called :meth:`~Connection.setbusyhandler` then calling this overrides that. .. seealso:: * :meth:`Connection.setbusyhandler` * :ref:`Busy handling ` Calls: `sqlite3_busy_timeout `__ .. index:: sqlite3_commit_hook .. method:: Connection.setcommithook(callable) *callable* will be called just before a commit. It should return zero for the commit to go ahead and non-zero for it to be turned into a rollback. In the case of an exception in your callable, a non-zero (ie rollback) value is returned. .. seealso:: * :ref:`Example ` Calls: `sqlite3_commit_hook `__ .. method:: Connection.setexectrace(callable) *callable* is called with the cursor, statement and bindings for each :meth:`~Cursor.execute` or :meth:`~Cursor.executemany` on this Connection, unless the :class:`Cursor` installed its own tracer. Your execution tracer can also abort execution of a statement. If *callable* is :const:`None` then any existing execution tracer is removed. .. seealso:: * :ref:`tracing` * :ref:`rowtracer` * :meth:`Cursor.setexectrace` .. index:: sqlite3_profile .. method:: Connection.setprofile(callable) Sets a callable which is invoked at the end of execution of each statement and passed the statement string and how long it took to execute. (The execution time is in nanoseconds.) Note that it is called only on completion. If for example you do a ``SELECT`` and only read the first result, then you won't reach the end of the statement. Calls: `sqlite3_profile `__ .. index:: sqlite3_progress_handler ..
method:: Connection.setprogresshandler(callable[, nsteps=20]) Sets a callable which is invoked every *nsteps* SQLite instructions. The callable should return a non-zero value to abort or zero to continue. (If there is an error in your Python *callable* then non-zero will be returned). .. seealso:: * :ref:`Example ` Calls: `sqlite3_progress_handler `__ .. index:: sqlite3_rollback_hook .. method:: Connection.setrollbackhook(callable) Sets a callable which is invoked during a rollback. If *callable* is :const:`None` then any existing rollback hook is removed. The *callable* is called with no parameters and the return value is ignored. Calls: `sqlite3_rollback_hook `__ .. method:: Connection.setrowtrace(callable) *callable* is called with the cursor and row being returned for :class:`cursors ` associated with this Connection, unless the Cursor installed its own tracer. You can change the data that is returned or cause the row to be skipped altogether. If *callable* is :const:`None` then any existing row tracer is removed. .. seealso:: * :ref:`tracing` * :ref:`rowtracer` * :meth:`Cursor.setexectrace` .. index:: sqlite3_update_hook .. method:: Connection.setupdatehook(callable) Calls *callable* whenever a row is updated, deleted or inserted. If *callable* is :const:`None` then any existing update hook is removed. The update hook cannot make changes to the database while the query is still executing, but can record them for later use or apply them in a different connection. The update hook is called with 4 parameters: type (int) :const:`SQLITE_INSERT`, :const:`SQLITE_DELETE` or :const:`SQLITE_UPDATE` database name (string) This is ``main`` for the database or the name specified in `ATTACH `_ table name (string) The table on which the update happened rowid (64 bit integer) The affected row .. seealso:: * :ref:`Example ` Calls: `sqlite3_update_hook `__ .. index:: sqlite3_wal_hook ..
method:: Connection.setwalhook(callable) *callable* will be called just after data is committed in :ref:`wal` mode. It should return :const:`SQLITE_OK` or an error code. The callback is called with 3 parameters: * The Connection * The database name (eg "main" or the name of an attached database) * The number of pages in the wal log You can pass in None in order to clear an existing hook. Calls: `sqlite3_wal_hook `__ .. method:: Connection.sqlite3pointer() -> int Returns the underlying `sqlite3 * `_ for the connection. This method is useful if there are other C level libraries in the same process and you want them to use the APSW connection handle. The value is returned as a number using :meth:`PyLong_FromVoidPtr` under the hood. You should also ensure that you increment the reference count on the :class:`Connection` for as long as the other libraries are using the pointer. It is also a very good idea to call :meth:`sqlitelibversion` and ensure it is the same as the other libraries. .. index:: sqlite3_db_status .. method:: Connection.status(op, reset=False) -> (int, int) Returns current and highwater measurements for the database. :param op: A `status parameter `_ :param reset: If *True* then the highwater is set to the current value :returns: A tuple of current value and highwater value .. seealso:: The :func:`status` example which works in exactly the same way. * :ref:`Status example ` Calls: `sqlite3_db_status `__ .. index:: sqlite3_total_changes .. method:: Connection.totalchanges() -> int Returns the total number of database rows that have been modified, inserted, or deleted since the database connection was opened. Calls: `sqlite3_total_changes `__ .. index:: sqlite3_wal_autocheckpoint .. method:: Connection.wal_autocheckpoint(n) Sets how often the :ref:`wal` checkpointing is run. :param n: A number representing the checkpointing interval or zero/negative to disable auto checkpointing. Calls: `sqlite3_wal_autocheckpoint `__ ..
index:: sqlite3_wal_checkpoint_v2 .. method:: Connection.wal_checkpoint(dbname=None, mode=apsw.SQLITE_CHECKPOINT_PASSIVE) -> ( int, int ) Does a WAL checkpoint. Has no effect if the database(s) are not in WAL mode. :param dbname: The name of the database or all databases if None :param mode: One of the `checkpoint modes `__. :return: A tuple of the size of the WAL log in frames and the number of frames checkpointed as described in the `documentation `__. Calls: `sqlite3_wal_checkpoint_v2 `__ .. Automatically generated by code2rst.py code2rst.py src/cursor.c doc/cursor.rst Edit src/cursor.c not this file! .. currentmodule:: apsw .. _cursors: Cursors (executing SQL) *********************** A cursor encapsulates a SQL query and its results. To make a new cursor you should call :meth:`~Connection.cursor` on your database:: db=apsw.Connection("databasefilename") cursor=db.cursor() A cursor executes SQL:: cursor.execute("create table example(title, isbn)") You can also read data back. The row is returned as a tuple of the column values:: for row in cursor.execute("select * from example"): print row There are two ways of supplying data to a query. The **really bad** way is to compose a string:: sql="insert into example values('%s', %d)" % ("string", 8390823904) cursor.execute(sql) If there were any single quotes in the string then you would have invalid syntax. Additionally this is how `SQL injection attacks `_ happen. Instead you should use bindings:: sql="insert into example values(?, ?)" cursor.execute(sql, ("string", 8390823904)) # You can also use dictionaries sql="insert into example values(:title, :isbn)" cursor.execute(sql, {"title": "string", "isbn": 8390823904}) # You can use local variables as the dictionary title="..." isbn="...." cursor.execute(sql, locals()) Cursors are cheap. Use as many as you need.
It is safe to use them across threads, such as calling :meth:`~Cursor.execute` in one thread, passing the cursor to another thread that then calls :meth:`Cursor.next`. The only thing you can't do is call methods at exactly the same time on the same cursor in two different threads - eg trying to call :meth:`~Cursor.execute` in both at the same time, or :meth:`~Cursor.execute` in one and :meth:`Cursor.next` in another. (If you do attempt this, it will be detected and :exc:`ThreadingViolationError` will be raised.) Behind the scenes a :class:`Cursor` maps to a `SQLite statement `_. APSW maintains a :ref:`cache ` so that the mapping is very fast, and the SQLite objects are reused when possible. A unique feature of APSW is that your query can be multiple semi-colon separated statements. For example:: cursor.execute("select ... ; insert into ... ; update ... ; select ...") .. note:: SQLite fetches data as it is needed. If table *example* had 10 million rows it would only get the next row as requested (the for loop effectively calls :meth:`~Cursor.next` to get each row). This code would not work as expected:: for row in cursor.execute("select * from example"): cursor.execute("insert .....") The nested :meth:`~Cursor.execute` would start a new query abandoning any remaining results from the ``SELECT`` cursor. There are two ways to work around this. Use a different cursor:: for row in cursor1.execute("select * from example"): cursor2.execute("insert ...") You can also get all the rows immediately by filling in a list:: rows=list( cursor.execute("select * from example") ) for row in rows: cursor.execute("insert ...") This last approach is recommended since you don't have to worry about the database changing while doing the ``select``. You should also understand transactions and where to put the transaction boundaries. .. note:: Cursors on the same :ref:`Connection ` are not isolated from each other. 
Anything done on one cursor is immediately visible to all other Cursors on the same connection. This still applies if you start transactions. Connections are isolated from each other with cursors on other connections not seeing changes until they are committed. .. seealso:: * `SQLite transactions `_ * `Atomic commit `_ * `Example of changing the database while running a query problem `_ * :ref:`Benchmarking` Cursor class ============ .. class:: Cursor You obtain cursors by calling :meth:`Connection.cursor`. .. method:: Cursor.close(force=False) It is very unlikely you will need to call this method. It exists because older versions of SQLite required all Connection/Cursor activity to be confined to the same thread. That is no longer the case. Cursors are automatically garbage collected and when there are none left will allow the connection to be garbage collected if it has no other references. A cursor is open if there are remaining statements to execute (if your query included multiple statements), or if you called :meth:`~Cursor.executemany` and not all of the *sequenceofbindings* have been used yet. :param force: If False then you will get exceptions if there is remaining work to do in the Cursor such as more statements to execute, more data from the executemany binding sequence etc. If force is True then all remaining work and state information will be silently discarded. .. attribute:: Cursor.description Based on the `DB-API cursor property `__, this returns the same as :meth:`getdescription` but with 5 Nones appended. See also :issue:`131`. .. index:: sqlite3_prepare_v2, sqlite3_step, sqlite3_bind_int64, sqlite3_bind_null, sqlite3_bind_text, sqlite3_bind_double, sqlite3_bind_blob, sqlite3_bind_zeroblob .. method:: Cursor.execute(statements[, bindings]) -> iterator Executes the statements using the supplied bindings. Execution returns when the first row is available or all statements have completed.
:param statements: One or more SQL statements such as ``select * from books`` or ``begin; insert into books ...; select last_insert_rowid(); end``. :param bindings: If supplied should either be a sequence or a dictionary. Each item must be one of the :ref:`supported types `. If you use numbered bindings in the query then supply a sequence. Any sequence will work including lists and iterators. For example:: cursor.execute("insert into books values(?,?)", ("title", "number")) .. note:: A common gotcha is wanting to insert a single string but not putting it in a tuple:: cursor.execute("insert into books values(?)", "a title") The string is a sequence of 7 characters and so it will look like you are supplying 7 bindings when only one is needed. Use a one item tuple with a trailing comma like this:: cursor.execute("insert into books values(?)", ("a title",) ) If you used names in the statement then supply a dictionary as the binding. It is ok to be missing entries from the dictionary - None/null will be used. For example:: cursor.execute("insert into books values(:title, :isbn, :rating)", {"title": "book title", "isbn": 908908908}) The return is the cursor object itself which is also an iterator. This allows you to write:: for row in cursor.execute("select * from books"): print row :raises TypeError: The bindings supplied were neither a dict nor a sequence :raises BindingsError: You supplied too many or too few bindings for the statements :raises IncompleteExecutionError: There are remaining unexecuted queries from your last execute .. seealso:: * :ref:`executionmodel` * :ref:`Example ` Calls: * `sqlite3_prepare_v2 `__ * `sqlite3_step `__ * `sqlite3_bind_int64 `__ * `sqlite3_bind_null `__ * `sqlite3_bind_text `__ * `sqlite3_bind_double `__ * `sqlite3_bind_blob `__ * `sqlite3_bind_zeroblob `__ .. method:: Cursor.executemany(statements, sequenceofbindings) -> iterator This method is for when you want to execute the same statements over a sequence of bindings.
Conceptually it does this:: for binding in sequenceofbindings: cursor.execute(statements, binding) Example:: rows=( (1, 7), (2, 23), (4, 92), (12, 12) ) cursor.executemany("insert into nums values(?,?)", rows) The return is the cursor itself which acts as an iterator. Your statements can return data. See :meth:`~Cursor.execute` for more information. .. method:: Cursor.fetchall() -> list Returns all remaining result rows as a list. This method is defined in DBAPI. It is a longer way of doing ``list(cursor)``. .. method:: Cursor.fetchone() -> row or None Returns the next row of data or None if there are no more rows. .. method:: Cursor.getconnection() -> Connection Returns the :class:`Connection` this cursor belongs to. An example usage is to get another cursor:: def func(cursor): # I don't want to alter existing cursor, so make a new one mycursor=cursor.getconnection().cursor() mycursor.execute("....") .. index:: sqlite3_column_name, sqlite3_column_decltype .. method:: Cursor.getdescription() -> tuple Returns a tuple describing each column in the result row. The return is identical for every row of the results. You can only call this method once you have started executing a statement and before you have finished:: # This will error cursor.getdescription() for row in cursor.execute("select ....."): # this works print cursor.getdescription() print row The information about each column is a tuple of ``(column_name, declared_column_type)``. The type is what was declared in the ``CREATE TABLE`` statement - the value returned in the row will be whatever type you put in for that row and column. (This is known as `manifest typing `_ which is also the way that Python works. The variable ``a`` could contain an integer, and then you could put a string in it. Other static languages such as C or other SQL databases only let you put one type in - eg ``a`` could only contain an integer or a string, but never both.) 
Example:: cursor.execute("create table books(title string, isbn number, wibbly wobbly zebra)") cursor.execute("insert into books values(?,?,?)", (97, "fjfjfj", 3.7)) cursor.execute("insert into books values(?,?,?)", ("fjfjfj", 3.7, 97)) for row in cursor.execute("select * from books"): print cursor.getdescription() print row Output:: # row 0 - description (('title', 'string'), ('isbn', 'number'), ('wibbly', 'wobbly zebra')) # row 0 - values (97, 'fjfjfj', 3.7) # row 1 - description (('title', 'string'), ('isbn', 'number'), ('wibbly', 'wobbly zebra')) # row 1 - values ('fjfjfj', 3.7, 97) Calls: * `sqlite3_column_name `__ * `sqlite3_column_decltype `__ .. method:: Cursor.getexectrace() -> callable or None Returns the currently installed (via :meth:`~Cursor.setexectrace`) execution tracer. .. seealso:: * :ref:`tracing` .. method:: Cursor.getrowtrace() -> callable or None Returns the currently installed (via :meth:`~Cursor.setrowtrace`) row tracer. .. seealso:: * :ref:`tracing` .. method:: Cursor.setexectrace(callable) *callable* is called with the cursor, statement and bindings for each :meth:`~Cursor.execute` or :meth:`~Cursor.executemany` on this cursor. If *callable* is :const:`None` then any existing execution tracer is removed. .. seealso:: * :ref:`tracing` * :ref:`executiontracer` * :meth:`Connection.setexectrace` .. method:: Cursor.setrowtrace(callable) *callable* is called with cursor and row being returned. You can change the data that is returned or cause the row to be skipped altogether. If *callable* is :const:`None` then any existing row tracer is removed. .. seealso:: * :ref:`tracing` * :ref:`rowtracer` * :meth:`Connection.setexectrace` ********************* Execution and tracing ********************* .. currentmodule:: apsw ..
_executionmodel: Execution model =============== This section only matters if you give multiple SQL statements in one go to :meth:`Cursor.execute`. (Statements are separated by semi-colons.) SQLite does execution in two steps. First a statement is prepared, which verifies the syntax, tables and fields and converts the statement into an internal representation. The prepared statement is then run. Execution stops when a row is available, there is an error or the statement is complete. The :meth:`Cursor.execute` method automatically does the preparing and starts execution. If none of the statements return rows then execution will go to the end. If a row is returned then you need to call :meth:`Cursor.next` to get the row values or use the cursor as an iterator. Execution will resume as necessary to satisfy :meth:`~Cursor.next` calls. However this means that if you don't read the rows returned then the rest of your statements won't be executed. APSW will detect unexecuted previous statements and generate an exception. For example:: >>> cursor.execute("select * from foo ; create table bar(x,y,z)") >>> cursor.execute("create table bam(x,y,z)") Traceback (most recent call last): File "", line 1, in ? apsw.IncompleteExecutionError: Error: there are still remaining sql statements to execute Because I didn't read the results of ``select * from foo``, the following create table command didn't have a chance to get executed. On the next execute that condition is detected and an exception raised. Multi-threading and re-entrancy =============================== APSW lets you use SQLite in multi-threaded programs and will let other threads execute while SQLite is working. (Technically the `GIL `_ is released when `sqlite3_prepare_v2 `_, `sqlite3_step `_ or `sqlite3_open_v2 `_ are running, as well as all other functions that could take more than a trivial amount of time or use the SQLite mutex.
The GIL is re-acquired while user defined functions, collations and the various hooks/handlers run.) Note that you cannot use the same cursor object in multiple threads concurrently to execute statements. APSW will detect this and raise an exception. It is safe to use the object serially (eg calling :meth:`Cursor.execute` in one thread and :meth:`Cursor.next` in another). You also can't do things like try to :meth:`~Connection.close` a Connection concurrently in two threads. If you have multiple threads and/or multiple programs accessing the same database then there may be contention for the file. SQLite will return SQLITE_BUSY which will be raised as BusyError. You can call :meth:`Connection.setbusytimeout` to set how long SQLite will retry for or :meth:`Connection.setbusyhandler` to install your own busy handler. Note that SQLite won't call the busy handler or timeout if it believes a deadlock has arisen. SQLite's locking and concurrency are described `here `_. A cursor object can only be executing one query at a time. You cannot issue a new query from inside a trace function or from a user defined function or collation since these are called while executing a query. You can however make new cursors and use those without issue. You may want to remember the Connection object when you set your trace or user defined functions. .. _x64bitpy25: 64 bit hosts, Python 2.5+ ========================= Prior to Python 2.5, you were limited to 32 bit quantities for items in Python such as the length of strings, number of items in a sequence etc. Python 2.5 and above use 64 bit limits on 64 bit hosts. APSW will work correctly with those items in Python 2.5 and above that use 64 bits. Unfortunately SQLite is limited to 32 bit quantities for strings, blobs, number of columns etc even when compiled for 64 bit. Consequently you will get a TooBig exception from APSW which checks if strings/buffers longer than 1GB or 2GB (depends on internal storage) are used.
See :cvstrac:`2125` and :cvstrac:`3246` for more details. .. _statementcache: Statement Cache =============== Each :class:`Connection` maintains a cache mapping SQL queries to a `prepared statement `_ to avoid the overhead of `repreparing `_ queries that are executed multiple times. This is a classic tradeoff using more memory to reduce CPU consumption. By default there are up to 100 entries in the cache. Once the cache is full, the least recently used item is discarded to make space for new items. You should pick a larger cache size if you have more than 100 unique queries that you run. For example if you have 101 different queries you run in order then the cache will not help. You can also :class:`specify zero ` which will disable the statement cache. If you are using :meth:`authorizers ` then you should disable the statement cache. This is because the authorizer callback is only called while statements are being prepared. .. _tracing: Tracing ======= You can install tracers on :class:`cursors ` or :class:`connections ` as an easy way of seeing exactly what gets executed and what is returned. The tracers can also abort execution and cause different values to be returned. This is very useful for diagnostics and testing without having to modify your main code. .. Note:: You cannot issue new execute statements against the cursor your tracer was called from. If you would like to make more queries in the tracer then do them from a new cursor object. For example:: def exectracer(cursor, sql, bindings): cursor.getconnection().cursor().execute("insert into log values(?,?)", (sql, str(bindings))) return True .. _executiontracer: Execution Tracer ---------------- The execution tracer is called after an SQL statement has been prepared. (ie syntax errors will have caused an exception during preparation so you won't see them with a tracer). It is called with three arguments. cursor The cursor executing the statement sql The SQL text being executed bindings The bindings being used.
This may be :const:`None`, a dictionary or a tuple. If the tracer return value evaluates to False/None then execution is aborted with an :exc:`ExecTraceAbort` exception. See the :ref:`example `. Execution tracers can be installed on a specific cursor by calling :meth:`Cursor.setexectrace` or for all cursors by calling :meth:`Connection.setexectrace`, with the cursor tracer taking priority. If you use the Connection :meth:`with ` statement and have a Connection execution tracer then your callback will also be called when APSW creates and releases/rollbacks savepoints. Instead of the first argument being a cursor, it will be the connection itself since there is no cursor involved. .. _rowtracer: Row Tracer ---------- The row tracer is called before each row is returned. It is called with two arguments. cursor The cursor returning the row row A tuple of the values about to be returned Whatever you return from the tracer is what is actually returned to the caller of :meth:`~Cursor.execute`. If you return None then the whole row is skipped. See the :ref:`example `. Row tracers can be installed on a specific cursor by calling :meth:`Cursor.setrowtrace` or for all cursors by calling :meth:`Connection.setrowtrace`, with the cursor tracer taking priority. .. _apswtrace: APSW Trace ========== APSW includes a tracing script as part of the :ref:`source distribution ` named :file:`apswtrace.py`, or you can get a copy directly from :source:`source control ` (choose "Raw File"). This script lets you easily trace SQL execution as well as providing a summary report without modifying your code. If it is installed anywhere on your :envvar:`PYTHONPATH` then you can invoke it with ``-m``:: $ python -m apswtrace [apswtrace options] yourscript.py [your options] You can also invoke it directly:: $ python /path/to/apswtrace.py [apswtrace options] yourscript.py [your options] All output is UTF-8 encoded. The following options are available: .. 
code-block:: text $ python apswtrace.py --help Usage: apswtrace.py [options] pythonscript.py [pythonscriptoptions] This script runs a Python program that uses APSW and reports on SQL queries without modifying the program. This is done by using connection_hooks and registering row and execution tracers. See APSW documentation for more details on the output. Options: -h, --help show this help message and exit -o OUTPUT, --output=OUTPUT Where to send the output. Use a filename, a single dash for stdout, or the words stdout and stderr. [stdout] -s, --sql Log SQL statements as they are executed. [False] -r, --rows Log returned rows as they are returned (turns on sql). [False] -t, --timestamps Include timestamps in logging -i, --thread Include thread id in logging -l LENGTH, --length=LENGTH Max amount of a string to print [30] --no-report A summary report is normally generated at program exit. This turns off the report and saves memory. --report-items=N How many items to report in top lists [15] --reports=REPORTS Which reports to show [summary,popular,aggregate,individual] This is sample output with the following options: :option:`--sql`, :option:`--rows`, :option:`--timestamps`, :option:`--thread` .. code-block:: text 1e0e5a0 0.152 7fccea8456e0 OPEN: ":memory:" unix READWRITE|CREATE 1f72ac0 0.161 7fccea8456e0 OPEN: "testdb" unix READWRITE|CREATE 1f6b8d0 0.162 7fccea8456e0 CURSORFROM: 1f72ac0 DB: "testdb" 1f6b8d0 0.162 7fccea8456e0 SQL: create table foo(x,y,z) 1f6b8d0 0.239 7fccea8456e0 CURSORFROM: 1f72ac0 DB: "testdb" 1f6b8d0 0.239 7fccea8456e0 SQL: insert into foo values(?,?,?) BINDINGS: ("kjfhgk", "gkjlfdhgjkhsdfkjg", "gklsdfjgkldfjhnbnvc,mnxb,mnxcv..") 1f6b8d0 0.242 7fccea8456e0 CURSORFROM: 1f72ac0 DB: "testdb" 1f6b8d0 0.242 7fccea8456e0 SQL: insert into foo values(?,?,?) BINDINGS: ("gdfklhj", ":gjkhgfdsgfd", "gjkfhgjkhdfkjh") 1f6b8d0 0.244 7fccea8456e0 CURSORFROM: 1f72ac0 DB: "testdb" 1f6b8d0 0.245 7fccea8456e0 SQL: insert into foo values(?,?,?) 
BINDINGS: ("gdfjkhg", "gkjlfd", "") 1f6b8d0 0.247 7fccea8456e0 CURSORFROM: 1f72ac0 DB: "testdb" 1f6b8d0 0.247 7fccea8456e0 SQL: insert into foo values(?,?,?) BINDINGS: (1, 2, 30) 1f6b8d0 0.257 7fccea8456e0 CURSORFROM: 1f72ac0 DB: "testdb" 1f6b8d0 0.257 7fccea8456e0 SQL: select longest(x,y,z) from foo 1f6b8d0 0.257 7fccea8456e0 ROW: ("gklsdfjgkldfjhnbnvc,mnxb,mnxcv..") Each row starts with the following fields: id This is the `id `_ of the :class:`Cursor` or :class:`Connection`. You can easily `filter `_ the log if you just want to find out what happened on a specific cursor or connection. timestamp This is the time since the program started in seconds threadid The unique `thread identifier `_ The remainder of the line has one of the following forms: OPEN: "dbname" vfs open_flags A :class:`Connection` has been opened. The *dbname* is the filename exactly as given in the call to :class:`Connection`. *vfs* is the name of the :ref:`VFS ` used to open the database. *open_flags* is the set of :data:`flags ` supplied with the leading *SQLITE_OPEN* prefix omitted. CURSORFROM: connectionid DB: "dbname" A cursor has been allocated. The *id* at the beginning of this row is of the new cursor. *connectionid* is the id of the Connection it was created from. The *dbname* is provided for convenience. This message is logged the first time a cursor issues a query. SQL: query BINDINGS: bindings A query was issued on a cursor. ROW: row A result row was returned by a cursor. A report is also generated by default. This is example output from running the test suite. When calculating time for queries, your code execution time is included as well. For example if your query returned 10 rows and you slept for 1 second on reading each row then the time for the query will be recorded as 10 seconds. Because you can have multiple queries active at the same time, as well as across multiple threads, the total processing time can be larger than the program run time. 
The processing time is only recorded for queries that have no results or where you read all the result rows. Processing time also includes waiting time on busy connections. .. code-block:: text APSW TRACE SUMMARY REPORT Program run time 83.073 seconds Total connections 1308 Total cursors 3082 Number of threads used for queries 21 Total queries 127973 Number of distinct queries 578 Number of rows returned 2369 Time spent processing queries 120.530 seconds This shows how many times each query was run. .. code-block:: text MOST POPULAR QUERIES 121451 insert into foo values(?) 1220 insert into abc values(1,2,?) 1118 select x from foo 909 select timesten(x) from foo where x=? order by x 654 select * from foo 426 update t1 set b=b||a||b 146 begin 88 create table foo(x,y) 79 insert into foo values(1,2) 76 rollback 71 pragma locking_mode=exclusive 71 insert into t1 values(2, 'abcdefghijklmnopqrstuvwxyz') 71 insert into t1 values(1, 'abcdefghijklmnopqrstuvwxyz') 71 insert into t1 select 4-a, b from t2 71 insert into foo values(date('now'), date('now')) This shows how many times a query was run and the sum of the processing times in seconds. The ``begin immediate`` query illustrates how time spent busy waiting is included. .. code-block:: text LONGEST RUNNING - AGGREGATE 413 94.305 select timesten(x) from foo where x=? order by x 120637 12.941 select * from foo 12 4.115 begin immediate 121449 2.179 insert into foo values(?) 1220 1.509 insert into abc values(1,2,?) 3 1.380 create index foo_x on foo(x) 426 0.715 update t1 set b=b||a||b 38 0.420 insert into foo values(?,?) 71 0.241 create table t1(a unique, b) 88 0.206 create table foo(x,y) 61 0.170 create table abc(a,b,c) 27 0.165 insert into foo values(?,?,?) 1 0.158 select row,x,snap(x) from foo 80 0.150 insert into foo values(1,2) 71 0.127 insert into foo values(date('now'), date('now')) This shows the longest running queries with time in seconds. .. 
code-block:: text LONGEST RUNNING - INDIVIDUAL 3.001 begin immediate 1.377 create index foo_x on foo(x) 1.102 begin immediate 0.944 select timesten(x) from foo where x=? order by x 0.893 select timesten(x) from foo where x=? order by x 0.817 select timesten(x) from foo where x=? order by x 0.816 select timesten(x) from foo where x=? order by x 0.786 select timesten(x) from foo where x=? order by x 0.783 select timesten(x) from foo where x=? order by x 0.713 select timesten(x) from foo where x=? order by x 0.701 select timesten(x) from foo where x=? order by x 0.651 select timesten(x) from foo where x=? order by x 0.646 select timesten(x) from foo where x=? order by x 0.631 select timesten(x) from foo where x=? order by x 0.620 select timesten(x) from foo where x=? order by x
Tips **** .. currentmodule:: apsw These tips are based on mailing list postings. You are recommended to read all the documentation as well. SQLite is different =================== While SQLite provides a SQL database like many others out there, it is also unique in many ways. Read about the unique features at the `SQLite website `__. Cursors ======= SQLite only calculates each result row as you request it. For example if your query returns 10 million rows SQLite will not calculate all 10 million up front. Instead the next row will be calculated as you ask for it. Cursors on the same :ref:`Connection ` are not isolated from each other. Anything done on one cursor is immediately visible to all other Cursors on the same connection. This still applies if you start transactions. Connections are isolated from each other. Read more about :ref:`Cursors `. Bindings ======== When using a cursor, always use bindings. `String interpolation `_ may seem more convenient but you will encounter difficulties. 
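The difficulties are easy to demonstrate. The sketch below uses the standard library's ``sqlite3`` module purely for illustration - it accepts the same ``?`` binding style - so the behaviour shown carries over directly to APSW cursors:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table users(name)")

# A perfectly ordinary value that breaks naive string interpolation
tricky = "O'Brien"

# With a binding the value is stored verbatim - no quoting to worry about
con.execute("insert into users values(?)", (tricky,))
stored = con.execute("select name from users").fetchone()[0]
assert stored == tricky

# String interpolation fails because the apostrophe ends the SQL string early
failed = False
try:
    con.execute("insert into users values('%s')" % (tricky,))
except sqlite3.OperationalError:
    failed = True
assert failed
```

With hostile rather than accidental input, the interpolated form is exactly where SQL injection attacks come from.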
You may feel that you have complete control over all data accessed but if your code is at all useful then you will find it being used more and more widely. The computer will always be better than you at parsing SQL and the bad guys have years of experience finding and using `SQL injection attacks `_ in ways you never even thought possible. The :ref:`documentation ` gives many examples of how to use various forms of bindings. Unicode ======= SQLite only stores text as Unicode. However it relies on SQLite API users to provide valid UTF-8 and does not double check. (APSW only provides valid UTF-8). It is possible using other wrappers and tools to cause invalid UTF-8 to appear in the database which will then cause retrieval errors. You can work around this by using the SQL *CAST* operator. For example:: SELECT id, CAST(label AS blob) from table Then proceed to give the `Joel Unicode article `_ to all people involved. .. _diagnostics_tips: Diagnostics =========== Both SQLite and APSW provide detailed diagnostic information. Errors will be signalled via an :doc:`exception `. APSW ensures you have :ref:`detailed information ` both in the stack trace as well as what data APSW/SQLite was operating on. SQLite has a `warning/error logging facility `__. To set your own logger use:: def handler(errcode, message): errstr=apsw.mapping_result_codes[errcode & 255] extended=errcode & ~ 255 print "SQLITE_LOG: %s (%d) %s %s" % (message, errcode, errstr, apsw.mapping_extended_result_codes.get(extended, "")) apsw.config(apsw.SQLITE_CONFIG_LOG, handler) .. note:: The handler **must** be set before any other calls to SQLite. Once SQLite is initialised you cannot change the logger - a :exc:`MisuseError` will happen (this restriction is in SQLite not APSW). 
This is an example of what gets printed when I use ``/dev/null`` as the database name in the :class:`Connection` and then tried to create a table.:: SQLITE_LOG: cannot open file at line 28729 of [7dd4968f23] (14) SQLITE_CANTOPEN SQLITE_LOG: os_unix.c:28729: (2) open(/dev/null-journal) - No such file or directory (14) SQLITE_CANTOPEN SQLITE_LOG: statement aborts at 38: [create table foo(x,y);] unable to open database file (14) SQLITE_CANTOPEN Parsing SQL =========== Sometimes you want to know what a particular SQL statement does. The SQLite query parser directly generates VDBE byte code and cannot be hooked into. There is however an easier way. Make a new :class:`Connection` object making sure the statement cache is disabled (size zero). Install an :ref:`execution tracer ` that returns ``apsw.SQLITE_DENY`` which will prevent any queries from running. Install an :meth:`authorizer `. Then call :meth:`Cursor.execute` on your query. Your authorizer will then be called (multiple times if necessary) with details of what the query does including expanding views and triggers that fire. Finally the execution tracer will fire. If the query string had multiple statements then the execution tracer lets you know how long the first statement was. Unexpected behaviour ==================== Occasionally you may get different results than you expected. Before littering your code with *print*, try :ref:`apswtrace ` with all options turned on to see exactly what is going on. You can also use the :ref:`SQLite shell ` to dump the contents of your database to a text file. For example you could dump it before and after a run to see what changed. One fairly common gotcha is using double quotes instead of single quotes. (This wouldn't be a problem if you use bindings!) SQL strings use single quotes. If you use double quotes then it will mostly appear to work, but they are intended to be used for identifiers such as column names. 
For example if you have a column named ``a b`` (a space b) then you would need to use:: SELECT "a b" from table If you use double quotes and happen to use a string whose contents are the same as a table, alias, column etc then unexpected results will occur. Customizing cursors =================== Some developers want to customize the behaviour of cursors. An example would be wanting a :ref:`rowcount ` or batching returned rows. (These don't make any sense with SQLite but the desire may be to make the code source compatible with other database drivers). APSW does not provide a way to subclass the cursor class or any other form of factory. Consequently you will have to subclass the :class:`Connection` and provide an alternate implementation of :meth:`Connection.cursor`. You should encapsulate the APSW cursor - ie store it as a member of your cursor class and forward calls as appropriate. The cursor only has two important methods - :meth:`Cursor.execute` and :meth:`Cursor.executemany`. If you want to change the rows returned then use a :ref:`row tracer `. For example you could call :meth:`Cursor.getdescription` and return a dictionary instead of a tuple:: def row_factory(cursor, row): return {k[0]: row[i] for i, k in enumerate(cursor.getdescription())} # You can also set this on just a cursor connection.setrowtrace(row_factory) .. _busyhandling: Busy handling ============= SQLite uses locks to coordinate access to the database by multiple connections (within the same process or in a different process). The general goal is to have the locks be as lax as possible (allowing concurrency) and when using more restrictive locks to keep them for as short a time as possible. See the `SQLite documentation `__ for more details. By default you will get a :exc:`BusyError` if a lock cannot be acquired. You can set a :meth:`timeout ` which will keep retrying or a :meth:`callback ` where you decide what to do. 
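As a sketch of the callback approach - the function name and retry policy below are illustrative choices, not anything APSW mandates - SQLite calls the handler with the number of prior failed attempts, and a false return gives up, which surfaces as :exc:`BusyError`:

```python
import time

def busyhandler(priorcalls):
    # priorcalls is how many times the lock acquisition has already failed
    if priorcalls >= 5:
        return False                     # give up -> caller sees BusyError
    time.sleep(0.01 * (priorcalls + 1))  # simple linear backoff, then retry
    return True

# This would be installed with connection.setbusyhandler(busyhandler).
# Exercising the policy directly: five retries, then give up.
results = [busyhandler(n) for n in range(7)]
```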
Database schema =============== When starting a new database, it can be quite difficult to decide what tables and fields to have and how to link them. The technique used to design SQL schemas is called `normalization `_. The page also shows common pitfalls if you don't normalize your schema. .. _sharedcache: Shared Cache Mode ================= SQLite supports a `shared cache mode `__ where multiple connections to the same database can share a cache instead of having their own. It is not recommended that you use this mode. A big issue is that :ref:`busy handling ` is not done the same way. The timeouts and handlers are ignored and instead the :const:`SQLITE_LOCKED_SHAREDCACHE` extended error is returned. Consequently you will have to do your own busy handling. (`SQLite ticket `__, :issue:`59`) The amount of memory and I/O saved is trivial compared to Python's overall memory and I/O consumption. You may also need to tune the shared cache's memory back up to what it would have been with separate connections to get the same performance. The shared cache mode is targeted at embedded systems where every byte of memory and I/O matters. For example an MP3 player may only have kilobytes of memory available for SQLite. .. _wal: Write Ahead Logging =================== SQLite 3.7 introduces `write ahead logging `__ which has several benefits, but also some drawbacks as the page documents. WAL mode is off by default. In addition to turning it on manually for each database, you can also turn it on for all opened databases by using :attr:`connection_hooks`:: def setwal(db): db.cursor().execute("pragma journal_mode=wal") # custom auto checkpoint interval (use zero to disable) db.wal_autocheckpoint(10) apsw.connection_hooks.append(setwal) Note that if wal mode can't be set (eg the database is in memory or temporary) then the attempt to set wal mode will be ignored. The pragma will return the mode in effect. 
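That fallback is easy to observe. The sketch below uses the standard library's ``sqlite3`` module since the pragma behaves the same way there: requesting WAL on an in-memory database is ignored, and the pragma reports the journal mode actually in effect.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The WAL request is ignored for in-memory databases; the pragma
# returns the mode actually in effect rather than raising an error
mode = con.execute("pragma journal_mode=wal").fetchone()[0]
assert mode == "memory"
```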
It is also harmless to call functions like :meth:`Connection.wal_autocheckpoint` on connections that are not in wal mode. If you write your own VFS, then inheriting from an existing VFS that supports WAL will make your VFS support the extra WAL methods too. (Your VFS will point directly to the base methods - there is no indirect call via Python.) apsw-3.8.11.1-r1/doc/_sources/copyright.txt0000664000000000000000000000217212257607024017104 0ustar rootrootCopyright and License ===================== Copyright (C) 2004-2012 `Roger Binns `_ See src/traceback.c for code by Greg Ewing. This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software. Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions: * The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. * Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. * This notice may not be removed or altered from any source distribution. Alternatively you may strike the license above and use it under any OSI approved open source license such as those listed at http://opensource.org/licenses/alphabetical apsw-3.8.11.1-r1/doc/_sources/example.txt0000664000000000000000000005143212565422310016525 0ustar rootroot .. Automatically generated by example2rst.py. Edit that file not this one! Example ======= This code demonstrates usage of the APSW api. It gives you a good overview of all the things that can be done. Also included is output so you can see what gets printed when you run the code. .. 
code-block:: python import os, sys, time import apsw ### ### Check we have the expected version of apsw and sqlite ### print " Using APSW file",apsw.__file__ # from the extension module print " APSW version",apsw.apswversion() # from the extension module print " SQLite lib version",apsw.sqlitelibversion() # from the sqlite library code print "SQLite header version",apsw.SQLITE_VERSION_NUMBER # from the sqlite header file at compile time .. code-block:: text | Using APSW file /space/apsw/apsw.so | APSW version 3.8.11.1-r1 | SQLite lib version 3.8.11.1 | SQLite header version 3008011 .. code-block:: python ### ### Opening/creating database ### connection=apsw.Connection("dbfile") cursor=connection.cursor() .. _example-cursor: .. code-block:: python ### ### simple statement ### cursor.execute("create table foo(x,y,z)") ### ### using different types ### cursor.execute("insert into foo values(?,?,?)", (1, 1.1, None)) # integer, float/real, Null cursor.execute("insert into foo(x) values(?)", ("abc", )) # string (note trailing comma to ensure tuple!) cursor.execute("insert into foo(x) values(?)", # a blob (binary data) (buffer("abc\xff\xfe"), )) # Use b"abc\xff\xfe" for Python 3 ### ### multiple statements ### cursor.execute("delete from foo; insert into foo values(1,2,3); create table bar(a,b,c) ; insert into foo values(4, 'five', 6.0)") ### ### iterator ### for x,y,z in cursor.execute("select x,y,z from foo"): print cursor.getdescription() # shows column names and declared types print x,y,z ### ### iterator - multiple statements ### for m,n,o in cursor.execute("select x,y,z from foo ; select a,b,c from bar"): print m,n,o ### ### bindings - sequence ### cursor.execute("insert into foo values(?,?,?)", (7, 'eight', False)) cursor.execute("insert into foo values(?,?,?1)", ('one', 'two')) # nb sqlite does the numbers from 1 ### ### bindings - dictionary ### cursor.execute("insert into foo values(:alpha, :beta, :gamma)", {'alpha': 1, 'beta': 2, 'gamma': 'three'}) .. 
_example-exectrace: .. code-block:: python ### ### tracing execution ### def mytrace(cursor, statement, bindings): "Called just before executing each statement" print "SQL:",statement if bindings: print "Bindings:",bindings return True # if you return False then execution is aborted cursor.setexectrace(mytrace) cursor.execute("drop table bar ; create table bar(x,y,z); select * from foo where x=?", (3,)) .. code-block:: text | SQL: drop table bar ; | SQL: create table bar(x,y,z); | SQL: select * from foo where x=? | Bindings: (3,) .. _example-rowtrace: .. code-block:: python ### ### tracing results ### def rowtrace(cursor, row): """Called with each row of results before they are handed off. You can return None to cause the row to be skipped or a different set of values to return""" print "Row:", row return row cursor.setrowtrace(rowtrace) for row in cursor.execute("select x,y from foo where x>3"): pass .. code-block:: text | SQL: select x,y from foo where x>3 | Row: (4, 'five') | Row: (7, 'eight') | Row: ('one', 'two') .. code-block:: python # Clear tracers cursor.setrowtrace(None) cursor.setexectrace(None) ### ### executemany ### # (This will work correctly with multiple statements, as well as statements that # return data. The second argument can be anything that is iterable.) cursor.executemany("insert into foo (x) values(?)", ( [1], [2], [3] ) ) # You can also use it for statements that return data for row in cursor.executemany("select * from foo where x=?", ( [1], [2], [3] ) ): print row .. _scalar-example: .. code-block:: python ### ### defining your own functions ### def ilove7(*args): "a scalar function" print "ilove7 got",args,"but I love 7" return 7 connection.createscalarfunction("seven", ilove7) for row in cursor.execute("select seven(x,y) from foo"): print row .. 
code-block:: text | ilove7 got (1, 2) but I love 7 | (7,) | ilove7 got (4, 'five') but I love 7 | (7,) | ilove7 got (7, 'eight') but I love 7 | (7,) | ilove7 got ('one', 'two') but I love 7 | (7,) | ilove7 got (1, 2) but I love 7 | (7,) | ilove7 got (1, None) but I love 7 | (7,) | ilove7 got (2, None) but I love 7 | (7,) | ilove7 got (3, None) but I love 7 | (7,) .. _aggregate-example: .. code-block:: python ### ### aggregate functions are more complex ### # Here we return the longest item when represented as a string. class longest: def __init__(self): self.longest="" def step(self, *args): for arg in args: if len( str(arg) ) > len (self.longest): self.longest=str(arg) def final(self): return self.longest # Under Python 2.3 remove the following line and add # factory=classmethod(factory) at the end @classmethod def factory(cls): return cls(), cls.step, cls.final connection.createaggregatefunction("longest", longest.factory) for row in cursor.execute("select longest(x,y) from foo"): print row .. code-block:: text | ('eight',) .. _collation-example: .. code-block:: python ### ### Defining collations. ### # The default sorting mechanisms don't understand numbers at the end of strings # so here we define a collation that does cursor.execute("create table s(str)") cursor.executemany("insert into s values(?)", ( ["file1"], ["file7"], ["file17"], ["file20"], ["file3"] ) ) for row in cursor.execute("select * from s order by str"): print row .. code-block:: text | ('file1',) | ('file17',) | ('file20',) | ('file3',) | ('file7',) .. 
code-block:: python def strnumcollate(s1, s2): # return -1 if s1<s2, +1 if s1>s2, else 0 # split values into two parts - the head and the numeric tail values=[s1, s2] for vn,v in enumerate(values): for i in range(len(v), 0, -1): if v[i-1] not in "01234567890": break try: v=( v[:i], int(v[i:]) ) except ValueError: v=( v[:i], None ) values[vn]=v # compare if values[0]<values[1]: return -1 if values[0]>values[1]: return 1 return 0 connection.createcollation("strnum", strnumcollate) for row in cursor.execute("select * from s order by str collate strnum"): print row .. code-block:: text | ('file1',) | ('file3',) | ('file7',) | ('file17',) | ('file20',) .. _authorizer-example: .. code-block:: python ### ### Authorizer (eg if you want to control what user supplied SQL can do) ### def authorizer(operation, paramone, paramtwo, databasename, triggerorview): """Called when each operation is prepared. We can return SQLITE_OK, SQLITE_DENY or SQLITE_IGNORE""" # find the operation name print apsw.mapping_authorizer_function[operation], print paramone, paramtwo, databasename, triggerorview if operation==apsw.SQLITE_CREATE_TABLE and paramone.startswith("private"): return apsw.SQLITE_DENY # not allowed to create tables whose names start with private return apsw.SQLITE_OK # always allow connection.setauthorizer(authorizer) cursor.execute("insert into s values('foo')") cursor.execute("select str from s limit 1") .. code-block:: text | SQLITE_INSERT s None main None | SQLITE_SELECT None None None None | SQLITE_READ s str main None .. code-block:: python # Cancel authorizer connection.setauthorizer(None) .. _example-progress-handler: .. 
code-block:: python ### ### progress handler (SQLite 3 experimental feature) ### # something to give us large numbers of random numbers import random def randomintegers(howmany): for i in xrange(howmany): yield (random.randint(0,9999999999),) # create a table with 100 random numbers cursor.execute("begin ; create table bigone(x)") cursor.executemany("insert into bigone values(?)", randomintegers(100)) cursor.execute("commit") # display an ascii spinner _phcount=0 _phspinner="|/-\\" def progresshandler(): global _phcount sys.stdout.write(_phspinner[_phcount%len(_phspinner)]+chr(8)) # chr(8) is backspace sys.stdout.flush() _phcount+=1 time.sleep(0.1) # deliberate delay so we can see the spinner (SQLite is too fast otherwise!) return 0 # returning non-zero aborts # register progresshandler every 20 instructions connection.setprogresshandler(progresshandler, 20) # see it in action - sorting 100 numbers to find the biggest takes a while print "spinny thing -> ", for i in cursor.execute("select max(x) from bigone"): print # newline print i # and the maximum number connection.setprogresshandler(None) .. _example-commithook: .. code-block:: python ### ### commit hook (SQLite3 experimental feature) ### def mycommithook(): print "in commit hook" hour=time.localtime()[3] if hour<8 or hour>17: print "no commits out of hours" return 1 # abort commits outside of 8am through 6pm print "commits okay at this time" return 0 # let commit go ahead connection.setcommithook(mycommithook) try: cursor.execute("begin; create table example(x,y,z); insert into example values (3,4,5) ; commit") except apsw.ConstraintError: print "commit was not allowed" connection.setcommithook(None) .. code-block:: text | in commit hook | no commits out of hours | commit was not allowed .. _example-updatehook: .. 
code-block:: python ### ### update hook ### def myupdatehook(type, databasename, tablename, rowid): print "Updated: %s database %s, table %s, row %d" % ( apsw.mapping_authorizer_function[type], databasename, tablename, rowid) connection.setupdatehook(myupdatehook) cursor.execute("insert into s values(?)", ("file93",)) cursor.execute("update s set str=? where str=?", ("file94", "file93")) cursor.execute("delete from s where str=?", ("file94",)) connection.setupdatehook(None) .. code-block:: text | Updated: SQLITE_INSERT database main, table s, row 7 | Updated: SQLITE_UPDATE database main, table s, row 7 | Updated: SQLITE_DELETE database main, table s, row 7 .. _example-blobio: .. code-block:: python ### ### Blob I/O ### cursor.execute("create table blobby(x,y)") # Add a blob we will fill in later cursor.execute("insert into blobby values(1,zeroblob(10000))") # Or as a binding cursor.execute("insert into blobby values(2,?)", (apsw.zeroblob(20000),)) # Open a blob for writing. We need to know the rowid rowid=cursor.execute("select ROWID from blobby where x=1").next()[0] blob=connection.blobopen("main", "blobby", "y", rowid, 1) # 1 is for read/write blob.write("hello world") blob.seek(2000) blob.write("hello world, again") blob.close() .. _example-vtable: .. 
code-block:: python ### ### Virtual tables ### # This virtual table stores information about files in a set of # directories so you can execute SQL queries def getfiledata(directories): columns=None data=[] counter=1 for directory in directories: for f in os.listdir(directory): if not os.path.isfile(os.path.join(directory,f)): continue counter+=1 st=os.stat(os.path.join(directory,f)) if columns is None: columns=["rowid", "name", "directory"]+[x for x in dir(st) if x.startswith("st_")] data.append( [counter, f, directory] + [getattr(st,x) for x in columns[3:]] ) return columns, data # This gets registered with the Connection class Source: def Create(self, db, modulename, dbname, tablename, *args): columns,data=getfiledata([eval(a.replace("\\", "\\\\")) for a in args]) # eval strips off layer of quotes schema="create table foo("+','.join(["'%s'" % (x,) for x in columns[1:]])+")" return schema,Table(columns,data) Connect=Create # Represents a table class Table: def __init__(self, columns, data): self.columns=columns self.data=data def BestIndex(self, *args): return None def Open(self): return Cursor(self) def Disconnect(self): pass Destroy=Disconnect # Represents a cursor class Cursor: def __init__(self, table): self.table=table def Filter(self, *args): self.pos=0 def Eof(self): return self.pos>=len(self.table.data) def Rowid(self): return self.table.data[self.pos][0] def Column(self, col): return self.table.data[self.pos][1+col] def Next(self): self.pos+=1 def Close(self): pass # Register the module as filesource connection.createmodule("filesource", Source()) # Arguments to module - all directories in sys.path sysdirs=",".join(["'%s'" % (x,) for x in sys.path[1:] if len(x) and os.path.isdir(x)]) cursor.execute("create virtual table sysfiles using filesource("+sysdirs+")") # Which 3 files are the biggest? for size,directory,file in cursor.execute("select st_size,directory,name from sysfiles order by st_size desc limit 3"): print size,file,directory .. 
code-block:: text | 24882176 testdb /space/apsw | 4045200 apsw.so /space/apsw | 915015 _yaml.so /home/rogerb/.local/lib/python2.7/site-packages .. code-block:: python # Which 3 files are the oldest? for ctime,directory,file in cursor.execute("select st_ctime,directory,name from sysfiles order by st_ctime limit 3"): print ctime,file,directory .. code-block:: text | 1388285092.49 setup.cfg /space/apsw | 1391490876.74 .gitignore /space/apsw | 1391584463.54 README.rst /space/apsw .. _example-vfs: .. code-block:: python ### ### A VFS that "obfuscates" the database file contents. The scheme ### used is to xor all bytes with 0xa5. This scheme honours that used ### for MAPI and SQL Server. ### def encryptme(data): if not data: return data return "".join([chr(ord(x)^0xa5) for x in data]) # Inheriting from a base of "" means the default vfs class ObfuscatedVFS(apsw.VFS): def __init__(self, vfsname="obfu", basevfs=""): self.vfsname=vfsname self.basevfs=basevfs apsw.VFS.__init__(self, self.vfsname, self.basevfs) # We want to return our own file implmentation, but also # want it to inherit def xOpen(self, name, flags): # We can look at uri parameters if isinstance(name, apsw.URIFilename): print "fast is", name.uri_parameter("fast") print "level is", name.uri_int("level", 3) print "warp is", name.uri_boolean("warp", False) print "notpresent is", name.uri_parameter("notpresent") .. code-block:: text | fast is speed | level is 7 | warp is True | notpresent is None .. 
code-block:: python return ObfuscatedVFSFile(self.basevfs, name, flags) # The file implementation where we override xRead and xWrite to call our # encryption routine class ObfuscatedVFSFile(apsw.VFSFile): def __init__(self, inheritfromvfsname, filename, flags): apsw.VFSFile.__init__(self, inheritfromvfsname, filename, flags) def xRead(self, amount, offset): return encryptme(super(ObfuscatedVFSFile, self).xRead(amount, offset)) def xWrite(self, data, offset): super(ObfuscatedVFSFile, self).xWrite(encryptme(data), offset) # To register the VFS we just instantiate it obfuvfs=ObfuscatedVFS() # Lets see what vfs are now available? print apsw.vfsnames() .. code-block:: text | ['unix', 'obfu', 'unix-excl', 'unix-dotfile', 'unix-none'] .. code-block:: python # Make an obfuscated db, passing in some URI parameters obfudb=apsw.Connection("file:myobfudb?fast=speed&level=7&warp=on", flags=apsw.SQLITE_OPEN_READWRITE | apsw.SQLITE_OPEN_CREATE | apsw.SQLITE_OPEN_URI, vfs=obfuvfs.vfsname) # Check it works obfudb.cursor().execute("create table foo(x,y); insert into foo values(1,2)") # Check it really is obfuscated on disk print `open("myobfudb", "rb").read()[:20]` .. code-block:: text | '\xf6\xf4\xe9\xcc\xd1\xc0\x85\xc3\xca\xd7\xc8\xc4\xd1\x85\x96\xa5\xa1\xa5\xa4\xa4' .. code-block:: python # And unobfuscating it print `encryptme(open("myobfudb", "rb").read()[:20])` .. code-block:: text | 'SQLite format 3\x00\x04\x00\x01\x01' .. code-block:: python # Tidy up obfudb.close() os.remove("myobfudb") .. _example-limit: .. 
code-block:: python ### ### Limits ### # Print some limits for limit in ("LENGTH", "COLUMN", "ATTACHED"): name="SQLITE_LIMIT_"+limit maxname="SQLITE_MAX_"+limit # compile time orig=connection.limit(getattr(apsw, name)) print name, orig # To get the maximum, set to 0x7fffffff and then read value back connection.limit(getattr(apsw, name), 0x7fffffff) max=connection.limit(getattr(apsw, name)) print maxname, max # Set limit for size of a string cursor.execute("create table testlimit(s)") cursor.execute("insert into testlimit values(?)", ( "x"*1024, )) # 1024 char string connection.limit(apsw.SQLITE_LIMIT_LENGTH, 1023) # limit is now 1023 try: cursor.execute("insert into testlimit values(?)", ( "y"*1024, )) print "string exceeding limit was inserted" except apsw.TooBigError: print "Caught toobig exception" connection.limit(apsw.SQLITE_LIMIT_LENGTH, 0x7fffffff) .. code-block:: text | SQLITE_LIMIT_LENGTH 1000000000 | SQLITE_MAX_LENGTH 1000000000 | SQLITE_LIMIT_COLUMN 2000 | SQLITE_MAX_COLUMN 2000 | SQLITE_LIMIT_ATTACHED 10 | SQLITE_MAX_ATTACHED 10 | Caught toobig exception .. _example-backup: .. code-block:: python ### ### Backup to memory ### # We will copy the disk database into a memory database memcon=apsw.Connection(":memory:") # Copy into memory with memcon.backup("main", connection, "main") as backup: backup.step() # copy whole database in one go # There will be no disk accesses for this query for row in memcon.cursor().execute("select * from s"): pass .. _example-shell: .. 
code-block:: python ### ### Shell ### # Here we use the shell to do a csv export providing the existing db # connection # Export to a StringIO import StringIO as io # use io in Python 3 output=io.StringIO() shell=apsw.Shell(stdout=output, db=connection) # How to execute a dot command shell.process_command(".mode csv") shell.process_command(".headers on") # How to execute SQL shell.process_sql("create table csvtest(col1,col2); insert into csvtest values(3,4); insert into csvtest values('a b', NULL)") # Let the shell figure out SQL vs dot command shell.process_complete_line("select * from csvtest") # Verify output print output.getvalue() .. code-block:: text | col1,col2 | 3,4 | a b, | .. _example-status: .. code-block:: python ### ### Statistics ### print "SQLite memory usage current %d max %d" % apsw.status(apsw.SQLITE_STATUS_MEMORY_USED) .. code-block:: text | SQLite memory usage current 420072 max 494080 .. code-block:: python ### ### Cleanup ### # We can close connections manually (useful if you want to catch exceptions) # but you don't have to connection.close(True) # force it since we want to exit # Delete database - we don't need it any more os.remove("dbfile") apsw-3.8.11.1-r1/doc/_sources/pysqlite.txt0000664000000000000000000002061212257607024016745 0ustar rootroot.. _pysqlitediffs: pysqlite differences ******************** .. currentmodule:: apsw pysqlite and APSW approached the problem of providing access to SQLite from Python from fundamentally different directions. APSW only wraps version 3 of SQLite and provides access in whatever way is normal for SQLite. It makes no effort to hide how SQLite is different from other databases. pysqlite tries to provide a DBAPI compliant wrapper for SQLite and in doing so needs to make it have the same behaviour as other databases. Consequently it does hide some of SQLite's nuances. .. 
.. note::

   I suggest using APSW when you want to directly use SQLite and its
   functionality or are using your own code to deal with database
   independence rather than DBAPI. Use pysqlite and DBAPI if your needs are
   simple, and you don't want to use SQLite features.

What APSW does better
=====================

APSW has the following enhancements/differences over pysqlite 2 (wrapping
SQLite 3):

* APSW stays up to date with SQLite. As features are added and
  functionality changed in SQLite, APSW tracks them.

* APSW gives all functionality of SQLite including :ref:`virtual tables `,
  :ref:`VFS`, :ref:`BLOB I/O `, :ref:`backups ` and :meth:`file control `.

* You can use the same :class:`Connection` across threads with APSW without
  needing any additional level of locking. pysqlite requires that the
  :class:`Connection` and any :class:`cursors ` are used in the same
  thread. You can disable its checking, but unless you are very careful
  with your own mutexes you will have a crash or a deadlock.

* APSW is a single file for the extension, :file:`apsw.pyd` on Windows and
  :file:`apsw.so` on Unix/Mac (Note :pep:`3149`). There are no other files
  needed and the :ref:`build instructions ` show you how to include SQLite
  statically in this file. You can put this file anywhere your Python
  session can reach. pysqlite is one binary file and several .py files, all
  of which need to be available.

* **Nothing** happens behind your back. By default pysqlite tries to manage
  transactions by parsing your SQL for you, but you can turn it off. This
  can result in very unexpected behaviour with pysqlite.

* When using a :class:`Connection` as a :meth:`context manager ` APSW uses
  SQLite's ability to have `nested transactions `__. pysqlite only deals
  with one transaction at a time and cannot nest them. (Savepoints were
  introduced in SQLite 3.6.8 - another illustration of the benefits of
  keeping up to date with SQLite.)
* APSW **always** handles Unicode correctly (this was one of the major
  reasons for writing it in the first place). pysqlite has since fixed many
  of its issues but you are still stuck with some.

* You can use semi-colons at the end of commands and you can have multiple
  commands in the execute string in APSW. There are no restrictions on the
  type of commands used. For example this will work fine in APSW but is not
  allowed in pysqlite::

    import apsw
    con=apsw.Connection(":memory:")
    cur=con.cursor()
    for row in cur.execute("create table foo(x,y,z);insert into foo values (?,?,?);"
                           "insert into foo values(?,?,?);select * from foo;drop table foo;"
                           "create table bar(x,y);insert into bar values(?,?);"
                           "insert into bar values(?,?);select * from bar;",
                           (1,2,3,4,5,6,7,8,9,10)):
        print row

  And the output as you would expect::

    (1, 2, 3)
    (4, 5, 6)
    (7, 8)
    (9, 10)

* :meth:`Cursor.executemany` also works with statements that return data
  such as selects, and you can have multiple statements. pysqlite's
  :meth:`executescript` method doesn't allow any form of data being
  returned (it silently ignores any returned data).

* pysqlite swallows exceptions in your callbacks making it far harder to
  debug problems. That also prevents you from raising exceptions in your
  callbacks to be handled in your code that called SQLite. pysqlite does
  let you turn on `printing of tracebacks `_, but that is a poor
  substitute. apsw does the right thing as demonstrated by this example.

  Source::

    def badfunc(t):
        return 1/0

    # pysqlite
    from pysqlite2 import dbapi2 as sqlite

    con = sqlite.connect(":memory:")
    con.create_function("badfunc", 1, badfunc)
    cur = con.cursor()
    cur.execute("select badfunc(3)")

    # apsw
    import apsw
    con = apsw.Connection(":memory:")
    con.createscalarfunction("badfunc", badfunc, 1)
    cur = con.cursor()
    cur.execute("select badfunc(3)")

  Exceptions::

    # pysqlite

    Traceback (most recent call last):
      File "func.py", line 8, in ?
        cur.execute("select badfunc(3)")
    pysqlite2.dbapi2.OperationalError: user-defined function raised exception

    # apsw

    Traceback (most recent call last):
      File "t.py", line 8, in ?
        cur.execute("select badfunc(3)")
      File "apsw.c", line 3660, in resetcursor
      File "apsw.c", line 1871, in user-defined-scalar-badfunc
      File "t.py", line 3, in badfunc
        return 1/0

* APSW has significantly enhanced debuggability. More details are
  available than just what is printed out when exceptions happen like
  above. See :ref:`augmented stack traces `

* APSW has :ref:`execution and row tracers `. pysqlite has no equivalent
  to :ref:`execution tracers ` and does have data adaptors which aren't
  the same thing as a :ref:`row tracer ` (for example you can't skip rows
  or add a new column to each row returned). pysqlite does have a `row
  factory `_ but you can easily emulate that with the row tracer and
  :meth:`Cursor.getdescription`.

* APSW has an :ref:`apswtrace ` utility script that traces execution and
  results in your code without having to modify it in any way. It also
  outputs summary reports making it easy to see what your most time
  consuming queries are, which are most popular etc.

* APSW has an exception corresponding to each SQLite error code and
  provides the extended error code. pysqlite combines several SQLite error
  codes into corresponding DBAPI exceptions. This is a good example of the
  difference in approach of the two wrappers.

* The APSW test suite is larger and tests more functionality. Code
  coverage by the test suite is 99.6%. pysqlite is good at 81% for C code
  although there are several places that coverage can be improved. I
  haven't measured code coverage for pysqlite's Python code. The
  consequences of this are that APSW catches issues earlier and gives far
  better diagnostics. As an example try returning an unsupported type from
  a registered scalar function.

* APSW is faster than pysqlite in my testing. Try the :ref:`speedtest`
  benchmark.
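The speed comparison above can be explored even without APSW installed. The sketch below is a rough, hypothetical micro-benchmark harness using only the standard library ``sqlite3`` and ``timeit`` modules as a stand-in wrapper - it is **not** the bundled :ref:`speedtest` script, and the ``bench_inserts`` helper is invented purely for illustration:

```python
# Hypothetical micro-benchmark sketch: the stdlib sqlite3 module stands in
# for a SQLite wrapper here - not the real APSW speedtest script.
import sqlite3
import timeit

def bench_inserts(rows=1000):
    # Time inserting `rows` rows into an in-memory table.
    con = sqlite3.connect(":memory:")
    con.execute("create table t(x)")
    def work():
        with con:  # one transaction for all the inserts
            con.executemany("insert into t values(?)",
                            ((i,) for i in range(rows)))
    elapsed = timeit.timeit(work, number=1)
    count = con.execute("select count(*) from t").fetchone()[0]
    con.close()
    return elapsed, count

if __name__ == "__main__":
    elapsed, count = bench_inserts()
    print("inserted %d rows in %.4f seconds" % (count, elapsed))
```

Swapping the connection object for an APSW one would let the same harness compare the two wrappers directly, though the real speedtest exercises far more than bulk inserts.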
What pysqlite does better
=========================

* pysqlite has an `adaptor system `_ that lets you pretend SQLite stores
  and returns more types than it really supports. Note that the database
  won't be useful in a non-pysqlite context (eg PHP code looking at the
  same database isn't going to recognise your Point class). You can
  implement something similar in APSW by intercepting
  :meth:`Cursor.execute` calls so that they suitably mangle the bindings
  going to SQLite and do something similar to the rows the iterator
  returns.

* pysqlite lets you work with a database that contains invalid Unicode
  data by setting a `text factory `_ that deals with the text data. APSW
  does not let you put non-Unicode data into the database in the first
  place and it will be considered invalid by other tools reading the data
  (eg Java, PHP). If you somehow do manage to get non-Unicode data as a
  SQLite string, you can cast it to a blob::

    for row in cursor.execute("select CAST(column as BLOB) from table"):
        # row[0] is buffer (py2) or bytes (py3) here
        deal_with_binary_data(row[0])

.. _building:

Building
********

setup.py
========

Short story: You run :file:`setup.py` but you should ideally follow the
:ref:`recommended way ` which will also fetch needed components for you.

+-------------------------------------------------+------------------------------------------------------------------------+
| Command                                         | Result                                                                 |
+=================================================+========================================================================+
| | python setup.py install test                  | Compiles APSW with default Python compiler, installs it into Python    |
|                                                 | site library directory and then runs the test suite.                   |
+-------------------------------------------------+------------------------------------------------------------------------+
| | python setup.py install :option:`--user`      | (Python 2.6+, 3). Compiles APSW with default Python compiler and       |
|                                                 | installs it into a subdirectory of your home directory. See :pep:`370` |
|                                                 | for more details.                                                      |
+-------------------------------------------------+------------------------------------------------------------------------+
| | python setup.py build                         | On Windows this will use the `free `_ `MinGW compiler `_ instead of    |
|   :option:`--compile=mingw32` install           | the Microsoft compilers.                                               |
+-------------------------------------------------+------------------------------------------------------------------------+
| | python setup.py build_ext :option:`--force`   | Compiles the extension but doesn't install it. The resulting file      |
|   :option:`--inplace` test                      | will be in the current directory named apsw.so (Unix/Mac) or apsw.pyd  |
|                                                 | (Windows). The test suite is then run. (Note on recent versions of     |
|                                                 | CPython the extension filenames may be more complicated due to         |
|                                                 | :pep:`3149`.)                                                          |
+-------------------------------------------------+------------------------------------------------------------------------+
| | python setup.py build :option:`--debug`       | Compiles APSW with debug information. This also turns on `assertions   |
|   install                                       | `_ in APSW that double check the code assumptions. If you are using    |
|                                                 | the SQLite amalgamation then assertions are turned on in that too.     |
|                                                 | Note that this will considerably slow down APSW and SQLite.            |
+-------------------------------------------------+------------------------------------------------------------------------+

.. _setup_py_flags:

Additional :file:`setup.py` flags
=================================

There are a number of APSW specific flags to commands you can specify.
fetch
-----

:file:`setup.py` can automatically fetch SQLite and other optional
components. You can set the environment variable :const:`http_proxy` to
control proxy usage for the download. **Note** the files downloaded are
modified from their originals to ensure various names do not clash, adjust
them to the download platform and to graft them cleanly into the APSW
module. You should not commit them to source code control systems
(download separately if you need clean files).

If any files are downloaded then the build step will automatically use
them. This still applies when you do later builds without re-fetching.

| python setup.py fetch *options*

+----------------------------------------+--------------------------------------------------------------------------------------+
| fetch flag                             | Result                                                                               |
+========================================+======================================================================================+
| | :option:`--version=VERSION`          | By default the `SQLite download page `__ is consulted to find the current SQLite     |
|                                        | version which you can override using this flag.                                      |
|                                        |                                                                                      |
|                                        | .. note::                                                                            |
|                                        |                                                                                      |
|                                        |    You can also specify `fossil` as the version and the current development         |
|                                        |    version from `SQLite's source tracking system `__ will be used. (The system is   |
|                                        |    named `Fossil `__.) Note that checksums can't be checked for fossil. You will    |
|                                        |    also need TCL and make installed for the amalgamation to build as well as        |
|                                        |    several other common Unix tools. (ie this is very unlikely to work on Windows.)  |
+----------------------------------------+--------------------------------------------------------------------------------------+
| | :option:`--missing-checksum-ok`      | Allows setup to continue if the :ref:`checksum ` is missing.                         |
+----------------------------------------+--------------------------------------------------------------------------------------+
| | :option:`--all`                      | Gets all components listed below.                                                    |
+----------------------------------------+--------------------------------------------------------------------------------------+
| | :option:`--sqlite`                   | Automatically downloads the `SQLite amalgamation `__. The amalgamation is the        |
|                                        | preferred way to use SQLite as you have total control over what components are       |
|                                        | included or excluded (see below) and have no dependencies on any existing            |
|                                        | libraries on your developer or deployment machines. The amalgamation includes the    |
|                                        | fts3, rtree and icu extensions. On non-Windows platforms, any existing               |
|                                        | :file:`sqlite3/` directory will be erased and the downloaded code placed in a        |
|                                        | newly created :file:`sqlite3/` directory.                                            |
+----------------------------------------+--------------------------------------------------------------------------------------+
| | :option:`--asyncvfs`                 | Downloads the :ref:`Asynchronous VFS `                                               |
+----------------------------------------+--------------------------------------------------------------------------------------+

.. _fetch_checksums:

.. note::

   The SQLite downloads are not `digitally signed `__ which means you have
   no way of verifying they were produced by the SQLite team or were not
   modified between the SQLite servers and your computer. Consequently
   APSW ships with a :source:`checksums file ` that includes checksums for
   the various SQLite downloads. If the download does not match the
   checksum then it is rejected and an error occurs.

   The SQLite download page is not checksummed, so in theory a bad guy
   could modify it to point at a malicious download version instead.
   (setup only uses the page to determine the current version number - the
   SQLite download site URL is hard coded.)

   If the URL is not listed in the checksums file then setup aborts. You
   can use :option:`--missing-checksum-ok` to continue. You are
   recommended instead to update the checksums file with the correct
   information.

.. _fetch_configure:

.. note::

   (This note only applies to non-Windows platforms.)
   By default the amalgamation will work on your platform. It detects the
   operating system (and compiler if relevant) and uses the appropriate
   APIs. However it then only uses the oldest known working APIs. For
   example it will use the *sleep* system call. More recent APIs may exist
   but the amalgamation needs to be told they exist. As an example *sleep*
   can only sleep in increments of one second while the *usleep* system
   call can sleep in increments of one microsecond. The default SQLite
   busy handler does small sleeps (eg 1/50th of a second) backing off as
   needed. If *sleep* is used then those will all be a minimum of a
   second. A second example is that the traditional APIs for getting time
   information are not re-entrant and cannot be used concurrently from
   multiple threads. Consequently SQLite has mutexes to ensure that
   concurrent calls do not happen. However you can tell it you have more
   recent re-entrant versions of the calls and it won't need to bother
   with the mutexes.

   After fetching the amalgamation, setup automatically determines what
   new APIs you have by running the :file:`configure` script that comes
   with SQLite and noting the output. The information is placed in
   :file:`sqlite3/sqlite3config.h`. The build stage will automatically
   take note of this as needed.

   If you get the fossil version then the configure script does not work.
   Instead the fetch will save and re-use any pre-existing
   :file:`sqlite3/sqlite3config.h`.

.. _setup_build_flags:

build/build_ext
---------------

You can enable or omit certain functionality by specifying flags to the
build and/or build_ext commands of :file:`setup.py`.

| python setup.py build *options*

Note that the options do not accumulate. If you want to specify multiple
enables or omits then you need to give the flag once with a comma
separated list. For example:

| python setup.py build :option:`--enable=fts3,fts3_parenthesis,rtree,icu`

+----------------------------------------+--------------------------------------------------------------------------------------+
| build/build_ext flag                   | Result                                                                               |
+========================================+======================================================================================+
| | :option:`--enable-all-extensions`    | Enables the STAT4, FTS3/4, RTree, and ICU extensions if *icu-config* is on your path |
+----------------------------------------+--------------------------------------------------------------------------------------+
| | :option:`--enable=fts3`              | Enables the :ref:`full text search extension `.                                      |
| | :option:`--enable=fts4`              | This flag only helps when using the amalgamation. If not using the                   |
|                                        | amalgamation then you need to separately ensure fts3/4 is enabled in the SQLite      |
|                                        | install. You are likely to want the `parenthesis option `__ on unless you have       |
|                                        | legacy code (`--enable-all-extensions` turns it on).                                 |
+----------------------------------------+--------------------------------------------------------------------------------------+
| | :option:`--enable=rtree`             | Enables the :ref:`spatial table extension `.                                         |
|                                        | This flag only helps when using the amalgamation. If not using the                   |
|                                        | amalgamation then you need to separately ensure rtree is enabled in the SQLite       |
|                                        | install.                                                                             |
+----------------------------------------+--------------------------------------------------------------------------------------+
| | :option:`--enable=icu`               | Enables the :ref:`International Components for Unicode extension `.                  |
|                                        | Note that you must have the ICU libraries on your machine which setup will           |
|                                        | automatically try to find using :file:`icu-config`.                                  |
|                                        | This flag only helps when using the amalgamation. If not using the                   |
|                                        | amalgamation then you need to separately ensure ICU is enabled in the SQLite         |
|                                        | install.                                                                             |
+----------------------------------------+--------------------------------------------------------------------------------------+
| | :option:`--omit=ITEM`                | Causes various functionality to be omitted. For example                              |
|                                        | :option:`--omit=load_extension` will omit code to do with loading extensions. If     |
|                                        | using the amalgamation then this will omit the functionality from APSW and           |
|                                        | SQLite, otherwise the functionality will only be omitted from APSW (ie the code      |
|                                        | will still be in SQLite, APSW just won't call it). In almost all cases you will      |
|                                        | need to regenerate the SQLite source because the omits also alter the generated      |
|                                        | SQL parser. See `the relevant SQLite documentation `_.                               |
+----------------------------------------+--------------------------------------------------------------------------------------+

.. note::

   Extension loading is enabled by default when using the amalgamation and
   disabled when using existing libraries as this most closely matches
   current practice. Use :option:`--omit=load_extension` or
   :option:`--enable=load_extension` to explicitly disable/enable the
   extension loading code.

Finding SQLite 3
================

SQLite 3 is needed during the build process. If you specify
:option:`fetch --sqlite` to the :file:`setup.py` command line then it will
automatically fetch the current version of the SQLite amalgamation. (The
current version is determined by parsing the `SQLite download page `_.)
You can manually specify the version, for example
:option:`fetch --sqlite --version=3.7.4`.

These methods are tried in order:

`Amalgamation `__
  The file :file:`sqlite3.c` and then :file:`sqlite3/sqlite3.c` is looked
  for. The SQLite code is then statically compiled into the APSW extension
  and is invisible to the rest of the process. There are no runtime
  library dependencies on SQLite as a result. When you use :option:`fetch`
  this is where it places the downloaded amalgamation.
Local build
  The header :file:`sqlite3/sqlite3.h` and library
  :file:`sqlite3/libsqlite3.{a,so,dll}` is looked for.

User directories
  If you are using Python 2.6+ or Python 3 and specified :option:`--user`
  then your user directory is searched first. See :pep:`370` for more
  details.

System directories
  The default compiler include path (eg :file:`/usr/include`) and library
  path (eg :file:`/usr/lib`) are used.

.. note::

   If you compiled SQLite with any OMIT flags (eg
   :const:`SQLITE_OMIT_LOAD_EXTENSION`) then you must include them in the
   :file:`setup.py` command or file. For this example you could use
   :option:`setup.py build --omit=load_extension` to add the same flags.

.. _recommended_build:

Recommended
===========

These instructions show how to build automatically downloading and using
the amalgamation plus other :ref:`extensions`. Any existing SQLite on your
system is ignored at build time and runtime. (Note that you can even use
APSW in the same process as a different SQLite is used by other libraries
- this happens a lot on Mac.) You should follow these instructions with
your current directory being where you extracted the APSW source to.

Windows::

  # Leave out --compile=mingw32 flag if using Microsoft compiler
  > python setup.py fetch --all build --enable-all-extensions --compile=mingw32 install test

Mac/Linux etc::

  $ python setup.py fetch --all build --enable-all-extensions install test

.. note::

   There will be some warnings during the compilation step about
   sqlite3.c, `but they are harmless `_

The extension just turns into a single file apsw.so (Linux/Mac) or
apsw.pyd (Windows). (More complicated name on Pythons implementing
:pep:`3149`.) You don't need to install it and can drop it into any
directory that is more convenient for you and that your code can reach.
To just do the build and not install, leave out *install* from the lines
above. (Use *build_ext --inplace* to have the extension put in the main
directory.)

The test suite will be run. It will print the APSW file used, APSW and
SQLite versions and then run lots of tests all of which should pass.

Source distribution (advanced)
==============================

If you want to make a source distribution or a binary distribution that
creates an intermediate source distribution such as `bdist_rpm` then you
can have the SQLite amalgamation automatically included as part of it. If
you specify the fetch command as part of the same command line then
everything fetched is included in the source distribution. For example
this will fetch all components, include them in the source distribution
and build a rpm using those components::

  $ python setup.py fetch --all bdist_rpm

Testing
=======

SQLite itself is `extensively tested `__. It has considerably more code
dedicated to testing than makes up the actual database functionality.

APSW includes a :file:`tests.py` file which uses the standard Python
testing modules to verify correct operation. New code is developed
alongside the tests. Reported issues also have test cases to ensure the
issue doesn't happen or doesn't happen again. ::

  $ python setup.py test
  Python /usr/bin/python (2, 6, 6, 'final', 0)
  Testing with APSW file /space/apsw/apsw.so
            APSW version 3.7.4-r1
      SQLite lib version 3.7.4
   SQLite headers version 3007004
       Using amalgamation True
  ............................................................................
  ----------------------------------------------------------------------
  Ran 76 tests in 404.557s

  OK

The tests also ensure that as much APSW code as possible is executed
including alternate paths through the code. 95.5% of the APSW code is
executed by the tests. If you checkout the APSW source then there is a
script :source:`tools/coverage.sh` that enables extra code that
deliberately induces extra conditions such as memory allocation failures,
SQLite returning undocumented error codes etc. That brings coverage up to
99.6% of the code.

A memory checker `Valgrind `_ is used while running the test suite.
The test suite is run multiple times to make any memory leaks or similar
issues stand out. A checking version of Python is also used. See
:source:`tools/valgrind.sh` in the source.

To ensure compatibility with the various Python versions, a script
downloads and compiles all supported Python versions in both 2 byte and 4
byte Unicode character configurations against the APSW and SQLite
supported versions running the tests. See :source:`tools/megatest.py` in
the source.

In short both SQLite and APSW have a lot of testing!

.. _exceptions:

Exceptions
**********

.. currentmodule:: apsw

:exc:`apsw.Error` is the base for APSW exceptions.

.. exception:: Error

.. attribute:: Error.result

   For exceptions corresponding to `SQLite error codes `_ this attribute
   is the numeric error code.

.. attribute:: Error.extendedresult

   APSW runs with `extended result codes `_ turned on. This attribute
   includes the detailed code.

As an example, if SQLite issued a read request and the system returned
less data than expected then :attr:`~Error.result` would have the value
:const:`SQLITE_IOERR` while :attr:`~Error.extendedresult` would have the
value :const:`SQLITE_IOERR_SHORT_READ`.

APSW specific exceptions
========================

The following exceptions happen when APSW detects various problems.

.. exception:: ThreadingViolationError

   You have used an object concurrently in two threads. For example you
   may try to use the same cursor in two different threads at the same
   time, or tried to close the same connection in two threads at the same
   time.

   You can also get this exception by using a cursor as an argument to
   itself (eg as the input data for :meth:`Cursor.executemany`). Cursors
   can only be used for one thing at a time.

.. exception:: ForkingViolationError

   See :meth:`apsw.fork_checker`.

..
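The relationship between `result` and `extendedresult` follows from how SQLite constructs extended result codes: the least significant byte is the primary code and the upper bytes qualify it. A small plain-Python illustration (the numeric values below are taken from the SQLite headers; no APSW is needed to run it):

```python
# Values as defined in SQLite's sqlite3.h header.
SQLITE_IOERR = 10
SQLITE_IOERR_SHORT_READ = SQLITE_IOERR | (2 << 8)   # extended code 522

def primary_code(extended):
    # SQLite keeps the primary result code in the low byte of the
    # extended result code, so masking recovers it.
    return extended & 0xFF

print(primary_code(SQLITE_IOERR_SHORT_READ) == SQLITE_IOERR)  # True
```

This is why `result` can always be derived from `extendedresult`, while the reverse loses the detail.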
.. exception:: IncompleteExecutionError

   You have tried to start a new SQL execute call before executing all
   the previous ones. See the :ref:`execution model ` for more details.

.. exception:: ConnectionNotClosedError

   This exception is no longer generated. It was required in earlier
   releases due to constraints in threading usage with SQLite.

.. exception:: ConnectionClosedError

   You have called :meth:`Connection.close` and then continued to use the
   :class:`Connection` or associated :class:`cursors `.

.. exception:: CursorClosedError

   You have called :meth:`Cursor.close` and then tried to use the cursor.

.. exception:: BindingsError

   There are several causes for this exception. When using tuples, an
   incorrect number of bindings was supplied::

     cursor.execute("select ?,?,?", (1,2))     # too few bindings
     cursor.execute("select ?,?,?", (1,2,3,4)) # too many bindings

   You are using named bindings, but not all bindings are named. You
   should either use entirely the named style or entirely numeric
   (unnamed) style::

     cursor.execute("select * from foo where x=:name and y=?")

   .. note::

      It is not considered an error to have missing keys in a dictionary.
      For example this is perfectly valid::

        cursor.execute("insert into foo values($a,:b,$c)", {'a': 1})

      *b* and *c* are not in the dict. For missing keys, None/NULL will
      be used. This is so you don't have to add lots of spurious values
      to the supplied dict. If your schema requires every column have a
      value, then SQLite will generate an error due to some values being
      None/NULL so that case will be caught.

.. exception:: ExecutionCompleteError

   A statement is complete but you try to run it more anyway!

.. exception:: ExecTraceAbort

   The :ref:`execution tracer ` returned False so execution was aborted.

.. exception:: ExtensionLoadingError

   An error happened loading an `extension `_.

.. exception:: VFSNotImplementedError

   A call cannot be made to an inherited :ref:`VFS` method as the VFS
   does not implement the method.

..
.. exception:: VFSFileClosedError

   The VFS file is closed so the operation cannot be performed.

SQLite Exceptions
=================

The following lists which Exception classes correspond to which `SQLite
error codes `_.

General Errors
^^^^^^^^^^^^^^

.. exception:: SQLError

   :const:`SQLITE_ERROR`. This error is documented as a bad SQL query or
   missing database, but is also returned for a lot of other situations.
   It is the default error code unless there is a more specific one.

.. exception:: MismatchError

   :const:`SQLITE_MISMATCH`. Data type mismatch. For example a rowid or
   integer primary key must be an integer.

.. exception:: NotFoundError

   :const:`SQLITE_NOTFOUND`. Returned when various internal items were
   not found such as requests for non-existent system calls or file
   controls.

Internal Errors
^^^^^^^^^^^^^^^

.. exception:: InternalError

   :const:`SQLITE_INTERNAL`. (No longer used) Internal logic error in
   SQLite.

.. exception:: ProtocolError

   :const:`SQLITE_PROTOCOL`. (No longer used) Database lock protocol
   error.

.. exception:: MisuseError

   :const:`SQLITE_MISUSE`. SQLite library used incorrectly.

.. exception:: RangeError

   :const:`SQLITE_RANGE`. (Cannot be generated using APSW). 2nd parameter
   to `sqlite3_bind `_ out of range.

Permissions Etc
^^^^^^^^^^^^^^^

.. exception:: PermissionsError

   :const:`SQLITE_PERM`. Access permission denied by the operating
   system, or parts of the database are readonly such as a cursor.

.. exception:: ReadOnlyError

   :const:`SQLITE_READONLY`. Attempt to write to a readonly database.

.. exception:: CantOpenError

   :const:`SQLITE_CANTOPEN`. Unable to open the database file.

.. exception:: AuthError

   :const:`SQLITE_AUTH`. :meth:`Authorization ` denied.

Abort/Busy Etc
^^^^^^^^^^^^^^

.. exception:: AbortError

   :const:`SQLITE_ABORT`. Callback routine requested an abort.

.. exception:: BusyError

   :const:`SQLITE_BUSY`. The database file is locked. Use
   :meth:`Connection.setbusytimeout` to change how long SQLite waits for
   the database to be unlocked or :meth:`Connection.setbusyhandler` to
   use your own handler.

.. exception:: LockedError

   :const:`SQLITE_LOCKED`. A table in the database is locked.

.. exception:: InterruptError

   :const:`SQLITE_INTERRUPT`. Operation terminated by
   `sqlite3_interrupt `_ - use :meth:`Connection.interrupt`.

.. exception:: SchemaChangeError

   :const:`SQLITE_SCHEMA`. The database schema changed. A
   :meth:`prepared statement ` becomes invalid if the database schema was
   changed. Behind the scenes SQLite reprepares the statement. Another or
   the same :class:`Connection` may change the schema again before the
   statement runs. SQLite will attempt up to 5 times before giving up and
   returning this error.

.. exception:: ConstraintError

   :const:`SQLITE_CONSTRAINT`. Abort due to `constraint `_ violation.
   This would happen if the schema required a column to be within a
   specific range. If you have multiple constraints, you `can't tell `__
   which one was the cause.

Memory/Disk
^^^^^^^^^^^

.. exception:: NoMemError

   :const:`SQLITE_NOMEM`. A memory allocation failed.

.. exception:: IOError

   :const:`SQLITE_IOERR`. Some kind of disk I/O error occurred. The
   :ref:`extended error code ` will give more detail.

.. exception:: CorruptError

   :const:`SQLITE_CORRUPT`. The database disk image appears to be a
   SQLite database but the values inside are inconsistent.

.. exception:: FullError

   :const:`SQLITE_FULL`. The disk appears to be full.

.. exception:: TooBigError

   :const:`SQLITE_TOOBIG`. String or BLOB exceeds size limit. You can
   change the limits using :meth:`Connection.limit`.

.. exception:: NoLFSError

   :const:`SQLITE_NOLFS`. SQLite has attempted to use a feature not
   supported by the operating system such as `large file support `_.

.. exception:: EmptyError

   :const:`SQLITE_EMPTY`. Database is completely empty.

.. exception:: FormatError

   :const:`SQLITE_FORMAT`. (No longer used) `Auxiliary database `_
   format error.

..
.. exception:: NotADBError

   :const:`SQLITE_NOTADB`. File opened that is not a database file.
   SQLite has a header on database files to verify they are indeed SQLite
   databases.

.. _augmentedstacktraces:

Augmented stack traces
======================

When an exception occurs, Python does not include frames from non-Python
code (ie the C code called from Python). This can make it more difficult
to work out what was going on when an exception occurred, for example
when there are callbacks to collations, functions or virtual tables,
triggers firing etc.

This is an example showing the difference between the tracebacks you
would have got with earlier versions of apsw and the augmented
traceback::

  import apsw

  def myfunc(x):
      1/0

  con=apsw.Connection(":memory:")
  con.createscalarfunction("foo", myfunc)
  con.createscalarfunction("fam", myfunc)
  cursor=con.cursor()
  cursor.execute("create table bar(x,y,z);insert into bar values(1,2,3)")
  cursor.execute("select foo(1) from bar")

Original Traceback::

  Traceback (most recent call last):
    File "t.py", line 11, in <module>
      cursor.execute("select foo(1) from bar")
    File "t.py", line 4, in myfunc
      1/0
  ZeroDivisionError: integer division or modulo by zero

Augmented Traceback::

  Traceback (most recent call last):
    File "t.py", line 11, in <module>
      cursor.execute("select foo(1) from bar")
    File "apsw.c", line 3412, in resetcursor
    File "apsw.c", line 1597, in user-defined-scalar-foo
    File "t.py", line 4, in myfunc
      1/0
  ZeroDivisionError: integer division or modulo by zero

In the original traceback you can't even see that code in apsw was
involved. The augmented traceback shows that there were indeed two
function calls within apsw and gives you line numbers should you need to
examine the code. Also note how you are told that the call was in
`user-defined-scalar-foo` (ie you can tell which function was called.)

*But wait, there is more!!!* In order to further aid troubleshooting, the
augmented stack traces make additional information available. Each frame
in the traceback has local variables defined with more information. You
can print out the variables using `ASPN recipe 52215 `_.

In the recipe, the initial code in :func:`print_exc_plus` is far more
complicated than need be, and also won't work correctly with all
tracebacks (it depends on :attr:`f_prev` being set which isn't always the
case). Change the function to start like this::

  tb = sys.exc_info()[2]
  stack = []

  while tb:
      stack.append(tb.tb_frame)
      tb = tb.tb_next

  traceback.print_exc()
  print "Locals by frame, innermost last"

Here is a far more complex example from some :ref:`virtual tables ` code
I was writing. The BestIndex method in my code had returned an incorrect
value. The augmented traceback includes local variables using recipe
52215. I can see what was passed in to my method, what I returned and
which item was erroneous. The original traceback is almost completely
useless.

Original traceback::

  Traceback (most recent call last):
    File "tests.py", line 1387, in testVtables
      cursor.execute(allconstraints)
  TypeError: Bad constraint (#2) - it should be one of None, an integer or a tuple of an integer and a boolean

Augmented traceback with local variables::

  Traceback (most recent call last):
    File "tests.py", line 1387, in testVtables
      cursor.execute(allconstraints)
                  VTable =  __main__.VTable
                     cur =
                       i =  10
                    self =  testVtables (__main__.APSW)
          allconstraints =  select rowid,* from foo where rowid>-1000
                            ....
    File "apsw.c", line 4050, in Cursor_execute.sqlite3_prepare
              Connection =
               statement =  select rowid,* from foo where rowid>-1000
                            ....
File "apsw.c", line 2681, in VirtualTable.xBestIndex self = <__main__.VTable instance at 0x98d8c0> args = (((-1, 4), (0, 32), (1, 8), (2, 4), (3, 64)), ((2, False),)) result = ([4, (3,), [2, False], [1], [0]], 997, u'\xea', False) File "apsw.c", line 2559, in VirtualTable.xBestIndex.result_constraint indices = [4, (3,), [2, False], [1], [0]] self = <__main__.VTable instance at 0x98d8c0> result = ([4, (3,), [2, False], [1], [0]], 997, u'\xea', False) constraint = (3,) TypeError: Bad constraint (#2) - it should be one of None, an integer or a tuple of an integer and a boolean apsw-3.8.11.1-r1/doc/_sources/apsw.txt0000664000000000000000000010673112565422266016061 0ustar rootroot.. Automatically generated by code2rst.py code2rst.py src/apsw.c doc/apsw.rst Edit src/apsw.c not this file! .. module:: apsw :synopsis: Python access to SQLite database library APSW Module *********** The module is the main interface to SQLite. Methods and data on the module have process wide effects. You can instantiate the :class:`Connection` and :class:`zeroblob` objects using :meth:`Connection` and :meth:`zeroblob` respectively. API Reference ============= .. data:: SQLITE_VERSION_NUMBER The integer version number of SQLite that APSW was compiled against. For example SQLite 3.6.4 will have the value *3006004*. This number may be different than the actual library in use if the library is shared and has been updated. Call :meth:`sqlitelibversion` to get the actual library version. .. method:: apswversion() -> string Returns the APSW version. .. index:: sqlite3_compileoption_get .. attribute:: compile_options A tuple of the options used to compile SQLite. For example it will be something like this:: ('ENABLE_LOCKING_STYLE=0', 'TEMP_STORE=1', 'THREADSAFE=1') Calls: `sqlite3_compileoption_get `__ .. index:: sqlite3_complete .. method:: complete(statement) -> bool Returns True if the input string comprises one or more complete SQL statements by looking for an unquoted trailing semi-colon. 
An example use would be if you were prompting the user for SQL statements and needed to know if you had a whole statement, or needed to ask for another line:: statement=raw_input("SQL> ") while not apsw.complete(statement): more=raw_input(" .. ") statement=statement+"\n"+more Calls: `sqlite3_complete `__ .. index:: sqlite3_config .. method:: config(op[, *args]) :param op: A `configuration operation `_ :param args: Zero or more arguments as appropriate for *op* Many operations don't make sense from a Python program. The following configuration operations are supported: SQLITE_CONFIG_LOG, SQLITE_CONFIG_SINGLETHREAD, SQLITE_CONFIG_MULTITHREAD, SQLITE_CONFIG_SERIALIZED, SQLITE_CONFIG_URI, SQLITE_CONFIG_MEMSTATUS, SQLITE_CONFIG_COVERING_INDEX_SCAN, SQLITE_CONFIG_PCACHE_HDRSZ, and SQLITE_CONFIG_PMASZ. See :ref:`tips ` for an example of how to receive log messages (SQLITE_CONFIG_LOG) Calls: `sqlite3_config `__ .. attribute:: connection_hooks The purpose of the hooks is to allow the easy registration of :meth:`functions `, :ref:`virtual tables ` or similar items with each :class:`Connection` as it is created. The default value is an empty list. Whenever a Connection is created, each item in apsw.connection_hooks is invoked with a single parameter being the new Connection object. If the hook raises an exception then the creation of the Connection fails. If you wanted to store your own defined functions in the database then you could define a hook that looked in the relevant tables, got the Python text and turned it into the functions. .. index:: sqlite3_enable_shared_cache .. method:: enablesharedcache(bool) If you use the same :class:`Connection` across threads or use multiple :class:`connections ` accessing the same file, then SQLite can `share the cache between them `_. It is :ref:`not recommended ` that you use this. Calls: `sqlite3_enable_shared_cache `__ .. 
method:: exceptionfor(int) -> Exception If you would like to raise an exception that corresponds to a particular SQLite `error code `_ then call this function. It also understands `extended error codes `_. For example to raise `SQLITE_IOERR_ACCESS `_:: raise apsw.exceptionfor(apsw.SQLITE_IOERR_ACCESS) .. method:: fork_checker() **Note** This method is not available on Windows as it does not support the fork system call. SQLite does not allow the use of database connections across `forked `__ processes (see the `SQLite FAQ Q6 `__). (Forking creates a child process that is a duplicate of the parent including the state of all data structures in the program. If you do this to SQLite then parent and child would both consider themselves owners of open databases and silently corrupt each other's work and interfere with each other's locks.) One example of how you may end up using fork is if you use the `multiprocessing module `__ which uses fork to make child processes. If you do use fork or multiprocessing on a platform that supports fork then you **must** ensure database connections and their objects (cursors, backup, blobs etc) are not used in the parent process, or are all closed before calling fork or starting a `Process `__. (Note you must call close to ensure the underlying SQLite objects are closed. It is also a good idea to call `gc.collect(2) `__ to ensure anything you may have missed is also deallocated.) Once you run this method, extra checking code is inserted into SQLite's mutex operations (at a very small performance penalty) that verifies objects are not used across processes. You will get a :exc:`ForkingViolationError` if you do so. Note that due to the way Python's internals work, the exception will be delivered to `sys.excepthook` in addition to the normal exception mechanisms and may be reported by Python after the line where the issue actually arose. (Destructors of objects you didn't close also run between lines.) 
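The safe pattern described above is for each child process to open its own connection after the fork. A minimal runnable sketch of that shape, using the bundled ``sqlite3`` module purely so the example runs anywhere (with APSW you would call ``apsw.Connection`` at the same point inside the worker; the table name and values are illustrative):

```python
# Sketch: nothing SQLite-related crosses the fork() boundary.
# The child opens its own connection, and the parent only opens
# one after the child has finished. Shown with the stdlib sqlite3
# module; apsw.Connection(path) would go in the same places.
import multiprocessing
import os
import sqlite3
import tempfile

def worker(path, value):
    con = sqlite3.connect(path)      # opened inside the child
    con.execute("create table if not exists t(x)")
    con.execute("insert into t values(?)", (value,))
    con.commit()
    con.close()

def run_demo():
    path = os.path.join(tempfile.mkdtemp(), "demo.db")
    # fork is the start method the warning above is about
    ctx = multiprocessing.get_context("fork")
    p = ctx.Process(target=worker, args=(path, 42))
    p.start()
    p.join()
    con = sqlite3.connect(path)      # parent opens its own connection
    rows = con.execute("select x from t").fetchall()
    con.close()
    return rows

if __name__ == "__main__":
    run_demo()
```

The key point is that no connection object exists in the parent at the moment of the fork, so there is nothing for parent and child to corrupt between them.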
You should only call this method as the first line after importing APSW, as it has to shutdown and re-initialize SQLite. If you have any SQLite objects already allocated when calling the method then the program will later crash. The recommended use is to use the fork checking as part of your test suite. .. method:: format_sql_value(value) -> string Returns a Python string (unicode) representing the supplied value in SQL syntax. Python 2 note: You must supply unicode strings not plain strings. .. index:: sqlite3_initialize .. method:: initialize() It is unlikely you will want to call this method as SQLite automatically initializes. Calls: `sqlite3_initialize `__ .. index:: sqlite3_log .. method:: log(level, message) Calls the SQLite logging interface. Note that you must format the message before passing it to this method:: apsw.log(apsw.SQLITE_NOMEM, "Need %d bytes of memory" % (1234,)) See :ref:`tips ` for an example of how to receive log messages. Calls: `sqlite3_log `__ .. method:: main() Call this to run the interactive shell. It automatically passes in sys.argv[1:] and exits Python when done. .. index:: sqlite3_memory_highwater .. method:: memoryhighwater(reset=False) -> int Returns the maximum amount of memory SQLite has used. If *reset* is True then the high water mark is reset to the current value. .. seealso:: :meth:`status` Calls: `sqlite3_memory_highwater `__ .. index:: sqlite3_memory_used .. method:: memoryused() -> int Returns the amount of memory SQLite is currently using. .. seealso:: :meth:`status` Calls: `sqlite3_memory_used `__ .. index:: sqlite3_randomness .. method:: randomness(bytes) -> data Gets random data from SQLite's random number generator. :param bytes: How many bytes to return :rtype: (Python 2) string, (Python 3) bytes Calls: `sqlite3_randomness `__ .. index:: sqlite3_release_memory .. method:: releasememory(bytes) -> int Requests SQLite try to free *bytes* bytes of memory. Returns how many bytes were freed. 
Calls: `sqlite3_release_memory `__ .. index:: sqlite3_shutdown .. method:: shutdown() It is unlikely you will want to call this method and there is no need to do so. It is a **really** bad idea to call it unless you are absolutely sure all :class:`connections `, :class:`blobs `, :class:`cursors `, :class:`vfs ` etc have been closed, deleted and garbage collected. Calls: `sqlite3_shutdown `__ .. index:: sqlite3_soft_heap_limit64 .. method:: softheaplimit(bytes) -> oldlimit Requests SQLite try to keep memory usage below *bytes* bytes and returns the previous setting. Calls: `sqlite3_soft_heap_limit64 `__ .. index:: sqlite3_sourceid .. method:: sqlite3_sourceid() -> string Returns the exact checkin information for the SQLite 3 source being used. Calls: `sqlite3_sourceid `__ .. index:: sqlite3_libversion .. method:: sqlitelibversion() -> string Returns the version of the SQLite library. This value is queried at run time from the library so if you use shared libraries it will be the version in the shared library. Calls: `sqlite3_libversion `__ .. index:: sqlite3_status64 .. method:: status(op, reset=False) -> (int, int) Returns current and highwater measurements. :param op: A `status parameter `_ :param reset: If *True* then the highwater is set to the current value :returns: A tuple of current value and highwater value .. seealso:: * :ref:`Status example ` Calls: `sqlite3_status64 `__ .. attribute:: using_amalgamation If True then `SQLite amalgamation `__ is in use (statically compiled into APSW). Using the amalgamation means that SQLite shared libraries are not used and will not affect your code. .. method:: vfsnames() -> list(string) Returns a list of the currently installed :ref:`vfs `. The first item in the list is the default vfs. .. _sqliteconstants: SQLite constants ================ SQLite has `many constants `_ used in various interfaces. To use a constant such as :const:`SQLITE_OK`, just use ``apsw.SQLITE_OK``. 
The same values can be used in different contexts. For example :const:`SQLITE_OK` and :const:`SQLITE_CREATE_INDEX` both have a value of zero. For each group of constants there is also a mapping (dict) available that you can supply a string to and get the corresponding numeric value, or supply a numeric value and get the corresponding string. These can help improve diagnostics/logging, calling other modules etc. For example:: apsw.mapping_authorizer_function["SQLITE_READ"] == 20 apsw.mapping_authorizer_function[20] == "SQLITE_READ" mapping_access `Flags for the xAccess VFS method `__ `SQLITE_ACCESS_EXISTS `__, `SQLITE_ACCESS_READ `__, `SQLITE_ACCESS_READWRITE `__ mapping_authorizer_function `Authorizer Action Codes `__ `SQLITE_ALTER_TABLE `__, `SQLITE_ANALYZE `__, `SQLITE_ATTACH `__, `SQLITE_COPY `__, `SQLITE_CREATE_INDEX `__, `SQLITE_CREATE_TABLE `__, `SQLITE_CREATE_TEMP_INDEX `__, `SQLITE_CREATE_TEMP_TABLE `__, `SQLITE_CREATE_TEMP_TRIGGER `__, `SQLITE_CREATE_TEMP_VIEW `__, `SQLITE_CREATE_TRIGGER `__, `SQLITE_CREATE_VIEW `__, `SQLITE_CREATE_VTABLE `__, `SQLITE_DELETE `__, `SQLITE_DETACH `__, `SQLITE_DROP_INDEX `__, `SQLITE_DROP_TABLE `__, `SQLITE_DROP_TEMP_INDEX `__, `SQLITE_DROP_TEMP_TABLE `__, `SQLITE_DROP_TEMP_TRIGGER `__, `SQLITE_DROP_TEMP_VIEW `__, `SQLITE_DROP_TRIGGER `__, `SQLITE_DROP_VIEW `__, `SQLITE_DROP_VTABLE `__, `SQLITE_FUNCTION `__, `SQLITE_INSERT `__, `SQLITE_PRAGMA `__, `SQLITE_READ `__, `SQLITE_RECURSIVE `__, `SQLITE_REINDEX `__, `SQLITE_SAVEPOINT `__, `SQLITE_SELECT `__, `SQLITE_TRANSACTION `__, `SQLITE_UPDATE `__ mapping_authorizer_return `Authorizer Return Codes `__ `SQLITE_DENY `__, `SQLITE_IGNORE `__, `SQLITE_OK `__ mapping_bestindex_constraints `Virtual Table Constraint Operator Codes `__ `SQLITE_INDEX_CONSTRAINT_EQ `__, `SQLITE_INDEX_CONSTRAINT_GE `__, `SQLITE_INDEX_CONSTRAINT_GT `__, `SQLITE_INDEX_CONSTRAINT_LE `__, `SQLITE_INDEX_CONSTRAINT_LT `__, `SQLITE_INDEX_CONSTRAINT_MATCH `__ mapping_config `Configuration Options `__ 
`SQLITE_CONFIG_COVERING_INDEX_SCAN `__, `SQLITE_CONFIG_GETMALLOC `__, `SQLITE_CONFIG_GETMUTEX `__, `SQLITE_CONFIG_GETPCACHE `__, `SQLITE_CONFIG_GETPCACHE2 `__, `SQLITE_CONFIG_HEAP `__, `SQLITE_CONFIG_LOG `__, `SQLITE_CONFIG_LOOKASIDE `__, `SQLITE_CONFIG_MALLOC `__, `SQLITE_CONFIG_MEMSTATUS `__, `SQLITE_CONFIG_MMAP_SIZE `__, `SQLITE_CONFIG_MULTITHREAD `__, `SQLITE_CONFIG_MUTEX `__, `SQLITE_CONFIG_PAGECACHE `__, `SQLITE_CONFIG_PCACHE `__, `SQLITE_CONFIG_PCACHE2 `__, `SQLITE_CONFIG_PCACHE_HDRSZ `__, `SQLITE_CONFIG_PMASZ `__, `SQLITE_CONFIG_SCRATCH `__, `SQLITE_CONFIG_SERIALIZED `__, `SQLITE_CONFIG_SINGLETHREAD `__, `SQLITE_CONFIG_SQLLOG `__, `SQLITE_CONFIG_URI `__, `SQLITE_CONFIG_WIN32_HEAPSIZE `__ mapping_conflict_resolution_modes `Conflict resolution modes `__ `SQLITE_ABORT `__, `SQLITE_FAIL `__, `SQLITE_IGNORE `__, `SQLITE_REPLACE `__, `SQLITE_ROLLBACK `__ mapping_db_config `Database Connection Configuration Options `__ `SQLITE_DBCONFIG_ENABLE_FKEY `__, `SQLITE_DBCONFIG_ENABLE_TRIGGER `__, `SQLITE_DBCONFIG_LOOKASIDE `__ mapping_db_status `Status Parameters for database connections `__ `SQLITE_DBSTATUS_CACHE_HIT `__, `SQLITE_DBSTATUS_CACHE_MISS `__, `SQLITE_DBSTATUS_CACHE_USED `__, `SQLITE_DBSTATUS_CACHE_WRITE `__, `SQLITE_DBSTATUS_DEFERRED_FKS `__, `SQLITE_DBSTATUS_LOOKASIDE_HIT `__, `SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL `__, `SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE `__, `SQLITE_DBSTATUS_LOOKASIDE_USED `__, `SQLITE_DBSTATUS_MAX `__, `SQLITE_DBSTATUS_SCHEMA_USED `__, `SQLITE_DBSTATUS_STMT_USED `__ mapping_device_characteristics `Device Characteristics `__ `SQLITE_IOCAP_ATOMIC `__, `SQLITE_IOCAP_ATOMIC16K `__, `SQLITE_IOCAP_ATOMIC1K `__, `SQLITE_IOCAP_ATOMIC2K `__, `SQLITE_IOCAP_ATOMIC32K `__, `SQLITE_IOCAP_ATOMIC4K `__, `SQLITE_IOCAP_ATOMIC512 `__, `SQLITE_IOCAP_ATOMIC64K `__, `SQLITE_IOCAP_ATOMIC8K `__, `SQLITE_IOCAP_IMMUTABLE `__, `SQLITE_IOCAP_POWERSAFE_OVERWRITE `__, `SQLITE_IOCAP_SAFE_APPEND `__, `SQLITE_IOCAP_SEQUENTIAL `__, `SQLITE_IOCAP_UNDELETABLE_WHEN_OPEN `__ 
mapping_extended_result_codes `Extended Result Codes `__ `SQLITE_ABORT_ROLLBACK `__, `SQLITE_AUTH_USER `__, `SQLITE_BUSY_RECOVERY `__, `SQLITE_BUSY_SNAPSHOT `__, `SQLITE_CANTOPEN_CONVPATH `__, `SQLITE_CANTOPEN_FULLPATH `__, `SQLITE_CANTOPEN_ISDIR `__, `SQLITE_CANTOPEN_NOTEMPDIR `__, `SQLITE_CONSTRAINT_CHECK `__, `SQLITE_CONSTRAINT_COMMITHOOK `__, `SQLITE_CONSTRAINT_FOREIGNKEY `__, `SQLITE_CONSTRAINT_FUNCTION `__, `SQLITE_CONSTRAINT_NOTNULL `__, `SQLITE_CONSTRAINT_PRIMARYKEY `__, `SQLITE_CONSTRAINT_ROWID `__, `SQLITE_CONSTRAINT_TRIGGER `__, `SQLITE_CONSTRAINT_UNIQUE `__, `SQLITE_CONSTRAINT_VTAB `__, `SQLITE_CORRUPT_VTAB `__, `SQLITE_IOERR_ACCESS `__, `SQLITE_IOERR_BLOCKED `__, `SQLITE_IOERR_CHECKRESERVEDLOCK `__, `SQLITE_IOERR_CLOSE `__, `SQLITE_IOERR_CONVPATH `__, `SQLITE_IOERR_DELETE `__, `SQLITE_IOERR_DELETE_NOENT `__, `SQLITE_IOERR_DIR_CLOSE `__, `SQLITE_IOERR_DIR_FSYNC `__, `SQLITE_IOERR_FSTAT `__, `SQLITE_IOERR_FSYNC `__, `SQLITE_IOERR_GETTEMPPATH `__, `SQLITE_IOERR_LOCK `__, `SQLITE_IOERR_MMAP `__, `SQLITE_IOERR_NOMEM `__, `SQLITE_IOERR_RDLOCK `__, `SQLITE_IOERR_READ `__, `SQLITE_IOERR_SEEK `__, `SQLITE_IOERR_SHMLOCK `__, `SQLITE_IOERR_SHMMAP `__, `SQLITE_IOERR_SHMOPEN `__, `SQLITE_IOERR_SHMSIZE `__, `SQLITE_IOERR_SHORT_READ `__, `SQLITE_IOERR_TRUNCATE `__, `SQLITE_IOERR_UNLOCK `__, `SQLITE_IOERR_WRITE `__, `SQLITE_LOCKED_SHAREDCACHE `__, `SQLITE_NOTICE_RECOVER_ROLLBACK `__, `SQLITE_NOTICE_RECOVER_WAL `__, `SQLITE_READONLY_CANTLOCK `__, `SQLITE_READONLY_DBMOVED `__, `SQLITE_READONLY_RECOVERY `__, `SQLITE_READONLY_ROLLBACK `__, `SQLITE_WARNING_AUTOINDEX `__ mapping_file_control `Standard File Control Opcodes `__ `SQLITE_FCNTL_BUSYHANDLER `__, `SQLITE_FCNTL_CHUNK_SIZE `__, `SQLITE_FCNTL_COMMIT_PHASETWO `__, `SQLITE_FCNTL_FILE_POINTER `__, `SQLITE_FCNTL_GET_LOCKPROXYFILE `__, `SQLITE_FCNTL_HAS_MOVED `__, `SQLITE_FCNTL_LAST_ERRNO `__, `SQLITE_FCNTL_LOCKSTATE `__, `SQLITE_FCNTL_MMAP_SIZE `__, `SQLITE_FCNTL_OVERWRITE `__, `SQLITE_FCNTL_PERSIST_WAL `__, 
`SQLITE_FCNTL_POWERSAFE_OVERWRITE `__, `SQLITE_FCNTL_PRAGMA `__, `SQLITE_FCNTL_RBU `__, `SQLITE_FCNTL_SET_LOCKPROXYFILE `__, `SQLITE_FCNTL_SIZE_HINT `__, `SQLITE_FCNTL_SYNC `__, `SQLITE_FCNTL_SYNC_OMITTED `__, `SQLITE_FCNTL_TEMPFILENAME `__, `SQLITE_FCNTL_TRACE `__, `SQLITE_FCNTL_VFSNAME `__, `SQLITE_FCNTL_WAL_BLOCK `__, `SQLITE_FCNTL_WIN32_AV_RETRY `__, `SQLITE_FCNTL_WIN32_SET_HANDLE `__, `SQLITE_FCNTL_ZIPVFS `__ mapping_limits `Run-Time Limit Categories `__ `SQLITE_LIMIT_ATTACHED `__, `SQLITE_LIMIT_COLUMN `__, `SQLITE_LIMIT_COMPOUND_SELECT `__, `SQLITE_LIMIT_EXPR_DEPTH `__, `SQLITE_LIMIT_FUNCTION_ARG `__, `SQLITE_LIMIT_LENGTH `__, `SQLITE_LIMIT_LIKE_PATTERN_LENGTH `__, `SQLITE_LIMIT_SQL_LENGTH `__, `SQLITE_LIMIT_TRIGGER_DEPTH `__, `SQLITE_LIMIT_VARIABLE_NUMBER `__, `SQLITE_LIMIT_VDBE_OP `__, `SQLITE_LIMIT_WORKER_THREADS `__ mapping_locking_level `File Locking Levels `__ `SQLITE_LOCK_EXCLUSIVE `__, `SQLITE_LOCK_NONE `__, `SQLITE_LOCK_PENDING `__, `SQLITE_LOCK_RESERVED `__, `SQLITE_LOCK_SHARED `__ mapping_open_flags `Flags For File Open Operations `__ `SQLITE_OPEN_AUTOPROXY `__, `SQLITE_OPEN_CREATE `__, `SQLITE_OPEN_DELETEONCLOSE `__, `SQLITE_OPEN_EXCLUSIVE `__, `SQLITE_OPEN_FULLMUTEX `__, `SQLITE_OPEN_MAIN_DB `__, `SQLITE_OPEN_MAIN_JOURNAL `__, `SQLITE_OPEN_MASTER_JOURNAL `__, `SQLITE_OPEN_MEMORY `__, `SQLITE_OPEN_NOMUTEX `__, `SQLITE_OPEN_PRIVATECACHE `__, `SQLITE_OPEN_READONLY `__, `SQLITE_OPEN_READWRITE `__, `SQLITE_OPEN_SHAREDCACHE `__, `SQLITE_OPEN_SUBJOURNAL `__, `SQLITE_OPEN_TEMP_DB `__, `SQLITE_OPEN_TEMP_JOURNAL `__, `SQLITE_OPEN_TRANSIENT_DB `__, `SQLITE_OPEN_URI `__, `SQLITE_OPEN_WAL `__ mapping_result_codes `Result Codes `__ `SQLITE_ABORT `__, `SQLITE_AUTH `__, `SQLITE_BUSY `__, `SQLITE_CANTOPEN `__, `SQLITE_CONSTRAINT `__, `SQLITE_CORRUPT `__, `SQLITE_DONE `__, `SQLITE_EMPTY `__, `SQLITE_ERROR `__, `SQLITE_FORMAT `__, `SQLITE_FULL `__, `SQLITE_INTERNAL `__, `SQLITE_INTERRUPT `__, `SQLITE_IOERR `__, `SQLITE_LOCKED `__, `SQLITE_MISMATCH `__, 
`SQLITE_MISUSE `__, `SQLITE_NOLFS `__, `SQLITE_NOMEM `__, `SQLITE_NOTADB `__, `SQLITE_NOTFOUND `__, `SQLITE_NOTICE `__, `SQLITE_OK `__, `SQLITE_PERM `__, `SQLITE_PROTOCOL `__, `SQLITE_RANGE `__, `SQLITE_READONLY `__, `SQLITE_ROW `__, `SQLITE_SCHEMA `__, `SQLITE_TOOBIG `__, `SQLITE_WARNING `__ mapping_status `Status Parameters `__ `SQLITE_STATUS_MALLOC_COUNT `__, `SQLITE_STATUS_MALLOC_SIZE `__, `SQLITE_STATUS_MEMORY_USED `__, `SQLITE_STATUS_PAGECACHE_OVERFLOW `__, `SQLITE_STATUS_PAGECACHE_SIZE `__, `SQLITE_STATUS_PAGECACHE_USED `__, `SQLITE_STATUS_PARSER_STACK `__, `SQLITE_STATUS_SCRATCH_OVERFLOW `__, `SQLITE_STATUS_SCRATCH_SIZE `__, `SQLITE_STATUS_SCRATCH_USED `__ mapping_sync `Synchronization Type Flags `__ `SQLITE_SYNC_DATAONLY `__, `SQLITE_SYNC_FULL `__, `SQLITE_SYNC_NORMAL `__ mapping_virtual_table_configuration_options `Virtual Table Configuration Options `__ `SQLITE_VTAB_CONSTRAINT_SUPPORT `__ mapping_wal_checkpoint `Checkpoint Mode Values `__ `SQLITE_CHECKPOINT_FULL `__, `SQLITE_CHECKPOINT_PASSIVE `__, `SQLITE_CHECKPOINT_RESTART `__, `SQLITE_CHECKPOINT_TRUNCATE `__ mapping_xshmlock_flags `Flags for the xShmLock VFS method `__ `SQLITE_SHM_EXCLUSIVE `__, `SQLITE_SHM_LOCK `__, `SQLITE_SHM_SHARED `__, `SQLITE_SHM_UNLOCK `__ apsw-3.8.11.1-r1/doc/_sources/backup.txt0000664000000000000000000001123212565422270016336 0ustar rootroot.. Automatically generated by code2rst.py code2rst.py src/backup.c doc/backup.rst Edit src/backup.c not this file! .. currentmodule:: apsw .. _backup: Backup ****** A backup object encapsulates copying one database to another. You call :meth:`Connection.backup` on the destination database to get the backup object. Call :meth:`~backup.step` to copy some pages repeatedly dealing with errors as appropriate. Finally :meth:`~backup.finish` cleans up committing or rolling back and releasing locks. 
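For comparison, CPython's bundled ``sqlite3`` module (3.7 and later) exposes the same ``sqlite3_backup`` machinery through ``Connection.backup()``. A minimal runnable sketch with illustrative table names and page counts (note this is the stdlib API, not APSW's):

```python
# Sketch of the same page-by-page copy via the stdlib sqlite3
# module. backup() copies `pages` pages per step and calls the
# progress callback with (status, remaining, pagecount) after
# each step, mirroring b.remaining / b.pagecount in APSW.
import sqlite3

def copy_database():
    # populate a throwaway source database
    src = sqlite3.connect(":memory:")
    src.execute("create table t(x)")
    src.executemany("insert into t values(?)", [(i,) for i in range(5)])
    src.commit()

    dest = sqlite3.connect(":memory:")

    def progress(status, remaining, total):
        pass  # could report remaining/total here

    src.backup(dest, pages=100, progress=progress)

    rows = dest.execute("select count(*) from t").fetchone()[0]
    src.close()
    dest.close()
    return rows
```

As with APSW, the copy is page by page with no round trip through SQL, so indices and free pages come across as-is.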
Here is an example usage using the **with** statement to ensure :meth:`~backup.finish` is called:: # copies source.main into db with db.backup("main", source, "main") as b: while not b.done: b.step(100) print b.remaining, b.pagecount, "\r", If you are not using **with** then you'll need to ensure :meth:`~backup.finish` is called:: # copies source.main into db b=db.backup("main", source, "main") try: while not b.done: b.step(100) print b.remaining, b.pagecount, "\r", finally: b.finish() Important details ================= The database is copied page by page. This means that there is not a round trip via SQL. All pages are copied including free ones. The destination database is locked during the copy. You will get a :exc:`ThreadingViolationError` if you attempt to use it. backup class ============ .. class:: backup You create a backup instance by calling :meth:`Connection.backup`. .. method:: backup.__enter__() -> self You can use the backup object as a `context manager `_ as defined in :pep:`0343`. The :meth:`~backup.__exit__` method ensures that backup is :meth:`finished `. .. method:: backup.__exit__() -> False Implements context manager in conjunction with :meth:`~backup.__enter__` ensuring that the copy is :meth:`finished `. .. method:: backup.close([force=False]) Does the same thing as :meth:`~backup.finish`. This extra api is provided to give the same api as other APSW objects such as :meth:`Connection.close`, :meth:`blob.close` and :meth:`Cursor.close`. It is safe to call this method multiple times. :param force: If true then any exceptions are ignored. .. attribute:: backup.done A boolean that is True if the copy completed in the last call to :meth:`~backup.step`. .. index:: sqlite3_backup_finish .. method:: backup.finish() Completes the copy process. If all pages have been copied then the transaction is committed on the destination database, otherwise it is rolled back. This method must be called for your backup to take effect. 
The backup object will always be finished even if there is an exception. It is safe to call this method multiple times. Calls: `sqlite3_backup_finish `__ .. index:: sqlite3_backup_pagecount .. attribute:: backup.pagecount Read only. How many pages were in the source database after the last step. If you haven't called :meth:`~backup.step` or the backup object has been :meth:`finished ` then zero is returned. Calls: `sqlite3_backup_pagecount `__ .. index:: sqlite3_backup_remaining .. attribute:: backup.remaining Read only. How many pages were remaining to be copied after the last step. If you haven't called :meth:`~backup.step` or the backup object has been :meth:`finished ` then zero is returned. Calls: `sqlite3_backup_remaining `__ .. index:: sqlite3_backup_step .. method:: backup.step([npages=All]) -> bool Copies *npages* pages from the source to destination database. The source database is locked during the copy so using smaller values allows other access to the source database. The destination database is always locked until the backup object is :meth:`finished `. :param npages: How many pages to copy. If the parameter is omitted or negative then all remaining pages are copied. The default page size is 1024 bytes (1kb) which can be changed before database creation using a `pragma `_. This method may throw a :exc:`BusyError` or :exc:`LockedError` if unable to lock the source database. You can catch those and try again. :returns: True if this copied the last remaining outstanding pages, else false. This is the same value as :attr:`~backup.done` Calls: `sqlite3_backup_step `__ .. Automatically generated by code2rst.py code2rst.py src/blob.c doc/blob.rst Edit src/blob.c not this file! .. currentmodule:: apsw .. _blobio: Blob Input/Output ***************** A `blob `_ is a SQLite `datatype `_ representing a sequence of bytes. It can be zero or more bytes in size.
SQLite blobs have an absolute maximum size of 2GB and a `default maximum size `_ of 1GB. An alternate approach to using blobs is to store the data in files and store the filename in the database. Doing so loses the `ACID `_ properties of SQLite. zeroblob class ============== .. class:: zeroblob(size) If you want to insert a blob into a row, you previously needed to supply the entire blob in one go. To read just one byte also required retrieving the blob in its entirety. For example to insert a 100MB file you would have done:: largedata=open("largefile", "rb").read() cur.execute("insert into foo values(?)", (buffer(largedata),)) SQLite 3.5 allowed for incremental Blob I/O so you can read and write blobs in small amounts. You cannot change the size of a blob so you need to reserve space which you do through zeroblob which creates a blob of the specified size but full of zero bytes. For example you would reserve space for your 100MB one of these two ways:: cur.execute("insert into foo values(zeroblob(100000000))") cur.execute("insert into foo values(?)", (apsw.zeroblob(100000000),)) This class is used for the second way. Once a blob exists in the database, you then use the :class:`blob` class to read and write its contents. .. method:: zeroblob.length() -> int Size of zero blob in bytes. blob class ========== .. class:: blob This object is created by :meth:`Connection.blobopen` and provides access to a blob in the database. It behaves like a Python file. At the C level it wraps a `sqlite3_blob `_. .. note:: You cannot change the size of a blob using this object. You should create it with the correct size in advance either by using :class:`zeroblob` or the `zeroblob() `_ function. See the :ref:`example `. .. method:: blob.__enter__() -> context You can use a blob as a `context manager `_ as defined in :pep:`0343`. When you use a *with* statement, the blob is always :meth:`closed <~blob.close>` on exit from the block, even if an exception occurred in the block.
For example:: with connection.blobopen() as blob: blob.write("...") res=blob.read(1024) .. method:: blob.__exit__() -> False Implements context manager in conjunction with :meth:`~blob.__enter__`. Any exception that happened in the *with* block is raised after closing the blob. .. index:: sqlite3_blob_close .. method:: blob.close([force=False]) Closes the blob. Note that even if an error occurs the blob is still closed. .. note:: In some cases errors that technically occurred in the :meth:`~blob.read` and :meth:`~blob.write` routines may not be reported until close is called. Similarly errors that occurred in those methods (eg calling :meth:`~blob.write` on a read-only blob) may also be re-reported in :meth:`~blob.close`. (This behaviour is what the underlying SQLite APIs do - it is not APSW doing it.) It is okay to call :meth:`~blob.close` multiple times. :param force: Ignores any errors during close. Calls: `sqlite3_blob_close `__ .. index:: sqlite3_blob_bytes .. method:: blob.length() -> int Returns the size of the blob in bytes. Calls: `sqlite3_blob_bytes `__ .. index:: sqlite3_blob_read .. method:: blob.read([nbytes]) -> bytes Reads amount of data requested, or till end of file, whichever is earlier. Attempting to read beyond the end of the blob returns the empty string/bytes, in the same manner as end of file on normal file objects. :rtype: (Python 2) string (Python 3) bytes Calls: `sqlite3_blob_read `__ .. index:: sqlite3_blob_read .. method:: blob.readinto(buffer[, offset=0, length=remaining-buffer]) -> None Reads from the blob into a buffer you have supplied. This method is useful if you already have a buffer like object that data is being assembled in, and avoids allocating results in :meth:`blob.read` and then copying into buffer. :param buffer: A writable buffer like object. In Python 2.6 onwards there is a bytearray type that is very useful. :class:`array.array` also works. 
:param offset: The position to start writing into the buffer defaulting to the beginning. :param length: How much of the blob to read. The default is the remaining space left in the buffer. Note that if there is more space available than blob left then you will get a :exc:`ValueError` exception. Calls: `sqlite3_blob_read `__ .. index:: sqlite3_blob_reopen .. method:: blob.reopen(rowid) Change this blob object to point to a different row. It can be faster than closing an existing blob and opening a new one. Calls: `sqlite3_blob_reopen `__ .. method:: blob.seek(offset[, whence=0]) -> None Changes current position to *offset* biased by *whence*. :param offset: New position to seek to. Can be a positive or negative number. :param whence: Use 0 if *offset* is relative to the beginning of the blob, 1 if *offset* is relative to the current position, and 2 if *offset* is relative to the end of the blob. :raises ValueError: If the resulting offset is before the beginning (less than zero) or beyond the end of the blob. .. method:: blob.tell() -> int Returns the current offset. .. index:: sqlite3_blob_write .. method:: blob.write(data) -> None Writes the data to the blob. :param data: (Python 2) buffer or string. (Python 3) buffer or bytes. :raises TypeError: Wrong data type :raises ValueError: If the data would go beyond the end of the blob. You cannot increase the size of a blob by writing beyond the end. You need to use :class:`zeroblob` to set the desired size first when inserting the blob. Calls: `sqlite3_blob_write `__ APSW documentation ================== .. centered:: APSW |version| released |today| Use with SQLite 3.8.11 or later, CPython 2.3 or later including CPython 3. APSW provides an SQLite 3 wrapper offering the thinnest possible layer over the `SQLite `_ database library. Everything you can do from the `SQLite C API `_, you can do from Python.
Although APSW looks vaguely similar to the :pep:`249` (DBAPI), it is :ref:`not compliant ` with that API because instead it works the way SQLite 3 does. (`pysqlite `_ is DBAPI compliant - see the :ref:`differences between apsw and pysqlite 2 `). APSW is hosted at https://github.com/rogerbinns/apsw Contents: .. toctree:: :maxdepth: 2 tips example download build extensions apsw connection cursor blob backup vtable vfs shell exceptions types execution dbapi pysqlite benchmarking copyright changes Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` .. Automatically generated by code2rst.py code2rst.py src/vfs.c doc/vfs.rst Edit src/vfs.c not this file! .. currentmodule:: apsw .. _vfs: Virtual File System (VFS) ************************* SQLite 3.6 has new `VFS functionality `_ which defines the interface between the SQLite core and the underlying operating system. The majority of the functionality deals with files. APSW exposes this functionality letting you provide your own routines. You can also *inherit* from an existing vfs making it easy to augment or override specific routines. For example you could obfuscate your database by XORing the data, implemented by augmenting the read and write methods. The method names are exactly the same as SQLite uses making it easier to read the SQLite documentation, trouble tickets, web searches or mailing lists. The SQLite convention results in names like xAccess, xCurrentTime and xWrite. You specify which VFS to use as a parameter to the :class:`Connection` constructor. .. code-block:: python db=apsw.Connection("file", vfs="myvfs") The easiest way to get started is to make a :class:`VFS` derived class that inherits from the default vfs. Then override methods you want to change behaviour of.
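The XOR obfuscation mentioned above reduces to a self-inverse byte transform applied inside the overridden read and write methods. A sketch of just that transform (the key value is an arbitrary illustration, not part of APSW; the VFSFile subclass wiring is shown in the obfuscation example the text refers to):

```python
def xor_obfuscate(data, key=0xA5):
    """Self-inverse transform: applying it twice returns the
    original bytes, which is why the same function can serve
    both the xRead and xWrite overrides."""
    return bytes(b ^ key for b in data)

# round trip: obfuscating twice recovers the original page
page = b"SQLite format 3\x00"
assert xor_obfuscate(xor_obfuscate(page)) == page
```

Because the transform is its own inverse, the overridden xWrite applies it before passing data down, and the overridden xRead applies it again after reading, leaving SQLite itself unaware of the obfuscation.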
If you want to just change how file operations are done then you have to override :meth:`VFS.xOpen` to return a file instance that has your overridden :class:`VFSFile` methods. The :ref:`example ` demonstrates obfuscating the database file contents. .. note:: All strings supplied and returned to :class:`VFS`/:class:`VFSFile` routines are treated as Unicode. Exceptions and errors ===================== To return an error from any routine you should raise an exception. The exception will be translated into the appropriate SQLite error code for SQLite. To return a specific SQLite error code use :meth:`exceptionfor`. If the exception does not map to any specific error code then :const:`SQLITE_ERROR` which corresponds to :exc:`SQLError` is returned to SQLite. The SQLite code that deals with VFS errors behaves in varying ways. Some routines have no way to return an error (eg `xDlOpen `_ just returns zero/NULL on being unable to load a library, `xSleep `_ has no error return parameter), others are unified (eg almost any error in xWrite will be returned to the user as disk full error). Sometimes errors are ignored as they are harmless such as when a journal can't be deleted after a commit (the journal is marked as obsolete before being deleted). Simple operations such as opening a database can result in many different VFS function calls such as hot journals being detected, locking, and read/writes for playback/rollback. To avoid confusion with exceptions being raised in the VFS and exceptions from normal code to open Connections or execute SQL queries, VFS exceptions are not raised in the normal way. (If they were, only one could be raised and it would obscure whatever exceptions the :class:`Connection` open or SQL query execute wanted to raise.) Instead the :meth:`VFS.excepthook` or :meth:`VFSFile.excepthook` method is called with a tuple of exception type, exception value and exception traceback. 
The default implementation of ``excepthook`` calls ``sys.excepthook()`` which under Python 2 shows the stack trace and under Python 3 merely prints the exception value. (If ``sys.excepthook`` fails then ``PyErr_Display()`` is called.)

In normal VFS usage there will be no exceptions raised, or specific expected ones which APSW clears after noting them and returning the appropriate value back to SQLite. The exception hooking behaviour helps you find issues in your code or unexpected behaviour of the external environment. Remember that :ref:`augmented stack traces ` are available which significantly increase detail about the exceptions.

As an example, let's say you have a divide by zero error in your xWrite routine. The table below shows what happens, with time going down and across.

+----------------------------------------------+--------------------------------+---------------------------------------------+
| Python Query Code                            | SQLite and APSW C code         | Python VFS code                             |
+==============================================+================================+=============================================+
| ``cursor.execute("update table set foo=3")`` |                                |                                             |
+----------------------------------------------+--------------------------------+---------------------------------------------+
|                                              | SQLite starts executing query  |                                             |
+----------------------------------------------+--------------------------------+---------------------------------------------+
|                                              |                                | Your VFS routines are called                |
+----------------------------------------------+--------------------------------+---------------------------------------------+
|                                              |                                | Your xWrite divides by zero                 |
+----------------------------------------------+--------------------------------+---------------------------------------------+
|                                              |                                | :meth:`VFSFile.excepthook` is called with   |
|                                              |                                | ZeroDivision exception                      |
+----------------------------------------------+--------------------------------+---------------------------------------------+
|                                              | :const:`SQLITE_ERROR` (closest |                                             |
|                                              | matching SQLite error code) is |                                             |
|                                              | returned to SQLite by APSW     |                                             |
+----------------------------------------------+--------------------------------+---------------------------------------------+
|                                              | SQLite error handling and      | More VFS routines are called. Any           |
|                                              | recovery operates which calls  | exceptions in these routines will result in |
|                                              | more VFS routines.             | :meth:`VFSFile.excepthook` being called with|
|                                              |                                | them.                                       |
+----------------------------------------------+--------------------------------+---------------------------------------------+
|                                              | SQLite returns                 |                                             |
|                                              | :const:`SQLITE_FULL` to APSW   |                                             |
+----------------------------------------------+--------------------------------+---------------------------------------------+
| APSW returns :class:`apsw.FullError`         |                                |                                             |
+----------------------------------------------+--------------------------------+---------------------------------------------+

VFS class
=========

.. index:: sqlite3_vfs_register, sqlite3_vfs_find

.. class:: VFS(name[, base=None, makedefault=False, maxpathname=1024])

  Provides operating system access. You can get an overview in the `SQLite documentation <https://sqlite.org/c3ref/vfs.html>`_. To create a VFS your Python class must inherit from :class:`VFS`.

  :param name: The name to register this vfs under. If the name already exists then this vfs will replace the prior one of the same name. Use :meth:`apsw.vfsnames` to get a list of registered vfs names.

  :param base: If you would like to inherit behaviour from an already registered vfs then give its name. To inherit from the default vfs, use a zero length string ``""`` as the name.

  :param makedefault: If true then this vfs will be registered as the default, and will be used by any opens that don't specify a vfs.

  :param maxpathname: The maximum length of database name in bytes when represented in UTF-8. If a pathname is passed in longer than this value then SQLite will not `be able to open it `__.
  :raises ValueError: If *base* is not :const:`None` and the named vfs is not currently registered.

  Calls:

  * `sqlite3_vfs_register <https://sqlite.org/c3ref/vfs_find.html>`__
  * `sqlite3_vfs_find <https://sqlite.org/c3ref/vfs_find.html>`__

.. method:: VFS.excepthook(etype, evalue, etraceback)

  Called when there has been an exception in a :class:`VFS` routine. The default implementation calls ``sys.excepthook`` and if that fails then ``PyErr_Display``. The three arguments correspond to what ``sys.exc_info()`` would return.

  :param etype: The exception type
  :param evalue: The exception value
  :param etraceback: The exception traceback. Note this includes all frames all the way up to the thread being started.

.. index:: sqlite3_vfs_unregister

.. method:: VFS.unregister()

  Unregisters the VFS making it unavailable to future database opens. You do not need to call this as the VFS is automatically unregistered when it has no more references or open databases using it. It is however useful to call if you have made your VFS be the default and wish to immediately make it unavailable. It is safe to call this routine multiple times.

  Calls: `sqlite3_vfs_unregister <https://sqlite.org/c3ref/vfs_find.html>`__

.. method:: VFS.xAccess(pathname, flags) -> bool

  SQLite wants to check access permissions. Return True or False accordingly.

  :param pathname: File or directory to check
  :param flags: One of the `access flags <https://sqlite.org/c3ref/c_access_exists.html>`_

.. method:: VFS.xCurrentTime() -> float

  Return the `Julian Day Number <https://en.wikipedia.org/wiki/Julian_day>`_ as a floating point number where the integer portion is the day and the fractional part is the time. Do not adjust for timezone (ie use `UTC <https://en.wikipedia.org/wiki/Coordinated_Universal_Time>`_).

.. method:: VFS.xDelete(filename, syncdir)

  Delete the named file.

  .. note::

    SQLite has 3 different behaviours depending on version for how to handle missing files.

    +----------------------------------------+----------------------------------------------------+
    | SQLite < 3.7.8                         | Raise an :exc:`IOError` if the file does not       |
    |                                        | exist.                                             |
    +----------------------------------------+----------------------------------------------------+
    | SQLite >= 3.7.8 and SQLite < 3.7.15    | Do not raise an exception                          |
    +----------------------------------------+----------------------------------------------------+
    | SQLite >= 3.7.15                       | Raise an :exc:`IOError` exception with             |
    |                                        | extendedresult :const:`SQLITE_IOERR_DELETE_NOENT`  |
    +----------------------------------------+----------------------------------------------------+

  :param filename: File to delete

  :param syncdir: If True then the directory should be synced ensuring that the file deletion has been recorded on the disk platters. ie if there was an immediate power failure after this call returns, on a reboot the file would still be deleted.

.. method:: VFS.xDlClose(handle)

  Close and unload the library corresponding to the handle you returned from :meth:`~VFS.xDlOpen`. You can use ctypes to do this::

    def xDlClose(handle):
       # Note leading underscore in _ctypes
       _ctypes.dlclose(handle)       # Linux/Mac/Unix
       _ctypes.FreeLibrary(handle)   # Windows

.. method:: VFS.xDlError() -> string

  Return an error string describing the last error of :meth:`~VFS.xDlOpen` or :meth:`~VFS.xDlSym` (ie they returned zero/NULL). If you do not supply this routine then SQLite provides a generic message. To implement this method, catch exceptions in :meth:`~VFS.xDlOpen` or :meth:`~VFS.xDlSym`, turn them into strings, save them, and return them in this routine. If you have an error in this routine or return None then SQLite's generic message will be used.

.. method:: VFS.xDlOpen(filename) -> number

  Load the shared library. You should return a number which will be treated as a void pointer at the C level. On error you should return 0 (NULL). The number is passed as is to :meth:`~VFS.xDlSym`/:meth:`~VFS.xDlClose` so it can represent anything that is convenient for you (eg an index into an array). You can use ctypes to load a library::

    def xDlOpen(name):
       return ctypes.cdll.LoadLibrary(name)._handle

.. method:: VFS.xDlSym(handle, symbol) -> address

  Returns the address of the named symbol which will be called by SQLite. On error you should return 0 (NULL). You can use ctypes::

    def xDlSym(ptr, name):
       return _ctypes.dlsym(ptr, name)  # Linux/Unix/Mac etc (note leading underscore)
       return ctypes.win32.kernel32.GetProcAddress(ptr, name)  # Windows

  :param handle: The value returned from an earlier :meth:`~VFS.xDlOpen` call
  :param symbol: A string
  :rtype: An int/long with the symbol address

.. method:: VFS.xFullPathname(name) -> string

  Return the absolute pathname for name. You can use ``os.path.abspath`` to do this.

.. method:: VFS.xGetLastError() -> string

  This method is to return text describing the last error that happened in this thread. If not implemented SQLite's more generic message is used. However the method is :cvstrac:`never called <3337>` by SQLite.

.. method:: VFS.xGetSystemCall(name) -> int

  Returns a pointer for the current method implementing the named system call. Return None if the call does not exist.

.. method:: VFS.xNextSystemCall(name) -> String or None

  This method is repeatedly called to iterate over all of the system calls in the vfs. When called with None you should return the name of the first system call. In subsequent calls return the name after the one passed in. If name is the last system call then return None.

  .. note::

    Because of internal SQLite implementation semantics, memory will be leaked on each call to this function. Consequently you should build up the list of call names once rather than repeatedly doing it.

.. method:: VFS.xOpen(name, flags) -> VFSFile or similar object

  This method should return a new file object based on name. You can return a :class:`VFSFile` from a completely different VFS.

  :param name: File to open. Note that *name* may be :const:`None` in which case you should open a temporary file with a name of your choosing. May be an instance of :class:`URIFilename`.
  :param flags: A list of two integers ``[inputflags, outputflags]``. Each integer is one or more of the `open flags <https://sqlite.org/c3ref/c_open_autoproxy.html>`_ binary OR'd together. The ``inputflags`` tells you what SQLite wants. For example :const:`SQLITE_OPEN_DELETEONCLOSE` means the file should be automatically deleted when closed. The ``outputflags`` describes how you actually did open the file. For example if you opened it read only then :const:`SQLITE_OPEN_READONLY` should be set.

.. method:: VFS.xRandomness(numbytes) -> bytes

  This method is called once when SQLite needs to seed the random number generator. It is called on the default VFS only. It is not called again, even across :meth:`apsw.shutdown` calls. You can return less than the number of bytes requested, including None. If you return more then the surplus is ignored.

  :rtype: (Python 2) string, buffer (Python 3) bytes, buffer

.. method:: VFS.xSetSystemCall(name, pointer) -> bool

  Change a system call used by the VFS. This is useful for testing and some other scenarios such as sandboxing.

  :param name: The string name of the system call
  :param pointer: A pointer provided as an int/long. There is no reference counting or other memory tracking of the pointer. If you provide one you need to ensure it is around for the lifetime of this and any other related VFS.

  Raise an exception to return an error. If the system call does not exist then raise :exc:`NotFoundError`.

  :returns: True if the system call was set. False if the system call is not known.

.. method:: VFS.xSleep(microseconds) -> integer

  Pause execution of the thread for at least the specified number of microseconds (millionths of a second). This routine is typically called from the busy handler.

  :returns: How many microseconds you actually requested the operating system to sleep for. For example if your operating system sleep call only takes seconds then you would have to have rounded the microseconds number up to the nearest second and should return that rounded up value.
VFSFile class
=============

.. class:: VFSFile(vfs, name, flags)

  Wraps access to a file. You only need to derive from this class if you want the file object returned from :meth:`VFS.xOpen` to inherit from an existing VFS implementation.

  .. note::

    All file sizes and offsets are 64 bit quantities even on 32 bit operating systems.

  :param vfs: The vfs you want to inherit behaviour from. You can use an empty string ``""`` to inherit from the default vfs.
  :param name: The name of the file being opened. May be an instance of :class:`URIFilename`.
  :param flags: A two item list ``[inflags, outflags]`` as detailed in :meth:`VFS.xOpen`.

  :raises ValueError: If the named VFS is not registered.

  .. note::

    If the VFS that you inherit from supports :ref:`write ahead logging ` then your :class:`VFSFile` will also support the xShm methods necessary to implement wal.

  .. seealso:: :meth:`VFS.xOpen`

.. method:: VFSFile.excepthook(etype, evalue, etraceback)

  Called when there has been an exception in a :class:`VFSFile` routine. The default implementation calls ``sys.excepthook`` and if that fails then ``PyErr_Display``. The three arguments correspond to what ``sys.exc_info()`` would return.

  :param etype: The exception type
  :param evalue: The exception value
  :param etraceback: The exception traceback. Note this includes all frames all the way up to the thread being started.

.. method:: VFSFile.xCheckReservedLock()

  Returns True if any database connection (in this or another process) has a lock other than `SQLITE_LOCK_NONE or SQLITE_LOCK_SHARED <https://sqlite.org/c3ref/c_lock_exclusive.html>`_.

.. method:: VFSFile.xClose()

  Close the database. Note that even if you return an error you should still close the file. It is safe to call this method multiple times.

.. method:: VFSFile.xDeviceCharacteristics() -> int

  Return `I/O capabilities <https://sqlite.org/c3ref/c_iocap_atomic.html>`_ (bitwise or of appropriate values). If you do not implement the function or have an error then 0 (the SQLite default) is returned.

..
method:: VFSFile.xFileControl(op, ptr) -> bool Receives `file control `_ request typically issued by :meth:`Connection.filecontrol`. See :meth:`Connection.filecontrol` for an example of how to pass a Python object to this routine. :param op: A numeric code. Codes below 100 are reserved for SQLite internal use. :param ptr: An integer corresponding to a pointer at the C level. :returns: A boolean indicating if the op was understood As of SQLite 3.6.10, this method is called by SQLite if you have inherited from an underlying VFSFile. Consequently ensure you pass any unrecognised codes through to your super class. For example:: def xFileControl(self, op, ptr): if op==1027: process_quick(ptr) elif op==1028: obj=ctypes.py_object.from_address(ptr).value else: # this ensures superclass implementation is called return super(MyFile, self).xFileControl(op, ptr) # we understood the op return True .. method:: VFSFile.xFileSize() -> int Return the size of the file in bytes. Remember that file sizes are 64 bit quantities even on 32 bit operating systems. .. method:: VFSFile.xLock(level) Increase the lock to the level specified which is one of the `SQLITE_LOCK `_ family of constants. If you can't increase the lock level because someone else has locked it, then raise :exc:`BusyError`. .. method:: VFSFile.xRead(amount, offset) -> bytes Read the specified *amount* of data starting at *offset*. You should make every effort to read all the data requested, or return an error. If you have the file open for non-blocking I/O or if signals happen then it is possible for the underlying operating system to do a partial read. You will need to request the remaining data. Except for empty files SQLite considers short reads to be a fatal error. :param amount: Number of bytes to read :param offset: Where to start reading. This number may be 64 bit once the database is larger than 2GB. :rtype: (Python 2) string, buffer. (Python 3) bytes, buffer .. 
method:: VFSFile.xSectorSize() -> int Return the native underlying sector size. SQLite uses the value returned in determining the default database page size. If you do not implement the function or have an error then 4096 (the SQLite default) is returned. .. method:: VFSFile.xSync(flags) Ensure data is on the disk platters (ie could survive a power failure immediately after the call returns) with the `sync flags `_ detailing what needs to be synced. You can sync more than what is requested. .. method:: VFSFile.xTruncate(newsize) Set the file length to *newsize* (which may be more or less than the current length). .. method:: VFSFile.xUnlock(level) Decrease the lock to the level specified which is one of the `SQLITE_LOCK `_ family of constants. .. method:: VFSFile.xWrite(data, offset) Write the *data* starting at absolute *offset*. You must write all the data requested, or return an error. If you have the file open for non-blocking I/O or if signals happen then it is possible for the underlying operating system to do a partial write. You will need to write the remaining data. :param offset: Where to start writing. This number may be 64 bit once the database is larger than 2GB. :param data: (Python 2) string, (Python 3) bytes URIFilename class ================= .. class:: URIFilename SQLite uses a convoluted method of storing `uri parameters `__ after the filename binding the C filename representation and parameters together. This class encapsulates that binding. The :ref:`example ` shows usage of this class. Your :meth:`VFS.xOpen` method will generally be passed one of these instead of a string as the filename if the URI flag was used or the main database flag is set. You can safely pass it on to the :class:`VFSFile` constructor which knows how to get the name back out. .. method:: URIFilename.filename() -> str Returns the filename. .. index:: sqlite3_uri_boolean .. 
method:: URIFilename.uri_boolean(name, default) -> bool Returns the boolean value for parameter `name` or `default` if not present. Calls: `sqlite3_uri_boolean `__ .. index:: sqlite3_uri_int64 .. method:: URIFilename.uri_int(name, default) -> int Returns the integer value for parameter `name` or `default` if not present. Calls: `sqlite3_uri_int64 `__ .. index:: sqlite3_uri_parameter .. method:: URIFilename.uri_parameter(name) -> str Returns the value of parameter `name` or None. Calls: `sqlite3_uri_parameter `__ apsw-3.8.11.1-r1/doc/_sources/extensions.txt0000664000000000000000000000604612257607024017277 0ustar rootroot.. currentmodule:: apsw .. _extensions: Extensions ********** SQLite includes a number of extensions providing additional functionality. All extensions are disabled by default and you need to take steps to have them available at compilation time, to enable them and then to use them. .. _ext-asyncvfs: Asynchronous VFS ================ This extension does SQLite I/O in a background thread processing a queue of requests. To enable it you must have used :option:`fetch --asyncvfs` to :file:`setup.py` at some point. It is enabled by the downloaded file :file:`sqlite3async.c` being present in the same directory as :file:`setup.py`. See the `SQLite documentation `__. To use you must first call :func:`apsw.async_initialize` which will register the VFS. If you didn't make it the default VFS then you need to specify it when opening your database. From this point on, any operations you perform that require writing to the database will be queued and the database locked. You should start a background thread to perform the write operations calling :func:`apsw.async_run` to do the work. You can call :func:`apsw.async_control` to set and get various things (eg adding delays, controlling locking) as well as telling it when the writer code should exit. 
This is a simple example::

    # Inherit from default vfs, do not make this the new default
    asyncvfsname=apsw.async_initialize("", False)
    # Open database
    db=apsw.Connection("database", vfs=asyncvfsname)
    # Make a worker thread
    t=threading.Thread(target=apsw.async_run)
    t.start()
    # do some work
    cur=db.cursor()
    cur.execute("..")
    # Close db
    db.close()
    # Tell worker to quit when queue is empty
    apsw.async_control(apsw.SQLITEASYNC_HALT, apsw.SQLITEASYNC_HALT_IDLE)

.. _ext-fts3:

FTS3/4
======

This is the third version of the `full text search <https://sqlite.org/fts3.html>`__ extension. It makes it easy to find words in multi-word text fields. You must enable the extension via :ref:`setup.py build flags ` before it will work. There are no additional APIs and the `documented SQL <https://sqlite.org/fts3.html>`__ works as is.

Note that FTS4 is some augmentations to FTS3 and is enabled whenever FTS3 is enabled, as described in the `documentation <https://sqlite.org/fts3.html>`__.

.. _ext-icu:

ICU
===

The ICU extension provides an `International Components for Unicode `__ interface, in particular enabling you to do sorting and regular expressions in a locale aware way. The `documentation `__ shows how to use it.

.. _ext-rtree:

RTree
=====

The RTree extension provides a `spatial table <https://sqlite.org/rtree.html>`_ - see the `documentation <https://sqlite.org/rtree.html>`__. You must enable the extension via :ref:`setup.py build flags ` before it will work. There are no additional APIs and the `documented SQL <https://sqlite.org/rtree.html>`__ works as is.

.. _benchmarking:

Benchmarking
============

Before you do any benchmarking with APSW or other ways of accessing SQLite, you must understand how and when SQLite does transactions. See `transaction control `_. **APSW does not alter SQLite's behaviour with transactions.**

Some access layers try to interpret your SQL and manage transactions behind your back, which may or may not work well with SQLite also doing its own transactions. You should always manage your transactions yourself.
For example to insert 1,000 rows wrap it in a single transaction else you will have 1,000 transactions. The best clue that you have one transaction per statement is having a maximum of 60 statements per second. You need two drive rotations to do a transaction - the data has to be committed to the main file and the journal - and 7200 RPM drives do 120 rotations a second. On the other hand if you don't put in the transaction boundaries yourself and get more than 60 statements a second, then your access mechanism is silently starting transactions for you. This topic also comes up fairly frequently in the SQLite mailing list archives. .. _speedtest: speedtest --------- APSW includes a speed testing script as part of the :ref:`source distribution `. You can use the script to compare SQLite performance across different versions of SQLite, different host systems (hard drives and controllers matter) as well as between pysqlite and APSW. The underlying queries are based on `SQLite's speed test `_. .. speedtest-begin .. code-block:: text $ python speedtest.py --help Usage: speedtest.py [options] Options: -h, --help show this help message and exit --apsw Include apsw in testing (False) --pysqlite Include pysqlite in testing (False) --correctness Do a correctness test --scale=SCALE How many statements to execute. Each unit takes about 2 seconds per test on memory only databases. [Default 10] --database=DATABASE The database file to use [Default :memory:] --tests=TESTS What tests to run [Default bigstmt,statements,statements_nobindings] --iterations=N How many times to run the tests [Default 4] --tests-detail Print details of what the tests do. (Does not run the tests) --dump-sql=FILENAME Name of file to dump SQL to. This is useful for feeding into the SQLite command line shell. --sc-size=N Size of the statement cache. APSW will disable cache with value of zero. 
                        Pysqlite ensures a minimum of 5 [Default 100]
    --unicode=UNICODE   Percentage of text that is unicode characters
                        [Default 0]
    --data-size=SIZE    Maximum size in characters of data items - keep this
                        number small unless you are on 64 bits and have lots
                        of memory with a small scale - you can easily consume
                        multiple gigabytes [Default same as original TCL
                        speedtest]

    $ python speedtest.py --tests-detail
    bigstmt:

      Supplies the SQL as a single string consisting of multiple
      statements. apsw handles this normally via cursor.execute while
      pysqlite requires that cursor.executescript is called. The string
      will be several kilobytes and with a factor of 50 will be in the
      megabyte range. This is the kind of query you would run if you were
      restoring a database from a dump. (Note that pysqlite silently
      ignores returned data which also makes it execute faster).

    statements:

      Runs the SQL queries but uses bindings (? parameters). eg::

        for i in range(3):
           cursor.execute("insert into table foo values(?)", (i,))

      This test has many hits of the statement cache.

    statements_nobindings:

      Runs the SQL queries but doesn't use bindings. eg::

        cursor.execute("insert into table foo values(0)")
        cursor.execute("insert into table foo values(1)")
        cursor.execute("insert into table foo values(2)")

      This test has no statement cache hits and shows the overhead of
      having a statement cache.

In theory all the tests above should run in almost identical time as well as when using the SQLite command line shell. This tool shows you what happens in practice.

.. speedtest-end

.. _dbapinotes:

DBAPI notes
***********

.. currentmodule:: apsw

DBAPI is defined in :pep:`249`. This section describes how APSW complies with or differs from it.

Module Interface
================

There is no connect method. Use the :class:`Connection` constructor instead.

The Connection object and any cursors can be used in any thread.
As an extreme example, you could call :meth:`Cursor.next` in separate threads, each thread getting the next row. You cannot use the cursor concurrently in multiple threads, for example calling :meth:`Cursor.execute` at the same time. If you attempt to do so then an :exc:`exception ` will be raised. The Python Global Interpreter Lock (GIL) is released during all SQLite API calls allowing for maximum concurrency.

Three different paramstyles are supported. Note that SQLite starts parameter numbers from one not zero when using *qmark/numeric* style.

+-----------------+---------------------------------+
| qmark           | ``... WHERE name=?``            |
+-----------------+---------------------------------+
| numeric         | ``... WHERE name=?4``           |
+-----------------+---------------------------------+
| named           | | ``... WHERE name=:name`` or   |
|                 | | ``... WHERE name=$name``      |
+-----------------+---------------------------------+

The DBAPI exceptions are not used. The :ref:`exceptions <exceptions>` used correspond to specific SQLite error codes.

Connection Objects
==================

There are no commit or rollback methods. You should use :meth:`Cursor.execute` with `BEGIN` and `COMMIT` or `ROLLBACK` as appropriate. The `SQLite documentation <https://sqlite.org/lang_transaction.html>`_ has more details. In particular note that SQLite does not support nested transactions. You can only start one transaction and will get an error if you try to start another one.

Several methods that are defined in DBAPI to be on the cursor are instead on the Connection object, since this is where SQLite actually stores the information. Doing operations in any other cursor attached to the same Connection object does update their values, and this makes you aware of that.

Cursor Objects
==============

Use :meth:`Cursor.getdescription` instead of description. This information is only obtained on request.

.. _rowcount:

There is no rowcount. Row counts don't make sense in SQLite anyway.
SQLite returns results one row at a time, not calculating the next result row until you ask for it. Consequently getting a rowcount would have to calculate all the result rows and would not reduce the amount of effort needed.

callproc is not implemented as SQLite doesn't support stored procedures.

:meth:`~Cursor.execute` returns the Cursor object and you can use it as an iterator to get the results (if any).

:meth:`~Cursor.executemany` returns the Cursor object and you can use it as an iterator to get the results (if any).

fetchone is not available. Use the cursor as an iterator, or call :meth:`~Cursor.next` to get the next row; it raises StopIteration when there are no more results.

fetchmany is not available. Simply use the cursor as an iterator or call :meth:`~Cursor.next` for however many results you want.

fetchall is available, but not too useful. Simply use the cursor as an iterator, call :meth:`~Cursor.next`, or use list which is less typing::

  all=list(cursor.execute("...."))

nextset is not applicable or implemented.

arraysize is not available as fetchmany isn't.

Neither setinputsizes nor setoutputsize are applicable or implemented.

Type objects
============

None of the date or time methods are available since SQLite 3 does not have a native date or time type. There are `functions <https://sqlite.org/lang_datefunc.html>`_ for manipulating dates and time which are represented as strings or `Julian days <https://en.wikipedia.org/wiki/Julian_day>`_ (floating point number).

Use the standard Python buffer class for BLOBs in Python 2 and the bytes type in Python 3.

Optional DB API Extensions
==========================

rownumber is not available.

Exception classes are not available as attributes of Connection but instead are on the :mod:`apsw` module. See :ref:`exceptions` for more details.

Use :meth:`Cursor.getconnection` to get the associated Connection object from a cursor.

scroll and messages are not available.

The Cursor object supports the iterator protocol and this is the only way of getting information back.
To get the last inserted row id, call :meth:`Connection.last_insert_rowid`. That stores the id from the last insert on any Cursor associated with the Connection. You can also add `select last_insert_rowid() <https://sqlite.org/lang_corefunc.html#last_insert_rowid>`_ to the end of your execute statements::

  for row in cursor.execute("BEGIN; INSERT ... ; INSERT ... ; SELECT last_insert_rowid(); COMMIT"):
     lastrowid=row[0]

There is no errorhandler attribute.

.. _types:

Types
*****

.. currentmodule:: apsw

Read about `SQLite 3 types <https://sqlite.org/datatype3.html>`_. APSW always maintains the correct type for values, and never converts them to something else. Note however that SQLite may convert types based on column affinity as `described <https://sqlite.org/datatype3.html>`_. APSW requires that all values supplied are one of the corresponding Python/SQLite types (or a subclass).

Mapping
=======

* None in Python is NULL in SQLite

* Python int or long is INTEGER in SQLite. The value represented must fit within a 64 bit signed quantity (long long at the C level) or an overflow exception is generated.

* Python's float type is used for REAL in SQLite. (At the C level they are both 8 byte quantities and there is no loss of precision).

* In Python 2, Python's string or unicode is used for TEXT supplied to SQLite and all text returned from SQLite is unicode. For Python 3 only unicode is used.

* For Python 2 the buffer class is used for BLOB in SQLite. In Python 3 the bytes type is used, although you can still supply buffers.

.. _unicode:

Unicode
=======

All SQLite strings are Unicode. The actual binary representations can be UTF8, or UTF16 in either byte order. APSW uses the UTF8 interface to SQLite which results in the binary string representation in your database defaulting to UTF8 as well. All this is totally transparent to your Python code.
Everywhere strings are used (eg as database values, SQL statements, binding names, user defined functions) you can use Unicode strings, and in Python 3 must use Unicode. In Python 2, you can also use the bare Python string class, and APSW will automatically call the unicode converter if any non-ascii characters are present. When returning text values from SQLite, APSW always uses the Python unicode class.

If you don't know much about Unicode then read `Joel's article `_. SQLite does not include conversion from random non-Unicode encodings to or from Unicode. (It does include conversion between 8 bit and 16 bit Unicode encodings.) Python includes `codecs `_ for conversion to or from many different character sets.

If you don't want to use Unicode and instead want simple bytes-in, bytes-out behaviour, then you should only use blobs.

If you want to do manipulation of unicode text such as upper/lower casing or sorting then you need to know about locales. This is because the exact same sequence of characters sorts, upper cases and lower cases differently depending on where you are. As examples, Turkic languages have multiple letter i, German has ß which behaves like ss, and various accents sort differently in different European countries. Fortunately there is a library, `ICU `_, that you can ask to do the right locale specific thing. A default SQLite compilation only deals with the 26 letter Roman alphabet. If you enable ICU with SQLite then you get `good stuff `_. See the :ref:`building` section on how to enable ICU for SQLite with APSW. Note that Python does not currently include ICU support and hence sorting, upper/lower casing etc are limited and do not take locales into account.

In summary, never confuse bytes with strings (which C sadly treats as the same thing). Either always use bytes (and SQLite blobs) for everything or use strings (and SQLite strings) for everything.
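The ß example and the bytes-versus-strings advice above can both be demonstrated with plain Python (no SQLite involved):

```python
# German ß expands to "SS" when upper cased, so case operations are
# not a simple one-to-one character mapping.
assert "Straße".upper() == "STRASSE"

# The same bytes decode to different text under different encodings,
# which is why bytes and strings must never be confused.
data = "café".encode("utf-8")
assert data.decode("utf-8") == "café"
assert data.decode("latin-1") == "cafÃ©"   # mojibake from the wrong codec
```

Python's upper() applies the full Unicode case mapping but still ignores locale, which is why locale-sensitive collation and casing need ICU as described above.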
If you take the latter approach and have to deal with external input/output then you must know what encodings are being used, and it is best to convert to Unicode as early as possible on input and as late as possible on output.

Download
********

.. _source_and_binaries:

Source and binaries
===================

You can download this release as binaries for Windows. Just run the executable corresponding to the Python version you are using. The Windows binaries all include the :ref:`FTS ` and :ref:`RTree ` extensions. (`FTS3_PARENTHESIS `_ is on.)

Download in source form for other platforms, or if you want to compile yourself on Windows. See the :ref:`recommended ` way to build or all the :ref:`options available `.

.. downloads-begin

* `apsw-3.8.11.1-r1.zip `__ (Source, includes this HTML Help)

* Windows Python 2.3 `32bit `__
* Windows Python 2.4 `32bit `__
* Windows Python 2.5 `32bit `__
* Windows Python 2.6 `32bit `__ `64bit `__
* Windows Python 2.7 `32bit `__ `64bit `__
* Windows Python 3.1 `32bit `__ `64bit `__
* Windows Python 3.2 `32bit `__ `64bit `__
* Windows Python 3.3 `32bit `__ `64bit `__
* Windows Python 3.4 `32bit `__ `64bit `__

* `apsw-3.8.11.1-r1-sigs.zip `__ GPG signatures for all files

.. downloads-end

Some Linux distributions also have packages, which may trail the SQLite and APSW releases by a year or more. It is also possible to build RPM and DEB packages from the source, although this involves setting up package management tools and various dependencies on your build machine.
+--------------+------------------------------------------------------------------+
| Debian       | Install `python-apsw `__                                         |
+--------------+------------------------------------------------------------------+
| Fedora       | Install `python-apsw `__                                         |
+--------------+------------------------------------------------------------------+
| Ubuntu       | Install `python-apsw `__                                         |
+--------------+------------------------------------------------------------------+
| Ubuntu PPA   | PPA building has been broken for over two years because          |
|              | `Canonical/Ubuntu add a broken flag to the PPA `__.              |
+--------------+------------------------------------------------------------------+
| Gentoo       | Install `dev-python/apsw `_                                      |
+--------------+------------------------------------------------------------------+
| Arch Linux   | Install `python-apsw `__                                         |
+--------------+------------------------------------------------------------------+

.. _verifydownload:

Verifying your download
=======================

Downloads are digitally signed so you can verify they have not been tampered with. Download and extract the zip file of signatures listed above. These instructions are for `GNU Privacy Guard `__. (GPG is installed as standard on most Unix/Linux platforms and can be downloaded for Windows.)

Verify

To verify a file just use --verify specifying the corresponding ``.asc`` filename. This example verifies the source::

  $ gpg --verify apsw-3.8.11.1-r1.zip.asc
  gpg: Signature made ... date ... using DSA key ID 0DFBD904
  gpg: Good signature from "Roger Binns "

If you get a "good signature" then the file has not been tampered with and you are good to go.
Getting the signing key

You may not have the signing key available, in which case the last line will be something like this::

  gpg: Can't check signature: public key not found

You can get a copy of the key using this command::

  $ gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 0DFBD904
  gpg: requesting key 0DFBD904 from hkp server keyserver.ubuntu.com
  gpg: /home/username/.gnupg/trustdb.gpg: trustdb created
  gpg: key 0DFBD904: public key "Roger Binns " imported
  gpg: Total number processed: 1
  gpg:               imported: 1

Repeat the verify step.

Source code control
===================

The source is controlled by Git - start at https://github.com/rogerbinns/apsw

easy_install/pip/pypi
=====================

APSW is **not** available at the Python Package Index (pypi) and hence cannot be installed using easy_install, pip or similar tools. The reason for this is that the tools do not provide a way of passing options to the setup.py included with APSW, and hence there is no way for APSW to know if you want SQLite downloaded, a consistent version of SQLite or the latest, to use a system SQLite instead, to error if a system version is not available, etc. I could pick a sensible default but everyone else using pypi would be disadvantaged or, worse, get undesired behaviour (eg different versions of SQLite depending on when a machine did an install). Additionally the world of Python packaging is going through another series of changes (distutils2 aka packaging) so some solution may come out of that. I'm happy to work with anyone who has a solution to this problem.

.. Automatically generated by code2rst.py
   code2rst.py src/vtable.c doc/vtable.rst
   Edit src/vtable.c not this file!

.. currentmodule:: apsw

.. _virtualtables:

Virtual Tables
**************

`Virtual Tables `__ are a feature introduced in SQLite 3.3.7.
They let a developer provide an underlying table implementation, while still presenting a normal SQL interface to the user. The person writing SQL doesn't need to know or care that some of the tables come from elsewhere.

Some examples of how you might use this:

* Translating to/from information stored in other formats (eg a csv/ini format file)

* Accessing the data remotely (eg you could make a table that backends into Amazon's API)

* Dynamic information (eg currently running processes, files and directories, objects in your program)

* Information that needs reformatting (eg if you have complex rules about how to convert strings to/from Unicode in the dataset)

* Information that isn't relationally correct (eg if you have data that has ended up with duplicate "unique" keys, with code that dynamically corrects it)

* There are other examples on the `SQLite page `__

You need to have 3 types of object. A :class:`module `, a :class:`virtual table ` and a :class:`cursor `. These are documented below. You can also read the `SQLite C method documentation `__. At the C level, they are just one set of methods. At the Python/APSW level, they are split over the 3 types of object. The leading **x** is omitted in Python. You can return SQLite error codes (eg :const:`SQLITE_READONLY`) by raising the appropriate exceptions (eg :exc:`ReadOnlyError`). :meth:`exceptionfor` is a useful helper function to do the mapping.

VTModule class
==============

.. class:: VTModule

.. note:: There is no actual *VTModule* class - it is just shown this way for documentation convenience. Your module instance should implement all the methods documented here.

A module instance is used to create the virtual tables.
Once you have a module object, you register it with a connection by calling :meth:`Connection.createmodule`::

  # make an instance
  mymod=MyModuleClass()

  # register the vtable on connection con
  con.createmodule("modulename", mymod)

  # tell SQLite about the table
  con.cursor().execute("create VIRTUAL table tablename USING modulename('arg1', 2)")

The create step is to tell SQLite about the existence of the table. Any number of tables referring to the same module can be made this way. Note the (optional) arguments which are passed to the module.

.. method:: VTModule.Connect(connection, modulename, databasename, tablename, *args) -> [ sql string, table object ]

  The parameters and return are identical to :meth:`~VTModule.Create`. This method is called when there are additional references to the table. :meth:`~VTModule.Create` will be called the first time and :meth:`~VTModule.Connect` after that.

  The advice is to create caches, generated data and other heavyweight processing in :meth:`~VTModule.Create` calls and then find and reuse that in the subsequent :meth:`~VTModule.Connect` calls.

  The corresponding call is :meth:`VTTable.Disconnect`. If you have a simple virtual table implementation, then just set :meth:`~VTModule.Connect` to be the same as :meth:`~VTModule.Create`::

    class MyModule:

        def Create(self, connection, modulename, databasename, tablename, *args):
            # do lots of hard work

        Connect=Create

.. method:: VTModule.Create(connection, modulename, databasename, tablename, *args) -> [ sql string, table object ]

  Called when a table is first created on a :class:`connection `.

  :param connection: An instance of :class:`Connection`
  :param modulename: The string name under which the module was :meth:`registered `
  :param databasename: The name of the database. This will be ``main`` for directly opened files and the name specified in `ATTACH `_ statements.
  :param tablename: Name of the table the user wants to create.
:param args: Any arguments that were specified in the `create virtual table `_ statement. :returns: A list of two items. The first is a SQL `create table `_ statement. The columns are parsed so that SQLite knows what columns and declared types exist for the table. The second item is an object that implements the :class:`table ` methods. The corresponding call is :meth:`VTTable.Destroy`. VTTable class ============= .. class:: VTTable .. note:: There is no actual *VTTable* class - it is just shown this way for documentation convenience. Your table instance should implement the methods documented here. The :class:`VTTable` object contains knowledge of the indices, makes cursors and can perform transactions. .. _vtablestructure: A virtual table is structured as a series of rows, each of which has the same columns. The value in a column must be one of the `5 supported types `_, but the type can be different between rows for the same column. The virtual table routines identify the columns by number, starting at zero. Each row has a **unique** 64 bit integer `rowid `_ with the :class:`Cursor ` routines operating on this number, as well as some of the :class:`Table ` routines such as :meth:`UpdateChangeRow `. .. method:: VTTable.Begin() This function is used as part of transactions. You do not have to provide the method. .. method:: VTTable.BestIndex(constraints, orderbys) This is a complex method. To get going initially, just return :const:`None` and you will be fine. Implementing this method reduces the number of rows scanned in your table to satisfy queries, but only if you have an index or index like mechanism available. .. note:: The implementation of this method differs slightly from the `SQLite documentation `__ for the C API. You are not passed "unusable" constraints. The argv/constraintarg positions are not off by one. In the C api, you have to return position 1 to get something passed to :meth:`VTCursor.Filter` in position 0. 
With the APSW implementation, you return position 0 to get Filter arg 0, position 1 to get Filter arg 1, etc.

The purpose of this method is to ask if you have the ability to determine if a row meets certain constraints without visiting every row. An example constraint is ``price > 74.99``. In a traditional SQL database, queries with constraints can be sped up `with indices `_. If you return None, then SQLite will visit every row in your table and evaluate the constraint itself.

Your index choice returned from BestIndex will also be passed to the :meth:`~VTCursor.Filter` method on your cursor object. Note that SQLite may call this method multiple times trying to find the most efficient way of answering a complex query.

**constraints**

You will be passed the constraints as a sequence of tuples containing two items. The first item is the column number and the second item is the operation.

Example query: ``select * from foo where price > 74.99 and quantity<=10 and customer='Acme Widgets'``

If customer is column 0, price column 2 and quantity column 5 then the constraints will be::

  (2, apsw.SQLITE_INDEX_CONSTRAINT_GT),
  (5, apsw.SQLITE_INDEX_CONSTRAINT_LE),
  (0, apsw.SQLITE_INDEX_CONSTRAINT_EQ)

Note that you do not get the value of the constraint (ie "Acme Widgets", 74.99 and 10 in this example).

If you do have any suitable indices then you return a sequence the same length as constraints with the members mapping to the constraints in order. Each can be one of None, an integer, or a tuple of an integer and a boolean. Conceptually SQLite is giving you a list of constraints and you are returning a list of the same length describing how you could satisfy each one.

Each list item returned corresponding to a constraint is one of:

None
  This means you have no index for that constraint. SQLite will have to iterate over every row for it.
integer
  This is the argument number for the constraintargs being passed into the :meth:`~VTCursor.Filter` function of your :class:`cursor ` (the values "Acme Widgets", 74.99 and 10 in the example).

(integer, boolean)
  By default SQLite will check what you return. For example if you said that you had an index on price, SQLite will still check that each row you returned is greater than 74.99. If you set the boolean to False then SQLite won't do that double checking.

Example query: ``select * from foo where price > 74.99 and quantity<=10 and customer=='Acme Widgets'``. customer is column 0, price column 2 and quantity column 5. You can index on customer equality and price.

+----------------------------------------+--------------------------------+
| Constraints (in)                       | Constraints used (out)         |
+========================================+================================+
| ::                                     | ::                             |
|                                        |                                |
|  (2, apsw.SQLITE_INDEX_CONSTRAINT_GT), |  1,                            |
|  (5, apsw.SQLITE_INDEX_CONSTRAINT_LE), |  None,                         |
|  (0, apsw.SQLITE_INDEX_CONSTRAINT_EQ)  |  0                             |
|                                        |                                |
+----------------------------------------+--------------------------------+

When your :meth:`~VTCursor.Filter` method in the cursor is called, constraintarg[0] will be "Acme Widgets" (customer constraint value) and constraintarg[1] will be 74.99 (price constraint value).

You can also return an index number (integer) and index string to use. SQLite attaches no significance to these values - they are passed as is to your :meth:`VTCursor.Filter` method, as a way for the BestIndex method to let the :meth:`~VTCursor.Filter` method know which of your indices or similar mechanism to use.

**orderbys**

The second argument to BestIndex is a sequence of orderbys, because the query requested the results in a certain order. If your data is already in that order then SQLite can give the results back as is. If not, then SQLite will have to sort the results first.
Example query: ``select * from foo order by price desc, quantity asc``

Price is column 2, quantity column 5, so orderbys will be::

  (2, True),  # True means descending, False is ascending
  (5, False)

**Return**

You should return up to 5 items. Items not present in the return have a default value.

0: constraints used (default None)
  This must either be None or a sequence the same length as the constraints passed in. Each item should be as specified above, saying if that constraint is used, and if so which constraintarg position should receive the value in your :meth:`VTCursor.Filter` function.

1: index number (default zero)
  This value is passed as is to :meth:`VTCursor.Filter`

2: index string (default None)
  This value is passed as is to :meth:`VTCursor.Filter`

3: orderby consumed (default False)
  Return True if your output will be in exactly the same order as the orderbys passed in

4: estimated cost (default a huge number)
  Approximately how many disk operations are needed to provide the results. SQLite uses the cost to optimise queries. For example if the query includes *A or B*, and A has 2,000 operations and B has 100, then it is best to evaluate B before A.

**A complete example**

Query is ``select * from foo where price>74.99 and quantity<=10 and customer=="Acme Widgets" order by price desc, quantity asc``. Customer is column 0, price column 2 and quantity column 5. You can index on customer equality and price.
::

  BestIndex(constraints, orderbys)

  constraints= ( (2, apsw.SQLITE_INDEX_CONSTRAINT_GT),
                 (5, apsw.SQLITE_INDEX_CONSTRAINT_LE),
                 (0, apsw.SQLITE_INDEX_CONSTRAINT_EQ) )

  orderbys= ( (2, True), (5, False) )

  # You return

  ( (1, None, 0),   # constraints used
    27,             # index number
    "idx_pr_cust",  # index name
    False,          # results are not in orderbys order
    1000            # about 1000 disk operations to access index
  )

  # Your Cursor.Filter method will be called with:

  27,              # index number you returned
  "idx_pr_cust",   # index name you returned
  "Acme Widgets",  # constraintarg[0] - customer
  74.99            # constraintarg[1] - price

.. method:: VTTable.Commit()

  This function is used as part of transactions. You do not have to provide the method.

.. method:: VTTable.Destroy()

  The opposite of :meth:`VTModule.Create`. This method is called when the table is no longer used. Note that you must always release resources even if you intend to return an error, as it will not be called again on error. SQLite may also :cvstrac:`leak memory <2099>` if you return an error.

.. method:: VTTable.Disconnect()

  The opposite of :meth:`VTModule.Connect`. This method is called when a reference to a virtual table is no longer used, but :meth:`VTTable.Destroy` will be called when the table is no longer used.

.. method:: VTTable.FindFunction(name, nargs)

  Called to find if the virtual table has its own implementation of a particular scalar function. You should return the function if you have it, else return None. You do not have to provide this method.

  This method is called while SQLite is `preparing `_ a query. If a query is in the :ref:`statement cache ` then *FindFunction* won't be called again. If you want to return different implementations for the same function over time then you will need to disable the :ref:`statement cache `.

  :param name: The function name
  :param nargs: How many arguments the function takes

  .. seealso::

    * :meth:`Connection.overloadfunction`

.. method:: VTTable.Open()

  Returns a :class:`cursor ` object.

.. method:: VTTable.Rename(newname)

  Notification that the table will be given a new name. If you return without raising an exception, then SQLite renames the table (you don't have to do anything). If you raise an exception then the renaming is prevented. You do not have to provide this method.

.. method:: VTTable.Rollback()

  This function is used as part of transactions. You do not have to provide the method.

.. method:: VTTable.Sync()

  This function is used as part of transactions. You do not have to provide the method.

.. method:: VTTable.UpdateChangeRow(row, newrowid, fields)

  Change an existing row. You may also need to change the rowid - for example if the query was ``UPDATE table SET rowid=rowid+100 WHERE ...``

  :param row: The existing 64 bit integer rowid
  :param newrowid: If not the same as *row* then also change the rowid to this.
  :param fields: A tuple of values the same length and order as columns in your table

.. method:: VTTable.UpdateDeleteRow(rowid)

  Delete the row with the specified *rowid*.

  :param rowid: 64 bit integer

.. method:: VTTable.UpdateInsertRow(rowid, fields) -> newrowid

  Insert a row with the specified *rowid*.

  :param rowid: :const:`None` if you should choose the rowid yourself, else a 64 bit integer
  :param fields: A tuple of values the same length and order as columns in your table

  :returns: If *rowid* was :const:`None` then return the id you assigned to the row. If *rowid* was not :const:`None` then the return value is ignored.

VTCursor class
==============

.. class:: VTCursor

.. note:: There is no actual *VTCursor* class - it is just shown this way for documentation convenience. Your cursor instance should implement all the methods documented here.

The :class:`VTCursor` object is used for iterating over a table. There may be many cursors simultaneously, so each one needs to keep track of where it is.

.. seealso:: :ref:`Virtual table structure `

.. method:: VTCursor.Close()

  This is the destructor for the cursor. Note that you must clean up.
The method will not be called again if you raise an exception.

.. method:: VTCursor.Column(number)

  Requests the value of the specified column *number* of the current row. If *number* is -1 then return the rowid.

  :returns: Must be one of the :ref:`5 supported types `

.. method:: VTCursor.Eof() -> bool

  Called to ask if we are at the end of the table. It is called after each call to Filter and Next.

  :returns: False if the cursor is at a valid row of data, else True

  .. note:: This method can only return True or False to SQLite. If you have an exception in the method or provide a non-boolean return then True (no more data) will be returned to SQLite.

.. method:: VTCursor.Filter(indexnum, indexname, constraintargs)

  This method is always called first, to initialize an iteration to the first row of the table. The arguments come from the :meth:`~VTTable.BestIndex` method in the :class:`table ` object, with constraintargs being a tuple of the constraints you requested. If you always return None in BestIndex then indexnum will be zero, indexname will be None and constraintargs will be empty.

.. method:: VTCursor.Next()

  Move the cursor to the next row. Do not raise an exception if there is no next row. Instead return False when :meth:`~VTCursor.Eof` is subsequently called.

  If you said you had indices in your :meth:`VTTable.BestIndex` return, and they were selected for use as provided in the parameters to :meth:`~VTCursor.Filter`, then you should move to the next appropriate indexed and constrained row.

.. method:: VTCursor.Rowid() -> 64 bit integer

  Return the current rowid.

Troubleshooting virtual tables
==============================

Virtual Tables are a relatively recent addition to SQLite and haven't been widely used yet. They do work well if all your routines work perfectly.
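The cursor call sequence described above (Filter first, then an Eof/Column/Next loop) can be sketched as a stand-alone pure-Python class. ``ListCursor`` is a hypothetical name for illustration; here it is driven by hand rather than being returned from a table's Open() and driven by SQLite.

```python
class ListCursor:
    """Minimal sketch of the VTCursor methods over an in-memory list of
    rows, where each row is (rowid, col0, col1, ...).  Hypothetical
    example - in real use SQLite calls these methods itself."""

    def __init__(self, rows):
        self.rows = rows
        self.pos = 0

    def Filter(self, indexnum, indexname, constraintargs):
        # No index support: ignore the arguments and restart iteration
        self.pos = 0

    def Eof(self):
        # True once we have moved past the last row
        return self.pos >= len(self.rows)

    def Rowid(self):
        return self.rows[self.pos][0]

    def Column(self, number):
        if number == -1:          # -1 means the rowid
            return self.Rowid()
        return self.rows[self.pos][1 + number]

    def Next(self):
        self.pos += 1             # no exception at the end; Eof() reports it

    def Close(self):
        self.rows = None          # release resources, even on error paths


# Drive it the way SQLite would for a full scan
cur = ListCursor([(1, "a"), (2, "b")])
cur.Filter(0, None, ())
out = []
while not cur.Eof():
    out.append((cur.Rowid(), cur.Column(0)))
    cur.Next()
cur.Close()
```

The loop at the end mirrors SQLite's behaviour of calling Eof after Filter and after every Next.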
A big help is using the local variables recipe as described in :ref:`augmented stack traces `, which will give you more details in errors, and shows an example with the complex :meth:`~VTTable.BestIndex` function.

You may also find errors compounding. For example if you have an error in the Filter method of a cursor, SQLite then closes the cursor. If you also return an error in the Close method then the first error may mask the second or vice versa.

.. note:: SQLite may ignore responses from your methods if they don't make sense. For example in BestIndex, if you set multiple arguments to have the same constraintargs position then your Filter won't receive any constraintargs at all.

.. currentmodule:: apsw

.. _shell:

Shell
*****

The shell provides a convenient way for you to interact with SQLite, perform administration and supply SQL for execution. It is modelled after the `shell that comes with SQLite `__, which requires separate compilation and installation. A number of the quirks and bugs in the SQLite shell are also addressed. It provides command line editing and completion. You can easily include it into your own program to provide SQLite interaction and add your own commands.

Commands
========

In addition to executing SQL, these are the commands available with their short help description. Use `.help *command*` (eg `.help autoimport`) to get more detailed information.

.. help-begin:

.. code-block:: text

  .autoimport FILENAME ?TABLE?  Imports filename creating a table and automatically working out separators and data types (alternative to .import command)
  .backup ?DB? FILE             Backup DB (default "main") to FILE
  .bail ON|OFF                  Stop after hitting an error (default OFF)
  .colour SCHEME                Selects a colour scheme from default, off
  .databases                    Lists names and files of attached databases
  .dump ?TABLE? [TABLE...]
Dumps all or specified tables in SQL text format .echo ON|OFF If ON then each SQL statement or command is printed before execution (default OFF) .encoding ENCODING Set the encoding used for new files opened via .output and imports .exceptions ON|OFF If ON then detailed tracebacks are shown on exceptions (default OFF) .exit Exit this program .explain ON|OFF Set output mode suitable for explain (default OFF) .find what ?TABLE? Searches all columns of all tables for a value .header(s) ON|OFF Display the column names in output (default OFF) .help ?COMMAND? Shows list of commands and their usage. If COMMAND is specified then shows detail about that COMMAND. ('.help all' will show detailed help about all commands.) .import FILE TABLE Imports separated data from FILE into TABLE .indices TABLE Lists all indices on table TABLE .load FILE ?ENTRY? Loads a SQLite extension library .mode MODE ?TABLE? Sets output mode to one of column csv html insert json line list python tabs tcl .nullvalue STRING Print STRING in place of null values .output FILENAME Send output to FILENAME (or stdout) .print STRING print the literal STRING .prompt MAIN ?CONTINUE? Changes the prompts for first line and continuation lines .quit Exit this program .read FILENAME Processes SQL and commands in FILENAME (or Python if FILENAME ends with .py) .restore ?DB? FILE Restore database from FILE into DB (default "main") .schema ?TABLE? [TABLE...] Shows SQL for table .separator STRING Change separator for output mode and .import .show Show the current values for various settings. .tables ?PATTERN? Lists names of tables matching LIKE pattern .timeout MS Try opening locked tables for MS milliseconds .timer ON|OFF Control printing of time and resource usage after each query .width NUM NUM ... Set the column widths for "column" mode .. help-end: Command Line Usage ================== You can use the shell directly from the command line. 
Invoke it like this:: $ python -c "import apsw;apsw.main()" [options and arguments] The following command line options are accepted: .. usage-begin: .. code-block:: text Usage: program [OPTIONS] FILENAME [SQL|CMD] [SQL|CMD]... FILENAME is the name of a SQLite database. A new database is created if the file does not exist. OPTIONS include: -init filename read/process named file -echo print commands before execution -[no]header turn headers on or off -bail stop after hitting an error -interactive force interactive I/O -batch force batch I/O -column set output mode to 'column' -csv set output mode to 'csv' -html set output mode to 'html' -line set output mode to 'line' -list set output mode to 'list' -python set output mode to 'python' -separator 'x' set output field separator (|) -nullvalue 'text' set text string for NULL values -version show SQLite version -encoding 'name' the encoding to use for files opened via .import, .read & .output -nocolour disables colour output to screen .. usage-end: Notes ===== To interrupt the shell press Control-C. (On Windows if you press Control-Break then the program will be instantly aborted.) For Windows users you won't have command line editing and completion unless you install a `readline module `__. Fortunately there is one at http://ipython.scipy.org/moin/PyReadline/Intro which works. However if this :class:`Shell` offers no completions it will start matching filenames even if they make no sense in the context. For Windows users you won't get colour output unless you install `colorama `__ Example ======= All examples of using the SQLite shell should work as is, plus you get extra features and functionality like colour, command line completion and better dumps. You can also use the shell programmatically (or even interactively and programmatically at the same time). See the :ref:`example ` for using the API. Unicode ======= SQLite only works with `Unicode `__ strings. 
All data supplied to it should be Unicode and all data retrieved is Unicode. (APSW functions the same way because of this.) At the technical level there is a difference between bytes and characters. Bytes are how data is stored in files and transmitted over the network. In order to turn bytes into characters and characters into bytes an encoding has to be used. Some example encodings are ASCII, UTF-8, ISO8859-1, SJIS etc. (With the exception of UTF-8/16/32, other encodings can only map a very small subset of Unicode.) If the shell reads data that is not valid for the input encoding or cannot convert Unicode to the output encoding then you will get an error. When the shell starts Python automatically detects the encodings to use for console input and output. (For example on Unix like systems the LC_CTYPE environment variable is sometimes used. On Windows it can find out the `code page `__.) You can override this autodetection by setting the PYTHONIOENCODING environment variable. There is also a .encoding command. This sets what encoding is used for any subsequent .read, .import and .output commands but does not affect existing open files and console. When other programs offer you a choice for encoding the best value to pick is UTF8 as it allows full representation of Unicode. In addition to specifying the encoding, you can also specify the error handling when a character needs to be output but is not present in the encoding. The default is 'strict' which results in an error. 'replace' will replace the character with '?' or something similar while 'xmlcharrefreplace' uses xml entities. To specify the error handling add a colon and error after the encoding - eg:: .encoding iso-8859-1:replace The same method is used when setting PYTHONIOENCODING. This `Joel on Software article `__ contains an excellent overview of character sets, code pages and Unicode. Shell class =========== This is the API should you want to integrate the code into your shell. 
Not shown here are the functions that implement the various commands. They are named after the command. For example .exit is implemented by command_exit. You can add new commands by having your subclass provide the relevant functions. The doc string of the function is used by the help command.

Output modes work in a similar way. For example there is an output_html method, and again doc strings are used by the help function; you add more by just implementing an appropriately named method.

Note that in addition to extending the shell, you can also use the **.read** command supplying a filename with a **.py** extension. You can then `monkey patch `__ the shell as needed.

.. autoclass:: apsw.Shell
  :members:
  :undoc-members:

Change History
**************

.. currentmodule:: apsw

3.8.11.1-r1
===========

Added SQLITE_FCNTL_RBU and SQLITE_FCNTL_ZIPVFS constants.

setup's fetch command can now get arbitrary fossil versions. For example specify ``fossil-e596a6b6``.

Update tests due to a change in Python 3.5 (exception returned with invalid strings for system calls changed from TypeError to ValueError).

Adjusted some internal detection related to the :func:`fork checker `

3.8.10.1-r1
===========

Added deterministic parameter to :func:`Connection.createscalarfunction` (:issue:`187`)

Switched to new SQLite API returning 64 bit values for :func:`status` (:issue:`191`)

3.8.9-r1
========

Fixed column description caching which could be preserved between multiple statements in the same execution (:issue:`186`)

Updated documentation building tool to use new database of information from the SQLite site. This is simpler and more reliable. (Previously used site scraping.)

Added SQLITE_AUTH_USER, SQLITE_FCNTL_LAST_ERRNO, SQLITE_FCNTL_WAL_BLOCK, SQLITE_FCNTL_GET_LOCKPROXYFILE, and SQLITE_FCNTL_SET_LOCKPROXYFILE constants.
Removed SQLITE_GET_LOCKPROXYFILE, SQLITE_SET_LOCKPROXYFILE, SQLITE_LAST_ERRNO file control constants. These are deprecated - use the versions with FCNTL in their name (eg SQLITE_FCNTL_GET_LOCKPROXYFILE).

Added :ref:`mappings ` for conflict resolution modes, virtual table configuration options and xShmLock VFS flags.

3.8.8.2-r1
==========

No APSW changes.

3.8.8.1-r1
==========

The column description is now cached on first request during a query so getting it is quick if called for every row.

Added SQLITE_CONFIG_PCACHE_HDRSZ and SQLITE_CONFIG_PMASZ constants, and support for them in :func:`config`.

Added SQLITE_CHECKPOINT_TRUNCATE constant.

Update year in various places to 2015.

3.8.7.3-r1
==========

No APSW changes.

3.8.7.2-r1
==========

Fixed parsing of icu-config flags

3.8.7.1-r1
==========

Added SQLITE_LIMIT_WORKER_THREADS constant

3.8.6-r1
========

Updated test suite for Python 3.4 unittest garbage collection changes (:issue:`164` :issue:`169`)

Using the :ref:`recommended ` build option of **--enable-all-extensions** turns on `STAT4 `__. Windows binaries include this too.

3.8.5-r1
========

Added SQLITE_IOCAP_IMMUTABLE and SQLITE_FCNTL_WIN32_SET_HANDLE constants.

3.8.4.3-r1
==========

Added :meth:`Cursor.fetchone`

3.8.4.2-r1
==========

No APSW code changes. Rebuild due to updated SQLite version.

3.8.4.1-r1
==========

Windows 64 bit binary builds for Python 3.3+ are back - thanks to Mike C. Fletcher for `pointing the way `__

Correct detection of current SQLite version from download page for :doc:`setup.py ` fetch command

Tested against Python 3.4 and binaries for Windows.

3.8.3.1-r1
==========

Updated :doc:`shell` completions for keywords, functions and pragmas.

3.8.3-r1
========

APSW is now hosted at Github - https://github.com/rogerbinns/apsw

Added SQLITE_RECURSIVE, SQLITE_READONLY_DBMOVED, SQLITE_FCNTL_COMMIT_PHASETWO, SQLITE_FCNTL_HAS_MOVED and SQLITE_FCNTL_SYNC constants.
3.8.2-r1
========

Added SQLITE_CONFIG_WIN32_HEAPSIZE, SQLITE_CONSTRAINT_ROWID and SQLITE_FCNTL_TRACE constants.

3.8.1-r1
========

Added SQLITE_CANTOPEN_CONVPATH and SQLITE_IOERR_CONVPATH extended error codes.

Updated pysqlite urls to point to github.

Various minor build/download documentation updates.

3.8.0.2-r1
==========

No APSW code changes. Rebuild due to updated SQLite version.

Updated documentation tips to show how to :ref:`get detailed diagnostics `.

3.8.0.1-r1
==========

No APSW changes. Rebuild due to updated SQLite version. Windows binaries for Python 3.3 64 bit are no longer available as a Visual Studio update obliterated the ability to compile them, and I have no patience left to fight Microsoft's tools.

3.8.0-r2
========

No APSW changes - updated checksums because SQLite changed the released archive to address an autoconf issue on some platforms

3.8.0-r1
========

Windows binaries for Python 3.3 64 bit are now available after managing to get several pieces of Microsoft software to cooperate.

Fixed shell dump issue when system routines (eg timestamp, username, hostname) couldn't automatically be promoted to unicode. They are used in comments in the output. (:issue:`142`)

Added SQLITE_DBSTATUS_DEFERRED_FKS, SQLITE_IOERR_GETTEMPPATH, SQLITE_WARNING_AUTOINDEX and SQLITE_BUSY_SNAPSHOT constants.

3.7.17-r1
=========

Removed tests that checked directly calling VFS read/write with negative offsets or amounts returns errors. This version of SQLite no longer returns errors in those circumstances and typically crashes instead.

Various new constants.

3.7.16.2-r1
===========

No APSW changes - just a binary rebuild. Windows users are recommended to upgrade their SQLite version.

3.7.16.1-r1
===========

Updated tables of functions and pragmas in the :doc:`shell` to match current SQLite version.
3.7.16-r1
=========

Adjust to different SQLite download URLs

Added SQLITE_CONSTRAINT_* and SQLITE_READONLY_ROLLBACK `extended error codes `__

Removed CouchDB virtual table

3.7.15.2-r1
===========

No APSW changes - binary rebuild to pickup new SQLite version

3.7.15.1-r1
===========

Use https (SSL) for SQLite web site references (downloads and documentation links). On some platforms/versions/SSL libraries, Python's SSL module `doesn't work `__ with the SQLite website so a fallback to http is used - the downloads still have their checksum verified.

3.7.15-r1
=========

Work around changed semantics for error handling when the VFS xDelete method is asked to delete a file that does not exist.

Completely removed all `AsyncVFS `__ related code. This extension is `no longer maintained nor supported `__ by the SQLite team. `WAL `__ is a good way of getting similar functionality.

Added :func:`config` support for SQLITE_CONFIG_COVERING_INDEX_SCAN.

Added several new constants: SQLITE_CONFIG_COVERING_INDEX_SCAN, SQLITE_CONFIG_SQLLOG, SQLITE_FCNTL_BUSYHANDLER, SQLITE_FCNTL_TEMPFILENAME, SQLITE_CANTOPEN_FULLPATH, SQLITE_IOERR_DELETE_NOENT

3.7.14.1-r1
===========

Updated setup and test suite so that all files are explicitly closed instead of relying on garbage collection.

Added Windows binaries for Python 3.3. (Only 32 bit as Python doesn't provide a free way of making 64 bit Windows binaries.)

Updated setup.py to work with changed SQLite download page formatting when detecting latest version.

Due to a `Python 3.3.0 regression bug `__ using the csv output mode in the shell can result in bad data or Python crashing. The bug has been fixed for Python 3.3.1 which is due in November 2012.

3.7.14-r2
=========

Fixed an issue with the GIL in the destructor for functions. The bug would be encountered if you create a function with the same name as an existing function and are using an upcoming version of Python (eg 2.7.4).
Thanks to Arfrever Frehtes Taifersar Arahesis for finding it (:issue:`134`).

Added shell .print command to match upcoming SQLite shell changes.

3.7.14-r1
=========

Added support for :meth:`Connection.status` (calls `sqlite3_db_status `__).

The legacy Windows `Compiled Help Format `__ documentation is no longer produced - the help compiler setup program can't cope with modern machines.

3.7.13-r1
=========

Do not free a structure on failure to register a virtual table module as SQLite does that anyway.

Added SQLITE_OPEN_MEMORY constant.

3.7.12.1-r1
===========

No changes to APSW. Binary rebuilds due to SQLite bugfixes.

3.7.12-r1
=========

Re-enabled the asyncvfs.

Added :attr:`Cursor.description` to make DB API interoperability a little easier (:issue:`131`).

Added SQLITE_DBSTATUS_CACHE_WRITE and SQLITE_CANTOPEN_ISDIR constants.

3.7.11-r1
=========

Added SQLITE_ABORT_ROLLBACK and SQLITE_FCNTL_PRAGMA constants.

Added :meth:`Connection.readonly`.

Changed :attr:`Connection.filename` which used to return the string used to open the database and now returns the absolute pathname.

Added :meth:`Connection.db_filename`.

3.7.10-r1
=========

The default sector size returned in VFS routines is 4,096 to match SQLite's new default.

Several links to SQLite tickets and documentation were updated (:issue:`122`).

The async vfs is disabled due to a bug in its code that leads to random memory reads when dealing with filenames.

Added SQLITE_CONFIG_GETPCACHE2, SQLITE_FCNTL_POWERSAFE_OVERWRITE, SQLITE_FCNTL_VFSNAME and SQLITE_IOCAP_POWERSAFE_OVERWRITE constants.

Fix shell dumping when SQLite doesn't strip trailing comments from view declarations (`discussed here `__)

Added a :class:`URIFilename` class to encapsulate how SQLite provides URI parameters to VFS routines (:issue:`124`).

Compatibility break: Depending on flags your VFS xOpen method may get a :class:`URIFilename` or a string for the filename. You can still pass either to the :class:`VFSFile`.
Compatibility break: The :doc:`vfs` code used to always run strings you provided through :meth:`VFS.xFullPathname`. This isn't possible with URI pathnames so that code has been removed. If you construct filenames for :meth:`VFS.xOpen` directly (ie bypassing the SQLite database open call) then you must call :meth:`VFS.xFullPathname` yourself first to ensure relative pathnames are turned into absolute pathnames. The SQLite API guarantees that filenames passed to :meth:`VFS.xOpen` are exactly what was returned from :meth:`VFS.xFullPathname`.

3.7.9-r1
========

Added SQLITE_DBSTATUS_CACHE_HIT, SQLITE_DBSTATUS_CACHE_MISS and SQLITE_FCNTL_OVERWRITE constants.

3.7.8-r1
========

Updated documentation and tests due to an undocumented change in VFS xDelete semantics.

Added SQLITE3_FCNTL_PERSIST_WAL and SQLITE3_FCNTL_WIN32_AV_RETRY `file controls `__.

Wrapped sqlite3_sourceid (:issue:`120`)

3.7.7.1-r1
==========

Added `SQLITE_CONFIG_URI `__ and support for it in :meth:`config`, and the open flag `SQLITE_OPEN_URI `__. This makes it easy to use `URI filenames `__.

The :ref:`shell` now uses `URI filenames `__ by default.

New `extended error constants `__: SQLITE_CORRUPT_VTAB, SQLITE_IOERR_SEEK, SQLITE_IOERR_SHMMAP, SQLITE_READONLY_CANTLOCK and SQLITE_READONLY_RECOVERY.

64 bit platforms (`LP64 - most non-Windows `__) and Python 2: The Python int type is returned for 64 bit integers instead of Python long type.

3.7.6.3-r1
==========

When invoking the shell by calling :func:`apsw.main` it will not become interactive if you supply SQL commands as command line arguments. This is to have the same behaviour as the SQLite shell (:issue:`115`).

The shell has a *.find* command making it easy to search for values across all columns of some or all tables.

The shell has a *.autoimport* command making it easy to import a data file automatically deducing separators, column names and data types.

Detect attempted use of a cursor as input data for itself.
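The xFullPathname requirement described above amounts to resolving a possibly relative filename into an absolute one before it reaches xOpen. For illustration only, a standard-library sketch (the helper name is hypothetical, not APSW API):

```python
import os

def xFullPathname_sketch(name):
    """Sketch of what a simple VFS xFullPathname does: turn a possibly
    relative filename into an absolute one.  A real VFS might also
    normalise symlinks or apply its own path rules."""
    return os.path.abspath(name)

# A relative name becomes absolute, an absolute name is unchanged
print(xFullPathname_sketch("mydb.sqlite"))
```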
3.7.6.2-r1
==========

Fixed :issue:`117` where the shell could report an I/O error on changing output target for some operating systems. Thanks to Edzard Pasma for finding and diagnosing this.

Added support for VFS version 3 which allows redirecting :meth:`system calls ` used by some VFS implementations (eg for testing or sandboxing).

:exc:`NotFoundError` exception added.

Added :meth:`Connection.config`.

Updated :meth:`Connection.wal_checkpoint` to use `sqlite3_wal_checkpoint_v2 `__ which provides more fine grained control over checkpointing and returns useful information.

3.7.5-r1
========

Backwards incompatible change in SQLite 3.7.5 for handling of :meth:`~VFSFile.xFileControl`. If you implement this method in a VFS then you must return True or False to indicate if the operation was understood. :meth:`Connection.filecontrol` now returns that value. (Previously you could not tell the difference between an op being understood and an error resulting, or the op not being understood at all.)

Windows Python 3.2 binaries now available.

3.7.4-r1
========

Binary downloads for Windows 64 bit Python versions 2.6 and above including Python 3 are now available.

:meth:`apsw.softheaplimit` now uses `sqlite3_soft_heap_limit64 `__ so you can provide values larger than 2GB. It is now also able to return the previous value instead of None.

Improve getting shell timer information for 64 bit Windows.

:meth:`blob.reopen` is implemented.

FTS4 is enabled and in the binary builds. Note that it is an augmentation of FTS3 rather than totally separate code and described in the `SQLite documentation `__.

3.7.3-r1
========

You can read blobs into pre-existing buffers using :meth:`blob.readinto`. (This is more efficient than allocating new buffers as :meth:`blob.read` does and then copying.) (:issue:`109`).

Fixed bug with unicode output in CSV mode in the shell.
`sqlite_create_function_v2 `__ now means that some housekeeping APSW did can be pushed back onto SQLite, with the consequent deletion of some code.

3.7.2-r1
========

No changes to APSW. Upgrading to this version of SQLite is `recommended `__.

3.7.1-r1
========

Updated various constants including `SQLITE_FCNTL_CHUNK_SIZE `__ used with :meth:`Connection.filecontrol`.

Fixed Unicode output with some file objects from the shell (:issue:`108`).

With the shell, you can specify handling of characters not present in the output encoding (eg replace to use '?' or similar, ignore, xmlcharrefreplace etc). For example::

  .encoding cp437:replace

3.7.0.1-r1
==========

Fixed issue when using a tracer and a context manager fails to commit.

3.7.0-r1
========

Added several new constants.

`Write Ahead Logging `__ is :ref:`supported `. You can make all databases automatically use WAL mode if available by using :ref:`connection hooks `.

Added :meth:`format_sql_value` for generating a SQL syntax string from a value. This is implemented in C and is significantly faster than doing the same formatting in Python.

Using the above function and other tweaks the :ref:`shell` dumper is now three to four times faster. Thanks to Nikolaus Rath for pointing out the problem and providing test data.

The shell now does colour highlighting making it easy to visually distinguish prompts, errors, headers and value types when outputting to a terminal. See the :option:`--no-colour` argument and **.colour** command. Those of you in the two countries that have not adopted the metric system may also omit the 'u'. For Windows users you won't get colour output unless you install `colorama `__

When using the context manager (with statement) of a :class:`Connection` and the exit commit had an error, then the transaction is rolled back. This could occur if SQLite had buffered the transaction entirely in memory and a non-eager transaction lock had been obtained. Thanks to Geoff Ness for finding the problem. (:issue:`98`).
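As a rough illustration of what the :meth:`format_sql_value` function mentioned in the 3.7.0-r1 entry produces (the real implementation is in C and handles more types; this pure-Python sketch only covers None, numbers and strings, and the function name is ours):

```python
def format_sql_value_sketch(v):
    """Rough Python equivalent of formatting a value as a SQL literal:
    None becomes NULL, strings are quoted with embedded quotes doubled,
    numbers pass through."""
    if v is None:
        return "NULL"
    if isinstance(v, str):
        return "'" + v.replace("'", "''") + "'"
    if isinstance(v, (int, float)):
        return repr(v)
    raise TypeError("unhandled type: %r" % (v,))

print(format_sql_value_sketch("it's"))  # 'it''s'
print(format_sql_value_sketch(None))    # NULL
print(format_sql_value_sketch(42))      # 42
```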
Fixed bug when an error is returned creating an instance of a virtual table (eg an invalid column name). Before the fix you would get the previous error message or a crash. Thanks to Jose Gomes for finding the problem. :issue:`103`

There is now a PPA for Ubuntu users that is kept up to date with APSW and SQLite at https://launchpad.net/~ubuntu-rogerbinns/+archive/apsw which has the latest SQLite embedded statically inside (ie system SQLite is ignored) and has all the extensions enabled: FTS3, RTree, ICU, asyncvfs

If you open VFS files directly then the filename is always run through xFullPathname first. SQLite guarantees this behaviour but the existing VFS code was not doing that for direct opens. Opens from SQLite were doing it.

Fixed error where :attr:`apsw.connection_hooks` were being run before the :ref:`statement cache ` was initialised which would result in a crash if any hooks executed SQL code.

3.6.23.1-r1
===========

Shell CSV output under Python 3.1 is corrected (work around Python 3.1 StringIO bug/incompatibility with other Python versions).

Simplified access to the shell's :attr:`database ` from the API.

Added a shell :ref:`example `.

3.6.23-r1
=========

If setup is downloading files and an error occurs then it retries up to 5 times.

Added SQLITE_CONFIG_LOG and SQLITE_OPEN_AUTOPROXY constants.

Added :attr:`compile_options` which tells you what compilation defines SQLite was compiled with.

Added :meth:`log` to call the SQLite logging interface, and updated :meth:`config` so you can set log destination function.

3.6.22-r1
=========

Made it possible to run distutils 'sdist' from an already produced source that was made from 'sdist'. This was necessary for some Python virtual package environments. Note that the recursive result does not include the HTML help as distutils has no way of including content in a reparented location. :issue:`89`

Various settings are output as pragma statements when making a dump such as page size, encoding, auto_vacuum etc.
The pragmas are commented out. :issue:`90`

3.6.21-r1
=========

Source and binary files are now digitally signed which means you can verify they have not been tampered with. See :ref:`verifydownload` for instructions.

The pragmas generated for a shell dump are emitted outside the transaction as they have no effect inside the transaction.

Removed some unintentional logging code left in CouchDB virtual table code.

3.6.20-r1
=========

Support for Python 3.0 has been dropped as it has been `end of lifed `__. Use Python 3.1 onwards.

Changes to how some statements are `prepared `__ to allow the new RANGE and LIKE optimisations with bound variables introduced in SQLite 3.6.20 to be used. See :issue:`85` for the long and gory details.

You can now access `CouchDB `__ using a virtual table. This lets you easily bidirectionally transfer data between SQLite and CouchDB as well as work on data in both sources at the same time. Other example uses are in the documentation.

:ref:`Shell ` changes:

* .dump command now outputs views in the order they were created rather than alphabetical as views could reference each other. :issue:`82`

* .dump command now outputs the `user_version `__ as a comment. It is used by some programs (such as Firefox) to keep track of the schema version.

* Can now output in `JSON `__.

* Fixed :issue:`83` - exception if history file didn't exist

* You can right justify output in column mode by specifying negative widths. :issue:`84`

* You no longer get a traceback doing completions if there is a virtual table in the database but the module is not loaded. :issue:`86`

* You can now get detailed tracebacks including local variables using the ".exception ON" command. This is useful when developing virtual tables and similar functionality.

* You can now terminate a SQL statement with "go" or "/" on a line by itself.
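The user_version pragma mentioned in the .dump notes above is an ordinary SQLite pragma that any binding can read and set; for illustration with the standard library sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# user_version defaults to 0 and is stored in the database header
print(con.execute("PRAGMA user_version").fetchone()[0])  # 0

# Applications (Firefox is the example given above) set it to track
# their schema version
con.execute("PRAGMA user_version = 7")
print(con.execute("PRAGMA user_version").fetchone()[0])  # 7
con.close()
```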
3.6.19-r1
=========

**Backwards incompatible change** Fixed :issue:`72` where APSW wasn't zero basing virtual table :meth:`~VTTable.BestIndex` constraints returned as documented. If you have working BestIndex code then you need to update it for this release. Thanks to Lefteris for finding this issue.

**Backwards incompatible change** The :meth:`~apsw.complete` method has moved from :class:`Connection` to :mod:`apsw` where it should have been all along. You should now call :meth:`apsw.complete` instead. (It even had an example showing it to be part of the module and not a specific connection!)

There is now an :class:`interactive shell ` very similar to that `provided by SQLite `__. You can embed it in your own program, inherit from it to provide more commands and output modes, or just run it like this::

  $ python -c "import apsw ; apsw.main()"

Added the `SQLITE_LIMIT_TRIGGER_DEPTH `__, `SQLITE_OPEN_PRIVATECACHE `__ and `SQLITE_OPEN_SHAREDCACHE `__ constants.

The :file:`setup.py` file now has the various options available made applicable to appropriate commands only. Read the :ref:`updated documentation `.

You can now specify :option:`build --enable=stat2` to :file:`setup.py` to enable `advanced statistics gathering `__ for query planning.

:file:`setup.py` can automatically fetch the :ref:`ext-asyncvfs` extension for you. If the source is present when APSW is built then it will be automatically included and the :meth:`API ` provided.

A :meth:`fork_checker` is available which turns on detection when you have used SQLite objects across a fork (a **very** bad thing). This is possible on Unix like operating systems, especially if you use the `multiprocessing module `__.

Extension loading is now compiled in by default when using the amalgamation and compiled out when using existing libraries. This is more likely to match your machine.
You can use :option:`--omit=load_extension` or :option:`--enable=load_extension` to the build/build_ext commands to explicitly disable/enable extension loading. :issue:`67`

:ref:`setup.py ` will now abort on a download that has no checksum. See :ref:`more information ` on checksums.

:ref:`setup.py ` can also fetch the version of SQLite currently under development before a release. Use :option:`--version=fossil`.

Updated which code uses `experimental SQLite APIs `__ based on changes in SQLite. The test suite will also work correctly with experimental on or off. (It is on by default.)

3.6.18-r1
=========

The APSW license has been updated to allow you (at your option) to use any `OSI approved license `__.

The :ref:`speedtest` has been updated to (optionally) use unicode characters and to (optionally) increase the sizes of data items.

Fixed error handling code to not record SQLite error strings in some situations where it was not necessary. This results in the code executing a little faster.

3.6.17-r1
=========

APSW has migrated from Subversion to `Mercurial `_ for source code control. Hosting remains at `Google Code `_

Updated a test due to VFS xUnlock errors now being ignored sometimes by SQLite (:cvstrac:`3946`).

The downloads page in the help didn't mention the Windows Python 3.1 installer.

Running the test suite is now integrated into :file:`setup.py` so you can do the building and testing all in one go. Sample command line::

  $ python setup.py install test

The test suite will now check the functionality of the FTS3, RTree and ICU extensions if they were included. (The Windows binary distribution includes FTS3 and RTree by default.)

Fixed :issue:`55` where FTS3 was unintentionally omitted from the Windows binary distribution.

Various documentation updates.

3.6.16-r1
=========

Windows binary distribution includes Python 3.1.

Trivial tweaks to keep MSVC happy.
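The download checksum verification described in the 3.6.19-r1 and 3.6.14.1-r1 entries boils down to hashing the received bytes and comparing against a known value. A hypothetical standard-library sketch (the helper name and use of SHA-1 are illustrative assumptions, not setup.py's actual code):

```python
import hashlib

def verify_download(data: bytes, expected_sha1: str) -> bool:
    """Hash the downloaded bytes and compare against the recorded
    checksum.  Returns True only on an exact match."""
    return hashlib.sha1(data).hexdigest() == expected_sha1

# Example: check some bytes against their own freshly computed digest
payload = b"pretend this is a SQLite source archive"
good = hashlib.sha1(payload).hexdigest()
print(verify_download(payload, good))                    # True
print(verify_download(payload + b"tampered", good))      # False
```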
3.6.15-r1
=========

Fixed :issue:`50` where :meth:`blob.read` was returning :const:`None` on end of file instead of the documented (and correct) empty string/bytes.

Corrected spelling of option in :ref:`apswtrace ` and only output CURSORFROM if SQL tracing is on.

3.6.14.2-r1
===========

Updated test code because SQLite 3.6.15 returns a different error code on trying to register a function with too many arguments (see :cvstrac:`3875`).

3.6.14.1-r1
===========

Changed some internal symbol names so they won't clash with similar new ones used by SQLite in the amalgamation.

Added :attr:`apsw.using_amalgamation` so you can tell if APSW was compiled using the `SQLite amalgamation `__. Using the amalgamation means that SQLite shared libraries are not used and will not affect your code.

Added a checksums file so that when :file:`setup.py` downloads SQLite, we know it hasn't been tampered with. (The :ref:`--fetch-sqlite ` argument can be used to automatically download SQLite.)

3.6.13-r1
=========

Added SQLITE_LOCKED_SHAREDCACHE `extended error code `_.

Updated tests as the VFS delete error handling code in SQLite now returns the same high level error code between Windows and non-Windows.

The CHM format help file produced by the Windows HTML Help Compiler is viewable again under Windows HTML Help Viewer.

3.6.11-r1
=========

You can now use the `hot backup functionality `_ introduced in SQLite 3.6.11.

Updated a VFS test to reflect changes in SQLite underlying error handling. (Previously SQLite almost always returned :exc:`FullError` on any write that had an error but now returns :exc:`SQLError`.)

Changed close methods so that Connections can be released earlier.

In prior releases a :meth:`closed cursor ` could still be used (reincarnated). That is no longer the case and you will get :exc:`CursorClosedError`.

3.6.10-r1
=========

You can use the database as a `context manager `_ as defined in :pep:`0343`. When you use *with* a transaction is started.
If the block finishes with an exception then the transaction is rolled back, otherwise it is committed. See :meth:`Connection.__enter__` for an example.

Behind the scenes the `savepoint `_ functionality introduced in SQLite 3.6.8 is used. Consequently :class:`Connection` *with* blocks can be nested. If you use Connection level :ref:`execution tracers ` then they will be called with the savepoint SQL statements.

You can also use :class:`blobs ` as a context manager which ensures it is always closed when finished using it. See :meth:`blob.__enter__` for an example.

Added :ref:`constants `:

* SQLITE_SAVEPOINT (authorizer code)

* SQLITE_IOERR_CLOSE (extended result code)

* SQLITE_IOERR_DIR_CLOSE (extended result code)

* New mapping: SQLITE_FCNTL_LOCKSTATE, SQLITE_GET_LOCKPROXYFILE, SQLITE_SET_LOCKPROXYFILE, SQLITE_LAST_ERRNO. SQLite does not document the purpose of these except the first one.

Updated :ref:`vfs` test code. SQLite's routines that call :meth:`VFSFile.xTruncate` used to ignore errors but now return an error to the caller. :meth:`VFSFile.xFileControl` is now called so a user implemented one must call any base it inherits from for SQLite to function normally.

Updated the xDlSym VFS routine to have the different but compatible type signature as changed in SQLite 3.6.7 to deal with pedantic compiler warnings.

Fixed bug in :ref:`apswtrace ` that could result in poorly formatted times. Leading comments are also stripped for queries printed in the final reports. You can also request subsets of the reports.

The :ref:`speedtest` script will now fallback to the Python builtin sqlite3 module if it can't find an externally installed pysqlite.

3.6.6.2-r1
==========

Windows binary download for Python 3.0 is :ref:`available `.

Various changes in data structures and containers to reduce code size.

Changed the code to handle SQLite errors to only use Python functionality and no operating system functionality (thread local storage).
This also addresses :issue:`36` where Vista was not binary compatible with XP. Thanks to Rudolf Gaertner for assistance in detecting and diagnosing this issue.

:class:`Connections `, :class:`cursors ` and :class:`blobs ` can be used by `weak references `_.

You can now install :class:`Connection` wide :meth:`execution ` and :meth:`row ` :ref:`tracers `.

The callbacks for execution and row tracers have a different signature to include the cursor the execution or row happened on. This is a backwards incompatible change. See :ref:`tracing ` for details.

Due to popular demand, added :meth:`Cursor.fetchall`. This is a longer way of typing ``list(cursor)``.

Added attributes to the :class:`Connection` class - :attr:`~Connection.filename`, :attr:`~Connection.open_flags` and :attr:`~Connection.open_vfs`. These let you track how the database was opened.

Added a :ref:`apswtrace ` script to allow easy SQL tracing without having to modify your code.

Revert to using older SQLite APIs in order to work around :cvstrac:`2158`. (This also saves a little bit of SQLite memory usage). The user visible effect was that you could get different exceptions and error text depending on whether a query was already in the :ref:`statement cache ` or if you were multi-threading. As an example, if you have a query that used an unknown collation then SQLite's `prepare `_ returns :const:`SQLITE_ERROR` with error text about the bad collation. If a query had already been prepared, the collation removed and then `run `_ the new SQLite routines are returning :const:`SQLITE_SCHEMA` and generic ``schema changed`` error text. Changing user defined functions could also cause a previously correct query to become invalid.

3.6.5-r1
========

The distribution now includes a :ref:`speedtest` script. You can use this to see how APSW performs relative to pysqlite, or to track performance differences between SQLite versions.
The underlying queries are derived from `SQLite's speed test `_

The statement cache was completely rewritten. It uses less memory and scales significantly better.

It was possible to get a deadlock between the Python GIL and the SQLite database mutex when using the same :class:`Connection` across multiple threads. Fixed by releasing the GIL in more places and added test that inspects the source to verify GIL/mutex handling. Thanks to amicitas reporting this as :issue:`31`

SQLite's API has been extended in 3.6.5 so that errors can be retrieved in a thread safe manner. APSW now uses this API.

As a consequence of the prior two changes it is now possible and safe to use the same :class:`Connection` across as many threads as you want `concurrently `_.

Documentation is now done using `Sphinx `_ which was adopted by Python 2.6 and 3. This has allowed for richer documentation and more output formats such as PDF and `Windows CHM `_ format.

The binary distribution for Windows includes the `full text search `__ (FTS) and `Rtree `_ extensions. See also :ref:`setup_py_flags`.

The source structure and files were reorganized to make it clearer where things are implemented and to make automatic extraction of documentation easier.

3.6.3-r1
========

You can now write your own :ref:`VFS` in Python. You can also inherit from an existing VFS making it easy to augment or override small bits of behaviour without having to code everything else. See the :ref:`example ` where database files are obfuscated by XORing their contents.

:file:`setup.py` now takes an optional :option:`--fetch-sqlite[=ver]` argument to automatically download and use the latest SQLite amalgamation (or a specified version). On non-Windows platforms it will also work out what compile flags SQLite needs (for example :const:`HAVE_USLEEP`, :const:`HAVE_LOCALTIME_R`). Several other options to :file:`setup.py` are also available to control enabling/omitting certain features and functionality.
See :ref:`building ` for further details.

APSW checks that SQLite was compiled to be `threadsafe `_

Added new constants:

* :const:`SQLITE_IOERR_ACCESS`, :const:`SQLITE_IOERR_CHECKRESERVEDLOCK` and :const:`SQLITE_IOERR_LOCK` extended result codes

* :const:`SQLITE_OPEN_NOMUTEX` and :const:`SQLITE_OPEN_FULLMUTEX` open flags

* Several new :const:`SQLITE_CONFIG` and :const:`SQLITE_STATUS` codes

Wrapped several new SQLite apis:

* `sqlite3_config `_

* `sqlite3_initialize/sqlite3_shutdown `_

* `sqlite3_memory_used/sqlite3_memory_highwater `_

* `sqlite3_status `_

* `sqlite3_soft_heap_limit `_

* `sqlite3_release_memory `_

* `sqlite3_randomness `_

The following experimental apis are not wrapped as there is nothing useful you can do with them (yet):

* `sqlite3_db_config `_

* `sqlite3_db_status `_

Restored prior behaviour regarding Python ints and longs returning int for numbers fitting in signed 32 bit. This only affects Python 2 as Python 3 uses long exclusively. Thanks to Joe Pham for reporting this as :issue:`24`

Added :meth:`Connection.getsqlite3pointer` method to help with :issue:`26`

3.5.9-r2
========

APSW now works with Python 3 (you need 3.0b1 or later). (:issue:`17`)

Removed the :const:`SQLITE_MAX_*` constants since they could be unreliable (eg APSW can't tell what a shared library was compiled with). A workaround is documented in :func:`Connection.limit`.

3.5.9-r1
========

APSW is now hosted at https://code.google.com/p/apsw

You can use this with SQLite 3.5.9 onwards.

SQLite now provides the source all `amalgamated `_ into one file which improves performance and makes compilation and linking of SQLite far easier. The build instructions are updated.
:const:`SQLITE_COPY` authorizer code and :const:`SQLITE_PROTOCOL` error code are no longer used by SQLite, but the values are left in apsw for backwards compatibility

:const:`SQLITE_IOERR_DELETE`, :const:`SQLITE_IOERR_BLOCKED` and :const:`SQLITE_IOERR_NOMEM`

:func:`Connection.interrupt` can be called from any thread

SQLite has implementation limits on string and blob lengths (roughly constrained to fitting within a signed 32 bit integer - less than 2GB) which weren't checked. Using a 64 bit Python 2.5+ (as I do) it would have been possible to destroy memory and crash the program. Consequently APSW has length checks to ensure it doesn't happen. SQLite now has further `limits checking `_ which cover other things as well such as maximum number of virtual machine opcodes, maximum number of variables etc. These are very useful if you are taking in SQL statements from elsewhere. Call :func:`Connection.limit`

A rename method was added for virtual tables.

SQLite 3.5 removed the requirement that all operations on a connection be done in the same thread. Consequently all code that enforced the restriction has been removed from APSW.

You no longer have to call :func:`Connection.close`. This was previously a requirement to ensure that the correct thread was used (destructors otherwise run in any thread). It is however still a good idea to do so since you can catch exceptions when close is called but not if you let the destructor do the closing.

SQLite now has incremental :ref:`blob I/O `

:issue:`4` which could lead to generic error messages was fixed in SQLite 3.5.9.

Fixed :issue:`1` error in example code for virtual tables which caused filename errors on Windows.

Fixed :issue:`15` releasing the GIL around calls to sqlite3_prepare.

Fixed :issue:`7` ensuring that extension module filenames are converted to utf8.

Use the `sqlite3_open_v2 `_ interface which allows specifying which vfs to use.
This release does not allow you to write your own vfs as the SQLite vfs interface is being changed for SQLite 3.6.

Used new SQLite functions that keep track of when virtual tables and collations are no longer used so they can be released. Previously APSW also had to keep track, duplicating effort.

Improved test coverage a few more percent.

The statement cache now defaults to the same number of entries as pysqlite (100). You can however specify more or less as needed.

:func:`Connection.collationneeded` was implemented.

3.3.13-r1
=========

As of this release, APSW is now co-hosted with pysqlite meaning there is one site to go to for your Python SQLite bindings. (Both projects subsequently moved to Google Code.)

You can use this with SQLite 3.3.13 onwards. There were no API changes in SQLite 3.3.10 to 3.3.13 although some internal bugs were fixed and the 3.3.13 release is recommended over the earlier versions.

Thanks to Ed Pasma for highlighting these issues:

* :func:`Connection.interrupt` can be safely called from any thread.

* Empty statements or those consisting entirely of whitespace do not cause misuse errors (internally SQLite started returning NULL pointers for those statements, and `sqlite3_step `_ didn't like being passed the NULL pointer).

* Changed special handling of :const:`SQLITE_BUSY` error to be the same as other errors. The special handling previously let you restart on receiving busy, but also hung onto statements which could result in other statements getting busy errors.

3.3.10-r1
=========

You can use this with SQLite 3.3.10 onwards.

Added a statement cache that works in conjunction with the `sqlite3_prepare_v2 `_ API. A few issues were exposed in SQLite and hence you must use SQLite 3.3.10 or later.

3.3.9-r1
========

You can use this with SQLite 3.3.9 onwards.

SQLite added the `sqlite3_prepare_v2 `_ API. The net effect of this API update is that you will not get SQLITE_SCHEMA any more. SQLite will handle it internally.
3.3.8-r1
========

You can use this with SQLite 3.3.8 onwards. There was an incompatible API change for virtual tables in SQLite 3.3.8.

Virtual tables updated for the new api.

You must call :func:`~Connection.close` on connections. You can also call :func:`~Cursor.close` on cursors, but it usually isn't necessary.

All strings are returned as unicode.

:func:`PyErr_WriteUnraisable` was used for errors in destructors. Unfortunately it is almost completely useless, merely printing :func:`str` of the object and exception. This doesn't help in finding where in your code the issue arose so you could fix it. An internal APSW implementation generates a traceback and calls :func:`sys.excepthook`, the default implementation of which prints the exception and the traceback to sys.stderr.

.. Note:: The line number reported in the traceback is often off by 1. This is because the destructors run "between" lines of code and so the following line is reported as the current location.

Authorizer codes :const:`SQLITE_CREATE_VTABLE`, :const:`SQLITE_DROP_VTABLE` and :const:`SQLITE_FUNCTION` added.

SQLite `extended result codes `_ are available - see :ref:`exceptions` for more detail.

:data:`Connection.hooks` added so you can easily register functions, virtual tables or similar items with each Connection as it is created.

Added :ref:`mapping dicts ` which makes it easy to map the various constants between strings and ints.

3.3.7-r1
========

Never released as 3.3.8 came along.

You can use this release against SQLite 3.3.7. There were no changes in the SQLite 3.3.6 API from 3.3.5. In SQLite 3.3.7 an API was added that allowed removing a chunk of duplicate code. Also added were `Virtual Tables `_ and loading of external modules (shared libraries).

APSW had the following changes:

* Even more test cases added (you can't have too many tests :-)

* When exceptions occur, dummy frames are added to the traceback in the C code.
  This makes it a lot easier to tell why code was called if you encounter an exception. See :ref:`augmented stack traces ` for details.

* String values (traditional and Unicode) work correctly if they have embedded NULL characters (ie not truncated at the NULL).

* You can load SQLite shared library extensions.

3.3.5-r1
========

You can use this release against any release of SQLite 3 from 3.3.5 onwards.

A bug was also fixed when reporting an error during the cleanup of an aggregate function if there had also been an error in the step function. (:func:`PyErr_WriteUnraisable(NULL)` crashed on some versions of Python but not others.)

SQLite added several functions for returning metadata about result column sets. You have to compile SQLite with :const:`SQLITE_ENABLE_COLUMN_METADATA` to get them. This is not the default for SQLite. I don't believe these are generally useful except in some corner cases and so they aren't wrapped. However please shout if you do need them. Note that :func:`Cursor.getdescription` will already give you generally useful information. (Also see the `pragmas `_)

The test code has been converted into using the unittest module. Run :command:`python tests.py -v` to get the tests run. There should be no errors.

Updated code to work correctly with the new :c:type:`Py_ssize_t` introduced in Python 2.5. See :ref:`64 bit hosts, Python 2.5+ ` for more details on how Python and SQLite handle 64 bit sized items.

The following functions were added to SQLite and are wrapped. They are all functions defined on the :class:`Connection` object or :mod:`apsw` module:

* `sqlite3_update_hook `_
* `sqlite3_rollback_hook `_
* `sqlite3_enable_shared_cache `_
* `sqlite3_get_autocommit `_
* `sqlite3_profile `_ This callback is run at the end of each statement execution telling you how long it took.

3.2.7-r1
========

You can use this release against any release of SQLite 3.

SQLite 3.2.7 has several bug fixes.
The undocumented experimental function :func:`sqlite3_profile` was added, but it is not present in apsw yet.

The author of pysqlite has improved it considerably since APSW was originally written. The differences section has been updated to reflect those improvements in pysqlite.

:const:`SQLITE_INTERNAL` and :const:`SQLITE_NOTFOUND` error codes are not used according to the 3.2.7 header file. They are still present in APSW for backwards compatibility.

Changed the build instructions so configure is run on non-Windows platforms.

Fixed a bug caused by an overly helpful error message trying to tell you how many bindings you supplied that crashed if you didn't supply any.

Changed when an error in the step function for an aggregate is reported due to limitations in SQLite.

3.2.2-r1
========

You can use this release against any release of SQLite 3.

SQLite 3.2.2 API removed :func:`sqlite3_global_recover`. That function was not wrapped in APSW. Note that SQLite 3.2.2 contains a bug fix that applies when you use 64 bit integer primary keys (32 bit ints are fine).

3.2.1-r1
========

You can use this release against any release of SQLite 3.

There are no changes in APSW except to correct an error in the example code (collations are registered against the connection not the cursor).

SQLite 3.2.1 had one addition in the stable C API, which was a new function named :func:`sqlite3_global_recover`. That function is not applicable for wrapping in APSW.

3.1.3-r1
========

You can use this release against any release of SQLite 3.

The text string returned by apsw.Error used to say "apsw.APSWException" and has been changed to "apsw.Error". This is purely cosmetic and helps make clear what the class is. (The old string was what the original class name was in an earlier version of the code.)

Added :const:`SQLITE_ALTER_TABLE` and :const:`SQLITE_REINDEX` constants for the authorizer function. (These constants were introduced in SQLite 3.1.3.)
Changed various C++-isms into standard C (eg // comments and the placing of some :c:macro:`CHECK_THREAD` macro calls).

Added module level function :meth:`~apsw.apswversion` which returns the version of APSW.

SQLite 3.1.3 had no changes in the stable C API other than what is mentioned above. There were some new experimental functions added which are not currently documented on the SQLite website, and which are not wrapped by APSW. Please contact me if you believe they will remain in SQLite and you would like them wrapped:

* :c:func:`sqlite3_sleep` An alternative function which sleeps for a specified number of milliseconds can be provided. By default SQLite just uses the standard operating system call.

* :c:func:`sqlite3_expired` This function is internal to statement execution. It would apply to the implementation of :meth:`Cursor.executemany` and could in theory provide a marginal improvement in performance.

* A global variable :c:data:`sqlite3_temp_directory` can be used before any databases are opened to set where temporary files are created. By default SQLite just uses the standard operating system mechanisms.

3.0.8-r3
========

There are no functional changes. The only changes were to correct some variable names in the example code (they were cut and pasted from the test code which used different names) and to make the source zip file extract its contents into a sub-directory, which is the more typical way of packaging that sort of thing.

3.0.8-r2
========

All remaining functionality in the C API for SQLite 3.0.8 is now available.

Finished this documentation.

3.0.8-r1
========

Initial release

Building — APSW 3.8.11.1-r1 documentation

Building

setup.py

Short story: you can just run setup.py, but you should ideally follow the recommended way, which will also fetch needed components for you.

Command Result
python setup.py install test
Compiles APSW with default Python compiler, installs it into Python site library directory and then runs the test suite.
python setup.py install --user
(Python 2.6+, 3). Compiles APSW with default Python compiler and installs it into a subdirectory of your home directory. See PEP 370 for more details.
python setup.py build --compile=mingw32 install
On Windows this will use the free MinGW compiler instead of the Microsoft compilers.
python setup.py build_ext --force --inplace test
Compiles the extension but doesn’t install it. The resulting file will be in the current directory named apsw.so (Unix/Mac) or apsw.pyd (Windows). The test suite is then run. (Note on recent versions of CPython the extension filenames may be more complicated due to PEP 3149.)
python setup.py build --debug install
Compiles APSW with debug information. This also turns on assertions in APSW that double check the code assumptions. If you are using the SQLite amalgamation then assertions are turned on in that too. Note that this will considerably slow down APSW and SQLite.

Additional setup.py flags

There are a number of APSW specific flags to commands you can specify.

fetch

setup.py can automatically fetch SQLite and other optional components. You can set the environment variable http_proxy to control proxy usage for the download. Note that the files downloaded are modified from their originals to ensure various names do not clash, to adjust them to the download platform and to graft them cleanly into the APSW module. You should not commit them to source code control systems (download separately if you need clean files).

If any files are downloaded then the build step will automatically use them. This still applies when you do later builds without re-fetching.

python setup.py fetch options
fetch flag Result
--version=VERSION

By default the SQLite download page is consulted to find the current SQLite version which you can override using this flag.

Note

You can also specify fossil as the version and the current development version from SQLite’s source tracking system will be used. (The system is named Fossil.) Note that checksums can’t be checked for fossil. You will also need TCL and make installed for the amalgamation to build as well as several other common Unix tools. (ie this is very unlikely to work on Windows.)

--missing-checksum-ok
Allows setup to continue if the checksum is missing.
--all
Gets all components listed below.
--sqlite
Automatically downloads the SQLite amalgamation. The amalgamation is the preferred way to use SQLite as you have total control over what components are included or excluded (see below) and have no dependencies on any existing libraries on your developer or deployment machines. The amalgamation includes the fts3, rtree and icu extensions. On non-Windows platforms, any existing sqlite3/ directory will be erased and the downloaded code placed in a newly created sqlite3/ directory.
--asyncvfs
Downloads the Asynchronous VFS

Note

The SQLite downloads are not digitally signed which means you have no way of verifying they were produced by the SQLite team or were not modified between the SQLite servers and your computer.

Consequently APSW ships with a checksums file that includes checksums for the various SQLite downloads. If the download does not match the checksum then it is rejected and an error occurs.

The SQLite download page is not checksummed, so in theory a bad guy could modify it to point at a malicious download version instead. (setup only uses the page to determine the current version number - the SQLite download site URL is hard coded.)

If the URL is not listed in the checksums file then setup aborts. You can use --missing-checksum-ok to continue. You are recommended instead to update the checksums file with the correct information.
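The checksum mechanism described above amounts to comparing a digest of the downloaded bytes against a recorded value and rejecting the file on mismatch. A minimal sketch of the idea (illustrative only; this is not APSW's actual checksum code, and the real checksums file format may differ):

```python
import hashlib

def checksum_ok(data, expected_sha1):
    # Accept the download only when its SHA1 digest matches the recorded value.
    return hashlib.sha1(data).hexdigest() == expected_sha1

payload = b"pretend this is the SQLite amalgamation"
recorded = hashlib.sha1(payload).hexdigest()

print(checksum_ok(payload, recorded))         # True - download accepted
print(checksum_ok(payload + b"x", recorded))  # False - download rejected
```

A tampered or truncated download changes the digest, so the comparison fails and setup raises an error instead of building against the bad file.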

Note

(This note only applies to non-Windows platforms.) By default the amalgamation will work on your platform. It detects the operating system (and compiler if relevant) and uses the appropriate APIs. However it then only uses the oldest known working APIs. For example it will use the sleep system call. More recent APIs may exist but the amalgamation needs to be told they exist. As an example sleep can only sleep in increments of one second while the usleep system call can sleep in increments of one microsecond. The default SQLite busy handler does small sleeps (eg 1/50th of a second) backing off as needed. If sleep is used then those will all be a minimum of a second. A second example is that the traditional APIs for getting time information are not re-entrant and cannot be used concurrently from multiple threads. Consequently SQLite has mutexes to ensure that concurrent calls do not happen. However you can tell it you have more recent re-entrant versions of the calls and it won’t need to bother with the mutexes.

After fetching the amalgamation, setup automatically determines what new APIs you have by running the configure script that comes with SQLite and noting the output. The information is placed in sqlite3/sqlite3config.h. The build stage will automatically take note of this as needed.

If you get the fossil version then the configure script does not work. Instead the fetch will save and re-use any pre-existing sqlite3/sqlite3config.h.

build/build_ext

You can enable or omit certain functionality by specifying flags to the build and/or build_ext commands of setup.py.

python setup.py build options

Note that the options do not accumulate. If you want to specify multiple enables or omits then you need to give the flag once, supplying a comma separated list. For example:

python setup.py build --enable=fts3,fts3_parenthesis,rtree,icu
build/build_ext flag Result
--enable-all-extensions
Enables the STAT4, FTS3/4, RTree, and ICU extensions if icu-config is on your path
--enable=fts3
--enable=fts4
Enables the full text search extension. This flag only helps when using the amalgamation. If not using the amalgamation then you need to separately ensure fts3/4 is enabled in the SQLite install. You are likely to want the parenthesis option on unless you have legacy code (--enable-all-extensions turns it on).
--enable=rtree
Enables the spatial table extension. This flag only helps when using the amalgamation. If not using the amalgamation then you need to separately ensure rtree is enabled in the SQLite install.
--enable=icu
Enables the International Components for Unicode extension. Note that you must have the ICU libraries on your machine which setup will automatically try to find using icu-config. This flag only helps when using the amalgamation. If not using the amalgamation then you need to separately ensure ICU is enabled in the SQLite install.
--omit=ITEM
Causes various functionality to be omitted. For example --omit=load_extension will omit code to do with loading extensions. If using the amalgamation then this will omit the functionality from APSW and SQLite, otherwise the functionality will only be omitted from APSW (ie the code will still be in SQLite, APSW just won’t call it). In almost all cases you will need to regenerate the SQLite source because the omits also alter the generated SQL parser. See the relevant SQLite documentation.

Note

Extension loading is enabled by default when using the amalgamation and disabled when using existing libraries as this most closely matches current practice. Use --omit=load_extension or --enable=load_extension to explicitly disable/enable the extension loading code.

Finding SQLite 3

SQLite 3 is needed during the build process. If you specify fetch --sqlite to the setup.py command line then it will automatically fetch the current version of the SQLite amalgamation. (The current version is determined by parsing the SQLite download page). You can manually specify the version, for example fetch --sqlite --version=3.7.4.

These methods are tried in order:

Amalgamation

The file sqlite3.c and then sqlite3/sqlite3.c is looked for. The SQLite code is then statically compiled into the APSW extension and is invisible to the rest of the process. There are no runtime library dependencies on SQLite as a result. When you use fetch this is where it places the downloaded amalgamation.

Local build

The header sqlite3/sqlite3.h and library sqlite3/libsqlite3.{a,so,dll} are looked for.

User directories

If you are using Python 2.6+ or Python 3 and specified --user then your user directory is searched first. See PEP 370 for more details.

System directories

The default compiler include path (eg /usr/include) and library path (eg /usr/lib) are used.

Note

If you compiled SQLite with any OMIT flags (eg SQLITE_OMIT_LOAD_EXTENSION) then you must include them in the setup.py command or file. For this example you could use setup.py build --omit=load_extension to add the same flags.

Source distribution (advanced)

If you want to make a source distribution or a binary distribution that creates an intermediate source distribution such as bdist_rpm then you can have the SQLite amalgamation automatically included as part of it. If you specify the fetch command as part of the same command line then everything fetched is included in the source distribution. For example this will fetch all components, include them in the source distribution and build a rpm using those components:

$ python setup.py fetch --all bdist_rpm

Testing

SQLite itself is extensively tested. It has considerably more code dedicated to testing than makes up the actual database functionality.

APSW includes a tests.py file which uses the standard Python testing modules to verify correct operation. New code is developed alongside the tests. Reported issues also have test cases to ensure the issue doesn’t happen or doesn’t happen again:

$ python setup.py test
               Python /usr/bin/python (2, 6, 6, 'final', 0)
Testing with APSW file /space/apsw/apsw.so
          APSW version 3.7.4-r1
    SQLite lib version 3.7.4
SQLite headers version 3007004
    Using amalgamation True
............................................................................
----------------------------------------------------------------------
Ran 76 tests in 404.557s

OK

The tests also ensure that as much APSW code as possible is executed including alternate paths through the code. 95.5% of the APSW code is executed by the tests. If you check out the APSW source then there is a script tools/coverage.sh that enables extra code that deliberately induces extra conditions such as memory allocation failures, SQLite returning undocumented error codes etc. That brings coverage up to 99.6% of the code.

The memory checker Valgrind is used while running the test suite. The test suite is run multiple times to make any memory leaks or similar issues stand out. A checking version of Python is also used. See tools/valgrind.sh in the source.

To ensure compatibility with the various Python versions, a script downloads and compiles all supported Python versions in both 2 byte and 4 byte Unicode character configurations against the APSW and SQLite supported versions running the tests. See tools/megatest.py in the source.

In short both SQLite and APSW have a lot of testing!

Change History — APSW 3.8.11.1-r1 documentation

Change History

3.8.11.1-r1

Added SQLITE_FCNTL_RBU and SQLITE_FCNTL_ZIPVFS constants.

setup’s fetch command can now get arbitrary fossil versions. For example specify fossil-e596a6b6.

Update tests due to a change in Python 3.5 (exception returned with invalid strings for system calls changed from TypeError to ValueError).

Adjusted some internal detection related to the fork checker

3.8.10.1-r1

Added deterministic parameter to Connection.createscalarfunction() (APSW issue 187)
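For comparison, the same underlying SQLite flag is also exposed by Python's stdlib sqlite3 module (Python 3.8+), which sketches what marking a function deterministic means (this is not APSW's API, just an analogous illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A deterministic function always returns the same result for the same
# arguments, letting SQLite cache results and use the function more freely.
con.create_function("double", 1, lambda x: 2 * x, deterministic=True)
print(con.execute("SELECT double(21)").fetchone()[0])  # 42
con.close()
```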

Switched to new SQLite API returning 64 bit values for status() (APSW issue 191)

3.8.9-r1

Fixed column description caching which could be preserved between multiple statements in the same execution (APSW issue 186)

Updated documentation building tool to use new database of information from the SQLite site. This is simpler and more reliable. (Previously used site scraping.)

Added SQLITE_AUTH_USER, SQLITE_FCNTL_LAST_ERRNO, SQLITE_FCNTL_WAL_BLOCK, SQLITE_FCNTL_GET_LOCKPROXYFILE, and SQLITE_FCNTL_SET_LOCKPROXYFILE constants.

Removed SQLITE_GET_LOCKPROXYFILE, SQLITE_SET_LOCKPROXYFILE, SQLITE_LAST_ERRNO file control constants. These are deprecated - use the versions with FCNTL in their name (eg SQLITE_FCNTL_GET_LOCKPROXYFILE).

Added mappings for conflict resolution modes, virtual table configuration options and xShmLock VFS flags.

3.8.8.2-r1

No APSW changes.

3.8.8.1-r1

The column description is now cached on first request during a query so getting it is quick if called for every row.

Added SQLITE_CONFIG_PCACHE_HDRSZ and SQLITE_CONFIG_PMASZ constants, and support for them in config().

Added SQLITE_CHECKPOINT_TRUNCATE constant.

Update year in various places to 2015.

3.8.7.3-r1

No APSW changes.

3.8.7.2-r1

Fixed parsing of icu-config flags

3.8.7.1-r1

Added SQLITE_LIMIT_WORKER_THREADS constant

3.8.6-r1

Updated test suite for Python 3.4 unittest garbage collection changes (APSW issue 164 APSW issue 169)

Using the recommended build option --enable-all-extensions turns on STAT4. Windows binaries include this too.

3.8.5-r1

Added SQLITE_IOCAP_IMMUTABLE and SQLITE_FCNTL_WIN32_SET_HANDLE constants.

3.8.4.3-r1

Added Cursor.fetchone()

3.8.4.2-r1

No APSW code changes. Rebuild due to updated SQLite version.

3.8.4.1-r1

Windows 64 bit binary builds for Python 3.3+ are back - thanks to Mike C. Fletcher for pointing the way

Correct detection of current SQLite version from download page for setup.py fetch command

Tested against Python 3.4 and binaries for Windows.

3.8.3.1-r1

Updated Shell completions for keywords, functions and pragmas.

3.8.3-r1

APSW is now hosted at Github - https://github.com/rogerbinns/apsw

Added SQLITE_RECURSIVE, SQLITE_READONLY_DBMOVED, SQLITE_FCNTL_COMMIT_PHASETWO, SQLITE_FCNTL_HAS_MOVED and SQLITE_FCNTL_SYNC constants.

3.8.2-r1

Added SQLITE_CONFIG_WIN32_HEAPSIZE, SQLITE_CONSTRAINT_ROWID and SQLITE_FCNTL_TRACE constants.

3.8.1-r1

Added SQLITE_CANTOPEN_CONVPATH and SQLITE_IOERR_CONVPATH extended error codes.

Updated pysqlite urls to point to github.

Various minor build/download documentation updates.

3.8.0.2-r1

No APSW code changes. Rebuild due to updated SQLite version.

Updated documentation tips to show how to get detailed diagnostics.

3.8.0.1-r1

No APSW changes. Rebuild due to updated SQLite version.

Windows binaries for Python 3.3 64 bit are no longer available as a Visual Studio update obliterated the ability to compile them, and I have no patience left to fight Microsoft’s tools.

3.8.0-r2

No APSW changes - updated checksums because SQLite changed the released archive to address an autoconf issue on some platforms

3.8.0-r1

Windows binaries for Python 3.3 64 bit are now available after managing to get several pieces of Microsoft software to cooperate.

Fixed shell dump issue when system routines (eg timestamp, username, hostname) couldn’t automatically be promoted to unicode. They are used in comments in the output. (APSW issue 142)

Added SQLITE_DBSTATUS_DEFERRED_FKS, SQLITE_IOERR_GETTEMPPATH, SQLITE_WARNING_AUTOINDEX and SQLITE_BUSY_SNAPSHOT constants.

3.7.17-r1

Removed tests that checked directly calling VFS read/write with negative offsets or amounts returns errors. This version of SQLite no longer returns errors in those circumstances and typically crashes instead.

Various new constants.

3.7.16.2-r1

No APSW changes - just a binary rebuild. Windows users are recommended to upgrade their SQLite version.

3.7.16.1-r1

Updated tables of functions and pragmas in the Shell to match current SQLite version.

3.7.16-r1

Adjust to different SQLite download URLs

Added SQLITE_CONSTRAINT_* and SQLITE_READONLY_ROLLBACK extended error codes

Removed CouchDB virtual table

3.7.15.2-r1

No APSW changes - binary rebuild to pickup new SQLite version

3.7.15.1-r1

Use https (SSL) for SQLite web site references (downloads and documentation links). On some platforms/versions/SSL libraries, Python’s SSL module doesn’t work with the SQLite website so a fallback to http is used - the downloads still have their checksum verified.

3.7.15-r1

Work around changed semantics for error handling when the VFS xDelete method is asked to delete a file that does not exist.

Completely removed all AsyncVFS related code. This extension is no longer maintained nor supported by the SQLite team. WAL is a good way of getting similar functionality.

Added config() support for SQLITE_CONFIG_COVERING_INDEX_SCAN.

Added several new constants: SQLITE_CONFIG_COVERING_INDEX_SCAN, SQLITE_CONFIG_SQLLOG, SQLITE_FCNTL_BUSYHANDLER, SQLITE_FCNTL_TEMPFILENAME, SQLITE_CANTOPEN_FULLPATH, SQLITE_IOERR_DELETE_NOENT

3.7.14.1-r1

Updated setup and test suite so that all files are explicitly closed instead of relying on garbage collection.

Added Windows binaries for Python 3.3. (Only 32 bit as Python doesn’t provide a free way of making 64 bit Windows binaries.)

Updated setup.py to work with changed SQLite download page formatting when detecting latest version.

Due to a Python 3.3.0 regression bug, using the csv output mode in the shell can result in bad data or Python crashing. The bug has been fixed for Python 3.3.1 which is due in November 2012.

3.7.14-r2

Fixed an issue with the GIL in the destructor for functions. The bug would be encountered if you create a function with the same name as an existing function and are using an upcoming version of Python (eg 2.7.4). Thanks to Arfrever Frehtes Taifersar Arahesis for finding it (APSW issue 134).

Added shell .print command to match upcoming SQLite shell changes.

3.7.14-r1

Added support for Connection.status() (calls sqlite3_db_status).

The legacy Windows Compiled Help Format documentation is no longer produced - the help compiler setup program can’t cope with modern machines.

3.7.13-r1

Do not free a structure on failure to register a virtual table module as SQLite does that anyway.

Added SQLITE_OPEN_MEMORY constant.

3.7.12.1-r1

No changes to APSW. Binary rebuilds due to SQLite bugfixes.

3.7.12-r1

Re-enabled the asyncvfs.

Added Cursor.description to make DB API interoperability a little easier (APSW issue 131).

Added SQLITE_DBSTATUS_CACHE_WRITE and SQLITE_CANTOPEN_ISDIR constants.

3.7.11-r1

Added SQLITE_ABORT_ROLLBACK and SQLITE_FCNTL_PRAGMA constants.

Added Connection.readonly().

Changed Connection.filename which used to return the string used to open the database and now returns the absolute pathname.

Added Connection.db_filename().

3.7.10-r1

The default sector size returned in VFS routines is 4,096 to match SQLite’s new default.

Several links to SQLite tickets and documentation were updated (APSW issue 122).

The async vfs is disabled due to a bug in its code that leads to random memory reads when dealing with filenames.

Added SQLITE_CONFIG_GETPCACHE2, SQLITE_CONFIG_GETPCACHE2, SQLITE_FCNTL_POWERSAFE_OVERWRITE, SQLITE_FCNTL_VFSNAME and SQLITE_IOCAP_POWERSAFE_OVERWRITE constants.

Fix shell dumping when SQLite doesn’t strip trailing comments from view declarations (discussed here)

Added a URIFilename class to encapsulate how SQLite provides URI parameters to VFS routines (APSW issue 124).

Compatibility break: Depending on flags your VFS xOpen method may get a URIFilename or a string for the filename. You can still pass either to the VFSFile.

Compatibility break: The Virtual File System (VFS) code used to always run strings you provided through VFS.xFullPathname(). This isn’t possible with URI pathnames so that code has been removed. If you construct filenames for VFS.xOpen() directly (ie bypassing the SQLite database open call) then you must call VFS.xFullPathname() yourself first to ensure relative pathnames are turned into absolute pathnames. The SQLite API guarantees that filenames passed to VFS.xOpen() are exactly what was returned from VFS.xFullPathname().

3.7.9-r1

Added SQLITE_DBSTATUS_CACHE_HIT, SQLITE_DBSTATUS_CACHE_MISS and SQLITE_FCNTL_OVERWRITE constants.

3.7.8-r1

Updated documentation and tests due to an undocumented change in VFS xDelete semantics.

Added SQLITE3_FCNTL_PERSIST_WAL and SQLITE3_FCNTL_WIN32_AV_RETRY file controls.

Wrapped sqlite3_sourceid (APSW issue 120)

3.7.7.1-r1

Added SQLITE_CONFIG_URI and support for it in config(), and the open flag SQLITE_OPEN_URI. This makes it easy to use URI filenames.

The Shell now uses URI filenames by default.

New extended error constants: SQLITE_CORRUPT_VTAB, SQLITE_IOERR_SEEK, SQLITE_IOERR_SHMMAP, SQLITE_READONLY_CANTLOCK and SQLITE_READONLY_RECOVERY.

64 bit platforms (LP64 - most non-Windows) and Python 2: The Python int type is returned for 64 bit integers instead of Python long type.

3.7.6.3-r1

When invoking the shell by calling apsw.main() it will not become interactive if you supply SQL commands as command line arguments. This is to have the same behaviour as the SQLite shell (APSW issue 115).

The shell has a .find command making it easy to search for values across all columns of some or all tables.

The shell has a .autoimport command making it easy to import a data file automatically deducing separators, column names and data types.
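Separator deduction of that kind can be sketched with the stdlib csv.Sniffer (illustrative only; this is not how the shell's .autoimport is actually implemented):

```python
import csv

sample = "name;age;city\nalice;30;oslo\nbob;25;bergen\n"
dialect = csv.Sniffer().sniff(sample)   # guess the delimiter from the data
print(dialect.delimiter)                # ;
rows = list(csv.reader(sample.splitlines(), dialect))
print(rows[1])                          # ['alice', '30', 'oslo']
```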

Detect attempted use of a cursor as input data for itself.

3.7.6.2-r1

Fixed APSW issue 117 where the shell could report an I/O error on changing output target for some operating systems. Thanks to Edzard Pasma for finding and diagnosing this.

Added support for VFS version 3 which allows redirecting system calls used by some VFS implementations (eg for testing or sandboxing).

NotFoundError exception added.

Added Connection.config().

Updated Connection.wal_checkpoint() to use sqlite3_wal_checkpoint_v2 which provides more fine grained control over checkpointing and returns useful information.

3.7.5-r1

Backwards incompatible change in SQLite 3.7.5 for handling of xFileControl(). If you implement this method in a VFS then you must return True or False to indicate if the operation was understood. Connection.filecontrol() now returns that value. (Previously you could not tell the difference between an op being understood and an error resulting, or the op not being understood at all.)

Windows Python 3.2 binaries now available.

3.7.4-r1

Binary downloads for Windows 64 bit Python versions 2.6 and above including Python 3 are now available.

apsw.softheaplimit() now uses sqlite3_soft_heap_limit64 so you can provide values larger than 2GB. It is now also able to return the previous value instead of None.

Improve getting shell timer information for 64 bit Windows.

blob.reopen() is implemented.

FTS4 is enabled and in the binary builds. Note that it is an augmentation of FTS3 rather than totally separate code, and is described in the SQLite documentation.

3.7.3-r1

You can read blobs into pre-existing buffers using blob.readinto(). (This is more efficient than allocating new buffers as blob.read() does and then copying.) (APSW issue 109).
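
The general readinto pattern looks like this, sketched here with the standard library's io.BytesIO rather than an APSW blob, since the buffer-filling idea is the same:

```python
import io

# Generic readinto pattern: fill a pre-allocated buffer instead of
# allocating a fresh bytes object on every read. apsw's blob.readinto()
# applies the same idea to blob data.
src = io.BytesIO(b"hello world")
buf = bytearray(5)          # reused buffer
n = src.readinto(buf)       # fills buf in place, returns bytes read
assert n == 5
assert bytes(buf) == b"hello"
```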

Fixed bug with unicode output in CSV mode in the shell.

Using sqlite3_create_function_v2 means that some housekeeping APSW previously did can be pushed back onto SQLite, with the consequent deletion of some code.

3.7.2-r1

No changes to APSW. Upgrading to this version of SQLite is recommended.

3.7.1-r1

Updated various constants including SQLITE_FCNTL_CHUNK_SIZE used with Connection.filecontrol().

Fixed Unicode output with some file objects from the shell (APSW issue 108).

With the shell, you can specify handling of characters not present in the output encoding (eg replace to use ‘?’ or similar, ignore, xmlcharrefreplace etc). For example:

.encoding cp437:replace

3.7.0.1-r1

Fixed an issue that arose when using a tracer and a context manager failed to commit.

3.7.0-r1

Added several new constants.

Write Ahead Logging is supported. You can make all databases automatically use WAL mode if available by using connection hooks.
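
As a sketch of the idea (using the standard library's sqlite3 module here so it runs anywhere; with APSW you would run the same pragma from a function appended to apsw.connection_hooks so every new Connection gets it):

```python
import os
import sqlite3
import tempfile

def make_wal(con):
    # The hook body: ask SQLite to switch the database to WAL mode.
    # The pragma returns the journal mode actually in effect.
    return con.execute("pragma journal_mode=wal").fetchone()[0]

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
mode = make_wal(con)
assert mode == "wal"   # file databases report "wal"; in-memory ones report "memory"
con.close()
```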

Added format_sql_value() for generating a SQL syntax string from a value. This is implemented in C and is significantly faster than doing the same formatting in Python.

Using the above function and other tweaks the Shell dumper is now three to four times faster. Thanks to Nikolaus Rath for pointing out the problem and providing test data.
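
As a rough illustration of what such a formatter does (a hedged pure-Python sketch, not apsw's actual C implementation, and covering only a few types):

```python
def sql_literal(value):
    # Minimal sketch: NULL for None, single quotes doubled inside strings,
    # plain decimal form for integers. apsw.format_sql_value handles more
    # types (floats, blobs) and does this in C for speed.
    if value is None:
        return "NULL"
    if isinstance(value, str):
        return "'" + value.replace("'", "''") + "'"
    if isinstance(value, int):
        return str(value)
    raise TypeError("unhandled type: %r" % (value,))

assert sql_literal(None) == "NULL"
assert sql_literal("it's") == "'it''s'"
assert sql_literal(42) == "42"
```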

The shell now does colour highlighting, making it easy to visually distinguish prompts, errors, headers and value types when outputting to a terminal. See the --no-colour argument and the .colour command. Those of you in the two countries that have not adopted the metric system may also omit the ‘u’. Windows users won’t get colour output unless they install colorama.

When using the context manager (with statement) of a Connection and the exit commit had an error, then the transaction is rolled back. This could occur if SQLite had buffered the transaction entirely in memory and a non-eager transaction lock had been obtained. Thanks to Geoff Ness for finding the problem. (APSW issue 98).

Fixed bug when an error is returned creating an instance of a virtual table (eg an invalid column name). Before the fix you would get the previous error message or a crash. Thanks to Jose Gomes for finding the problem. APSW issue 103

There is now a PPA for Ubuntu users that is kept up to date with APSW and SQLite at https://launchpad.net/~ubuntu-rogerbinns/+archive/apsw which has the latest SQLite embedded statically inside (ie system SQLite is ignored) and has all the extensions enabled: FTS3, RTree, ICU, asyncvfs.

If you open VFS files directly then the filename is always run through xFullPathname first. SQLite guarantees this behaviour but the existing VFS code was not doing that for direct opens. Opens from SQLite were doing it.

Fixed error where apsw.connection_hooks were being run before the statement cache was initialised which would result in a crash if any hooks executed SQL code.

3.6.23.1-r1

Shell CSV output under Python 3.1 is corrected (work around Python 3.1 StringIO bug/incompatibility with other Python versions).

Simplified access to the shell’s database from the API.

Added a shell example.

3.6.23-r1

If setup is downloading files and an error occurs then it retries up to 5 times.

Added SQLITE_CONFIG_LOG and SQLITE_OPEN_AUTOPROXY constants.

Added compile_options which tells you what compilation defines SQLite was compiled with.

Added log() to call the SQLite logging interface, and updated config() so you can set log destination function.

3.6.22-r1

Made it possible to run distutils ‘sdist’ from an already produced source that was made from ‘sdist’. This was necessary for some Python virtual package environments. Note that the recursive result does not include the HTML help as distutils has no way of including content in a reparented location. APSW issue 89

Various settings are output as pragma statements when making a dump such as page size, encoding, auto_vacuum etc. The pragmas are commented out. APSW issue 90

3.6.21-r1

Source and binary files are now digitally signed which means you can verify they have not been tampered with. See Verifying your download for instructions.

The pragmas generated for a shell dump are emitted outside the transaction as they have no effect inside the transaction.

Removed some unintentional logging code left in CouchDB virtual table code.

3.6.20-r1

Support for Python 3.0 has been dropped as it has been end of lifed. Use Python 3.1 onwards.

Changes to how some statements are prepared to allow the new RANGE and LIKE optimisations with bound variables introduced in SQLite 3.6.20 to be used. See APSW issue 85 for the long and gory details.

You can now access CouchDB using a virtual table. This lets you easily bidirectionally transfer data between SQLite and CouchDB as well as work on data in both sources at the same time. Other example uses are in the documentation.

Shell changes:

  • .dump command now outputs views in the order they were created rather than alphabetical as views could reference each other. APSW issue 82
  • .dump command now outputs the user_version as a comment. It is used by some programs (such as Firefox) to keep track of the schema version.
  • Can now output in JSON.
  • Fixed APSW issue 83 - exception if history file didn’t exist
  • You can right justify output in column mode by specifying negative widths. APSW issue 84
  • You no longer get a traceback doing completions if there is a virtual table in the database but the module is not loaded. APSW issue 86
  • You can now get detailed tracebacks including local variables using the ”.exception ON” command. This is useful when developing virtual tables and similar functionality.
  • You can now terminate a SQL statement with “go” or “/” on a line by itself.

3.6.19-r1

Backwards incompatible change Fixed APSW issue 72 where APSW wasn’t zero basing virtual table BestIndex() constraints returned as documented. If you have working BestIndex code then you need to update it for this release. Thanks to Lefteris for finding this issue.

Backwards incompatible change The complete() method has moved from Connection to apsw where it should have been all along. You should now call apsw.complete() instead. (It even had an example showing it to be part of the module and not a specific connection!)

There is now an interactive shell very similar to that provided by SQLite. You can embed it in your own program, inherit from it to provide more commands and output modes, or just run it like this:

$ python -c "import apsw ; apsw.main()"

Added the SQLITE_LIMIT_TRIGGER_DEPTH, SQLITE_OPEN_PRIVATECACHE and SQLITE_OPEN_SHAREDCACHE constants.

The setup.py file now has the various options available made applicable to appropriate commands only. Read the updated documentation.

You can now specify build --enable=stat2 to setup.py to enable advanced statistics gathering for query planning.

setup.py can automatically fetch the Asynchronous VFS extension for you. If the source is present when APSW is built then it will be automatically included and the API provided.

A fork_checker() is available which turns on detection when you have used SQLite objects across a fork (a very bad thing). This is possible on Unix like operating systems, especially if you use the multiprocessing module.

Extension loading is now compiled in by default when using the amalgamation and compiled out when using existing libraries. This is more likely to match your machine. You can use --omit=load_extension or --enable=load_extension to the build/build_ext commands to explicitly disable/enable extension loading. APSW issue 67

setup.py will now abort on a download that has no checksum. See more information on checksums.

setup.py can also fetch the version of SQLite currently under development before a release. Use --version=fossil.

Updated which code uses experimental SQLite APIs based on changes in SQLite. The test suite will also work correctly with experimental on or off. (It is on by default.)

3.6.18-r1

The APSW license has been updated to allow you (at your option) to use any OSI approved license.

The speedtest has been updated to (optionally) use unicode characters and to (optionally) increase the sizes of data items.

Fixed error handling code to not record SQLite error strings in some situations where it was not necessary. This results in the code executing a little faster.

3.6.17-r1

APSW has migrated from Subversion to Mercurial for source code control. Hosting remains at Google Code.

Updated a test due to VFS xUnlock errors now being ignored sometimes by SQLite (SQLite ticket #3946).

The downloads page in the help didn’t mention the Windows Python 3.1 installer.

Running the test suite is now integrated into setup.py so you can do the building and testing all in one go. Sample command line:

$ python setup.py install test

The test suite will now check the functionality of the FTS3, RTree and ICU extensions if they were included. (The Windows binary distribution includes FTS3 and RTree by default.)

Fixed APSW issue 55 where FTS3 was unintentionally omitted from the Windows binary distribution.

Various documentation updates.

3.6.16-r1

Windows binary distribution includes Python 3.1.

Trivial tweaks to keep MSVC happy.

3.6.15-r1

Fixed APSW issue 50 where blob.read() was returning None on end of file instead of the documented (and correct) empty string/bytes.

Corrected spelling of option in apswtrace and only output CURSORFROM if SQL tracing is on.

3.6.14.2-r1

Updated test code because SQLite 3.6.15 returns a different error code on trying to register a function with too many arguments (see SQLite ticket #3875).

3.6.14.1-r1

Changed some internal symbol names so they won’t clash with similar new ones used by SQLite in the amalgamation.

Added apsw.using_amalgamation so you can tell if APSW was compiled using the SQLite amalgamation. Using the amalgamation means that SQLite shared libraries are not used and will not affect your code.

Added a checksums file so that when setup.py downloads SQLite, we know it hasn’t been tampered with. (The --fetch-sqlite argument can be used to automatically download SQLite.)

3.6.13-r1

Added SQLITE_LOCKED_SHAREDCACHE extended error code.

Updated tests as the VFS delete error handling code in SQLite now returns the same high level error code between Windows and non-Windows.

The CHM format help file produced by the Windows HTML Help Compiler is viewable again under Windows HTML Help Viewer.

3.6.11-r1

You can now use the hot backup functionality introduced in SQLite 3.6.11.

Updated a VFS test to reflect changes in SQLite underlying error handling. (Previously SQLite almost always returned FullError on any write that had an error but now returns SQLError.)

Changed close methods so that Connections can be released earlier.

In prior releases a closed cursor could still be used (reincarnated). That is no longer the case and you will get CursorClosedError.

3.6.10-r1

You can use the database as a context manager as defined in PEP 0343. When you enter the with block a transaction is started. If the block finishes with an exception then the transaction is rolled back, otherwise it is committed. See Connection.__enter__() for an example.

Behind the scenes the savepoint functionality introduced in SQLite 3.6.8 is used. Consequently Connection with blocks can be nested. If you use Connection level execution tracers then they will be called with the savepoint SQL statements.
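
The behaviour can be sketched with the standard library's sqlite3 module, whose connection context manager works the same way at the top level (apsw's version additionally uses savepoints, so its blocks can nest):

```python
import sqlite3

# The with block commits on success and rolls back on an exception.
con = sqlite3.connect(":memory:")
con.execute("create table t(x)")
try:
    with con:
        con.execute("insert into t values(1)")
        raise ValueError("abort")   # forces a rollback of the insert
except ValueError:
    pass
count = con.execute("select count(*) from t").fetchone()[0]
assert count == 0   # the insert was rolled back
```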

You can also use blobs as a context manager which ensures it is always closed when finished using it. See blob.__enter__() for an example.

Added constants:

  • SQLITE_SAVEPOINT (authorizer code)
  • SQLITE_IOERR_CLOSE (extended result code)
  • SQLITE_IOERR_DIR_CLOSE (extended result code)
  • New mapping: SQLITE_FCNTL_LOCKSTATE, SQLITE_GET_LOCKPROXYFILE, SQLITE_SET_LOCKPROXYFILE, SQLITE_LAST_ERRNO. SQLite does not document the purpose of these except the first one.

Updated Virtual File System (VFS) test code. SQLite’s routines that call VFSFile.xTruncate() used to ignore errors but now return an error to the caller. VFSFile.xFileControl() is now called so a user implemented one must call any base it inherits from for SQLite to function normally.

Updated the xDlSym VFS routine to have the different but compatible type signature as changed in SQLite 3.6.7 to deal with pedantic compiler warnings.

Fixed bug in apswtrace that could result in poorly formatted times. Leading comments are also stripped for queries printed in the final reports. You can also request subsets of the reports.

The speedtest script will now fallback to the Python builtin sqlite3 module if it can’t find an externally installed pysqlite.

3.6.6.2-r1

Windows binary download for Python 3.0 is available.

Various changes in data structures and containers to reduce code size.

Changed the code to handle SQLite errors to only use Python functionality and no operating system functionality (thread local storage). This also addresses APSW issue 36 where Vista was not binary compatible with XP. Thanks to Rudolf Gaertner for assistance in detecting and diagnosing this issue.

Connections, cursors and blobs can be used by weak references.
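
The weak reference behaviour is standard Python, shown here with a stand-in class (an apsw Connection, cursor or blob can be the referent in the same way):

```python
import weakref

class Resource:
    # Stand-in object; not part of apsw.
    pass

obj = Resource()
ref = weakref.ref(obj)      # does not keep obj alive
assert ref() is obj
del obj                     # once the last strong reference goes away...
assert ref() is None        # ...the weak reference returns None
```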

You can now install Connection wide execution and row tracers.

The callbacks for execution and row tracers have a different signature to include the cursor the execution or row happened on. This is a backwards incompatible change. See tracing for details.

Due to popular demand, added Cursor.fetchall(). This is a longer way of typing list(cursor).

Added attributes to the Connection class - filename, open_flags and open_vfs. These let you track how the database was opened.

Added a apswtrace script to allow easy SQL tracing without having to modify your code.

Revert to using older SQLite APIs in order to work around SQLite ticket #2158. (This also saves a little bit of SQLite memory usage.) The user visible effect was that you could get different exceptions and error text depending on whether a query was already in the statement cache or if you were multi-threading. As an example, if you have a query that used an unknown collation then SQLite’s prepare returns SQLITE_ERROR with error text about the bad collation. If a query had already been prepared, the collation then removed and the query run again, the new SQLite routines return SQLITE_SCHEMA and generic schema changed error text. Changing user defined functions could also cause a previously correct query to become invalid.

3.6.5-r1

The distribution now includes a speedtest script. You can use this to see how APSW performs relative to pysqlite, or to track performance differences between SQLite versions. The underlying queries are derived from SQLite’s speed test

The statement cache was completely rewritten. It uses less memory and scales significantly better.

It was possible to get a deadlock between the Python GIL and the SQLite database mutex when using the same Connection across multiple threads. Fixed by releasing the GIL in more places and adding a test that inspects the source to verify GIL/mutex handling. Thanks to amicitas for reporting this as APSW issue 31

SQLite’s API has been extended in 3.6.5 so that errors can be retrieved in a thread safe manner. APSW now uses this API.

As a consequence of the prior two changes it is now possible and safe to use the same Connection across as many threads as you want concurrently.

Documentation is now done using Sphinx which was adopted by Python 2.6 and 3. This has allowed for richer documentation and more output formats such as PDF and Windows CHM format.

The binary distribution for Windows includes the full text search (FTS) and Rtree extensions. See also Additional setup.py flags.

The source structure and files were reorganized to make it clearer where things are implemented and to make automatic extraction of documentation easier.

3.6.3-r1

You can now write your own Virtual File System (VFS) in Python. You can also inherit from an existing VFS making it easy to augment or override small bits of behaviour without having to code everything else. See the example where database files are obfuscated by XORing their contents.
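
The core of the XOR obfuscation idea can be sketched in a few lines; the transform is its own inverse, so a VFSFile subclass can apply it symmetrically in both its read and write methods (the key value here is arbitrary):

```python
def xor_obfuscate(data, key=0xA5):
    # XOR each byte with the key; applying the same transform twice
    # restores the original data, so one function serves for both
    # reading and writing.
    return bytes(b ^ key for b in data)

page = b"SQLite format 3\x00"
scrambled = xor_obfuscate(page)
assert scrambled != page
assert xor_obfuscate(scrambled) == page   # round-trips back to the original
```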

setup.py now takes an optional --fetch-sqlite[=ver] argument to automatically download and use the latest SQLite amalgamation (or a specified version). On non-Windows platforms it will also work out what compile flags SQLite needs (for example HAVE_USLEEP, HAVE_LOCALTIME_R). Several other options to setup.py are also available to control enabling/omitting certain features and functionality. See building for further details.

APSW checks that SQLite was compiled to be threadsafe.

Added new constants:

  • SQLITE_IOERR_ACCESS, SQLITE_IOERR_CHECKRESERVEDLOCK and SQLITE_IOERR_LOCK extended result codes
  • SQLITE_OPEN_NOMUTEX and SQLITE_OPEN_FULLMUTEX open flags
  • Several new SQLITE_CONFIG and SQLITE_STATUS codes

Wrapped several new SQLite APIs:

The following experimental APIs are not wrapped as there is nothing useful you can do with them (yet):

Restored prior behaviour regarding Python ints and longs returning int for numbers fitting in signed 32 bit. This only affects Python 2 as Python 3 uses long exclusively. Thanks to Joe Pham for reporting this as APSW issue 24

Added Connection.getsqlite3pointer() method to help with APSW issue 26

3.5.9-r2

APSW now works with Python 3 (you need 3.0b1 or later).

(APSW issue 17) Removed the SQLITE_MAX_* constants since they could be unreliable (eg APSW can’t tell what a shared library was compiled with). A workaround is documented in Connection.limit().

3.5.9-r1

APSW is now hosted at https://code.google.com/p/apsw

You can use this with SQLite 3.5.9 onwards.

SQLite now provides the source all amalgamated into one file which improves performance and makes compilation and linking of SQLite far easier. The build instructions are updated.

SQLITE_COPY authorizer code and SQLITE_PROTOCOL error code are no longer used by SQLite, but the values are left in apsw for backwards compatibility.

Added the SQLITE_IOERR_DELETE, SQLITE_IOERR_BLOCKED and SQLITE_IOERR_NOMEM extended error codes.

Connection.interrupt() can be called from any thread.

SQLite has implementation limits on string and blob lengths (roughly constrained to fitting within a signed 32 bit integer - less than 2GB) which weren’t checked. Using a 64 bit Python 2.5+ (as I do) it would have been possible to destroy memory and crash the program. Consequently APSW has length checks to ensure it doesn’t happen. SQLite now has further limits checking which cover other things as well such as maximum number of virtual machine opcodes, maximum number of variables etc. These are very useful if you are taking in SQL statements from elsewhere. Call Connection.limit()

A rename method was added for virtual tables.

SQLite 3.5 removed the requirement that all operations on a connection be done in the same thread. Consequently all code that enforced the restriction has been removed from APSW.

You no longer have to call Connection.close(). This was previously a requirement to ensure that the correct thread was used (destructors otherwise run in any thread). It is however still a good idea to do so since you can catch exceptions when close is called but not if you let the destructor do the closing.

SQLite now has incremental blob I/O.

APSW issue 4 which could lead to generic error messages was fixed in SQLite 3.5.9.

Fixed APSW issue 1 error in example code for virtual tables which caused filename errors on Windows.

Fixed APSW issue 15 releasing the GIL around calls to sqlite3_prepare.

Fixed APSW issue 7 ensuring that extension module filenames are converted to utf8.

Use the sqlite3_open_v2 interface which allows specifying which vfs to use. This release does not allow you to write your own vfs as the SQLite vfs interface is being changed for SQLite 3.6.

Used new SQLite functions that keep track of when virtual tables and collations are no longer used so they can be released. Previously APSW also had to keep track, duplicating effort.

Improved test coverage a few more percent.

The statement cache now defaults to the same number of entries as pysqlite (100). You can however specify more or less as needed.

Connection.collationneeded() was implemented.

3.3.13-r1

As of this release, APSW is now co-hosted with pysqlite meaning there is one site to go to for your Python SQLite bindings. (Both projects subsequently moved to Google Code.)

You can use this with SQLite 3.3.13 onwards. There were no API changes in SQLite 3.3.10 to 3.3.13 although some internal bugs were fixed and the 3.3.13 release is recommended over the earlier version.

Thanks to Ed Pasma for highlighting these issues:

  • Connection.interrupt() can be safely called from any thread.
  • Empty statements or those consisting entirely of whitespace do not cause misuse errors (internally SQLite started returning NULL pointers for those statements, and sqlite3_step didn’t like being passed the NULL pointer).
  • Changed special handling of SQLITE_BUSY error to be the same as other errors. The special handling previously let you restart on receiving busy, but also hung onto statements which could result in other statements getting busy errors.

3.3.10-r1

You can use this with SQLite 3.3.10 onwards.

Added a statement cache that works in conjunction with the sqlite3_prepare_v2 API. A few issues were exposed in SQLite and hence you must use SQLite 3.3.10 or later.

3.3.9-r1

You can use this with SQLite 3.3.9 onwards.

SQLite added sqlite3_prepare_v2 API. The net effect of this API update is that you will not get SQLITE_SCHEMA any more. SQLite will handle it internally.

3.3.8-r1

You can use this with SQLite 3.3.8 onwards. There was an incompatible API change for virtual tables in SQLite 3.3.8.

Virtual tables updated for the new API.

You must call close() on connections. You can also call close() on cursors, but it usually isn’t necessary.

All strings are returned as unicode.

PyErr_WriteUnraisable() was used for errors in destructors. Unfortunately it is almost completely useless, merely printing str() of the object and exception. This doesn’t help in finding where in your code the issue arose so you could fix it. An internal APSW implementation generates a traceback and calls sys.excepthook(), the default implementation of which prints the exception and the traceback to sys.stderr.

Note

The line number reported in the traceback is often off by 1. This is because the destructors run “between” lines of code and so the following line is reported as the current location.

Authorizer codes SQLITE_CREATE_VTABLE, SQLITE_DROP_VTABLE and SQLITE_FUNCTION added.

SQLite extended result codes are available - see Exceptions for more detail.

Connection.hooks added so you can easily register functions, virtual tables or similar items with each Connection as it is created.

Added mapping dicts which makes it easy to map the various constants between strings and ints.

3.3.7-r1

Never released as 3.3.8 came along.

You can use this release against SQLite 3.3.7. There were no changes in the SQLite 3.3.6 API from 3.3.5. In SQLite 3.3.7 an API was added that allowed removing a chunk of duplicate code. Also added were Virtual Tables and loading of external modules (shared libraries).

APSW had the following changes:

  • Even more test cases added (you can’t have too many tests :-)
  • When exceptions occur, dummy frames are added to the traceback in the C code. This makes it a lot easier to tell why code was called if you encounter an exception. See augmented stack traces for details.
  • String values (traditional and Unicode) work correctly if they have embedded NULL characters (ie not truncated at the NULL).
  • You can load SQLite shared library extensions.

3.3.5-r1

You can use this release against any release of SQLite 3 from 3.3.5 onwards. A bug was also fixed when reporting an error during the cleanup of an aggregate function if there had also been an error in the step function. (PyErr_WriteUnraisable(NULL) crashed on some versions of Python but not others.)

SQLite added several functions for returning metadata about result column sets. You have to compile SQLite with SQLITE_ENABLE_COLUMN_METADATA to get them. This is not the default for SQLite. I don’t believe these are generally useful except in some corner cases and so they aren’t wrapped. However please shout if you do need them. Note that Cursor.getdescription() will already give you generally useful information. (Also see the pragmas)

The test code has been converted into using the unittest module. Run python tests.py -v to get the tests run. There should be no errors.

Updated code to work correctly with new Py_ssize_t introduced in Python 2.5. See 64 bit hosts, Python 2.5+ for more details on how Python and SQLite handle 64 bit sized items.

The following functions were added to SQLite and are wrapped. They are all functions defined on the Connection object or apsw module:

3.2.7-r1

You can use this release against any release of SQLite 3.

SQLite 3.2.7 has several bug fixes. The undocumented experimental function sqlite3_profile() was added, but it is not present in apsw yet.

The author of pysqlite has improved it considerably since APSW was originally written. The differences section has been updated to reflect those improvements in pysqlite.

SQLITE_INTERNAL and SQLITE_NOTFOUND error codes are not used according to the 3.2.7 header file. They are still present in APSW for backwards compatibility.

Changed the build instructions so configure is run on non-Windows platforms.

Fixed a bug caused by an overly helpful error message trying to tell you how many bindings you supplied that crashed if you didn’t supply any.

Changed when an error in the step function for an aggregate is reported due to limitations in SQLite.

3.2.2-r1

You can use this release against any release of SQLite 3.

SQLite 3.2.2 API removed sqlite3_global_recover(). That function was not wrapped in APSW. Note that SQLite 3.2.2 contains a bug fix that applies when you use 64 bit integer primary keys (32 bit ints are fine).

3.2.1-r1

You can use this release against any release of SQLite 3.

There are no changes in APSW except to correct an error in the example code (collations are registered against the connection not the cursor).

SQLite 3.2.1 had one addition in the stable C API, which was a new function named sqlite3_global_recover(). That function is not applicable for wrapping in APSW.

3.1.3-r1

You can use this release against any release of SQLite 3.

The text string returned by apsw.Error used to say “apsw.APSWException” and has been changed to “apsw.Error”. This is purely cosmetic and helps make clear what the class is. (The old string was what the original class name was in an earlier version of the code.)

Added SQLITE_ALTER_TABLE and SQLITE_REINDEX constants for the authorizer function. (These constants were introduced in SQLite 3.1.3).

Changed various C++-isms into standard C (eg // comments and the placing of some CHECK_THREAD macro calls).

Added module level function apswversion() which returns the version of APSW.

SQLite 3.1.3 had no changes in the stable C API other than what is mentioned above. There were some new experimental functions added which are not currently documented on the SQLite website, which are not wrapped by APSW. Please contact me if you believe they will remain in SQLite and you would like them wrapped:

  • sqlite3_sleep() An alternative function which sleeps for a specified number of milliseconds can be provided. By default SQLite just uses the standard operating system call.
  • sqlite3_expired() This function is internal to statement execution. It would apply to the implementation of Cursor.executemany() and could in theory provide a marginal improvement in performance.
  • A global variable sqlite3_temp_directory can be used before any databases are opened to set where temporary files are created. By default SQLite just uses the standard operating system mechanisms.

3.0.8-r3

There are no functional changes. The only changes were to correct some variable names in the example code (they were cut and pasted from the test code which used different names) and to make the source zip file extract its contents into a sub-directory which is the more typical way of packaging that sort of thing.

3.0.8-r2

All remaining functionality in the C API for SQLite 3.0.8 is now available.

Finished this documentation.

3.0.8-r1

Initial release


APSW documentation

APSW 3.8.11.1-r1 released 20 August 2015

Use with SQLite 3.8.11 or later, CPython 2.3 or later including CPython 3.

APSW provides an SQLite 3 wrapper that provides the thinnest layer over the SQLite database library possible. Everything you can do from the SQLite C API, you can do from Python. Although APSW looks vaguely similar to the PEP 249 (DBAPI), it is not compliant with that API because instead it works the way SQLite 3 does. (pysqlite is DBAPI compliant - see the differences between apsw and pysqlite 2).

APSW is hosted at https://github.com/rogerbinns/apsw



Virtual Tables

Virtual Tables are a feature introduced in SQLite 3.3.7. They let a developer provide an underlying table implementation, while still presenting a normal SQL interface to the user. The person writing SQL doesn’t need to know or care that some of the tables come from elsewhere.

Some examples of how you might use this:

  • Translating to/from information stored in other formats (eg a csv/ini format file)
  • Accessing the data remotely (eg you could make a table that backends into Amazon’s API)
  • Dynamic information (eg currently running processes, files and directories, objects in your program)
  • Information that needs reformatting (eg if you have complex rules about how to convert strings to/from Unicode in the dataset)
  • Information that isn’t relationally correct (eg if you have data that has ended up with duplicate “unique” keys with code that dynamically corrects it)
  • There are other examples on the SQLite page

You need to have 3 types of object. A module, a virtual table and a cursor. These are documented below. You can also read the SQLite C method documentation. At the C level, they are just one set of methods. At the Python/APSW level, they are split over the 3 types of object. The leading x is omitted in Python. You can return SQLite error codes (eg SQLITE_READONLY) by raising the appropriate exceptions (eg ReadOnlyError). exceptionfor() is a useful helper function to do the mapping.
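
The exception-to-error-code convention can be sketched as follows; ReadOnlyError here is a hypothetical stand-in for apsw.ReadOnlyError so the sketch runs without apsw installed, and in real code APSW translates the raised exception into SQLITE_READONLY for SQLite:

```python
class ReadOnlyError(Exception):
    # Stand-in for apsw.ReadOnlyError.
    pass

class FrozenTable:
    # A virtual table object that rejects modification by raising the
    # exception corresponding to SQLITE_READONLY.
    def UpdateInsertRow(self, rowid, fields):
        raise ReadOnlyError("this table cannot be modified")

table = FrozenTable()
try:
    table.UpdateInsertRow(None, ("value",))
    raised = False
except ReadOnlyError:
    raised = True
assert raised
```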

VTModule class

class VTModule

Note

There is no actual VTModule class - it is just shown this way for documentation convenience. Your module instance should implement all the methods documented here.

A module instance is used to create the virtual tables. Once you have a module object, you register it with a connection by calling Connection.createmodule():

# make an instance
mymod=MyModuleClass()

# register the vtable on connection con
con.createmodule("modulename", mymod)

# tell SQLite about the table
con.cursor().execute("create VIRTUAL table tablename USING modulename('arg1', 2)")

The create step is to tell SQLite about the existence of the table. Any number of tables referring to the same module can be made this way. Note the (optional) arguments which are passed to the module.

VTModule.Connect(connection, modulename, databasename, tablename, *args) → [ sql string, table object ]

The parameters and return are identical to Create(). This method is called when there are additional references to the table. Create() will be called the first time and Connect() after that.

The advice is to create caches, generated data and other heavyweight processing in Create() and then find and reuse that on the subsequent Connect() calls.

The corresponding call is VTTable.Disconnect(). If you have a simple virtual table implementation, then just set Connect() to be the same as Create():

class MyModule:

    def Create(self, connection, modulename, databasename, tablename, *args):
        # do lots of hard work
        ...

    Connect = Create

VTModule.Create(connection, modulename, databasename, tablename, *args) → [ sql string, table object ]

Called when a table is first created on a connection.

Parameters:
  • connection – An instance of Connection
  • modulename – The string name under which the module was registered
  • databasename – The name of the database. This will be main for directly opened files and the name specified in ATTACH statements.
  • tablename – Name of the table the user wants to create.
  • args – Any arguments that were specified in the create virtual table statement.
Returns:

A list of two items. The first is a SQL create table statement. The columns are parsed so that SQLite knows what columns and declared types exist for the table. The second item is an object that implements the table methods.

The corresponding call is VTTable.Destroy().
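
A minimal Create() can be sketched as follows. The class names, column names and row data here are illustrative assumptions, not part of APSW; a real table object would implement the VTTable methods documented below.

```python
# Sketch of a module whose Create() returns the two required items:
# a create table statement (parsed by SQLite for column information)
# and an object implementing the table methods.  Names are illustrative.

class ConstantTable:
    """Placeholder table object - a real one implements the VTTable methods."""
    def __init__(self, rows):
        self.rows = rows

class ConstantModule:
    def Create(self, connection, modulename, databasename, tablename, *args):
        # First item: SQLite parses this to learn column names and types
        sql = "create table placeholder(name TEXT, price REAL)"
        # Second item: the object implementing the table methods
        return sql, ConstantTable([("widget", 74.99)])

    # Simple implementations can reuse Create for Connect
    Connect = Create
```

After registering an instance with Connection.createmodule(), each `create virtual table ... using modulename(...)` statement results in one Create() call.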

VTTable class

class VTTable

Note

There is no actual VTTable class - it is just shown this way for documentation convenience. Your table instance should implement the methods documented here.

The VTTable object contains knowledge of the indices, makes cursors and can perform transactions.

A virtual table is structured as a series of rows, each of which has the same columns. The value in a column must be one of the 5 supported types, but the type can be different between rows for the same column. The virtual table routines identify the columns by number, starting at zero.

Each row has a unique 64 bit integer rowid with the Cursor routines operating on this number, as well as some of the Table routines such as UpdateChangeRow.

VTTable.Begin()

This function is used as part of transactions. You do not have to provide the method.

VTTable.BestIndex(constraints, orderbys)

This is a complex method. To get going initially, just return None and you will be fine. Implementing this method reduces the number of rows scanned in your table to satisfy queries, but only if you have an index or index like mechanism available.

Note

The implementation of this method differs slightly from the SQLite documentation for the C API. You are not passed “unusable” constraints. The argv/constraintarg positions are not off by one. In the C API, you have to return position 1 to get something passed to VTCursor.Filter() in position 0. With the APSW implementation, you return position 0 to get Filter arg 0, position 1 to get Filter arg 1 etc.

The purpose of this method is to ask if you have the ability to determine if a row meets certain constraints without visiting every row. An example constraint is price > 74.99. In a traditional SQL database, queries with constraints can be sped up with indices. If you return None, then SQLite will visit every row in your table and evaluate the constraint itself. Your index choice returned from BestIndex will also be passed to the Filter() method on your cursor object. Note that SQLite may call this method multiple times trying to find the most efficient way of answering a complex query.

constraints

You will be passed the constraints as a sequence of tuples containing two items. The first item is the column number and the second item is the operation.

Example query: select * from foo where price > 74.99 and quantity<=10 and customer='Acme Widgets'

If customer is column 0, price column 2 and quantity column 5 then the constraints will be:

(2, apsw.SQLITE_INDEX_CONSTRAINT_GT),
(5, apsw.SQLITE_INDEX_CONSTRAINT_LE),
(0, apsw.SQLITE_INDEX_CONSTRAINT_EQ)

Note that you do not get the value of the constraint (ie “Acme Widgets”, 74.99 and 10 in this example).

If you do have any suitable indices then you return a sequence the same length as constraints with the members mapping to the constraints in order. Each can be one of None, an integer or a tuple of an integer and a boolean. Conceptually SQLite is giving you a list of constraints and you are returning a list of the same length describing how you could satisfy each one.

Each list item returned corresponding to a constraint is one of:

None
This means you have no index for that constraint. SQLite will have to iterate over every row for it.
integer
This is the argument number for the constraintargs being passed into the Filter() function of your cursor (the values “Acme Widgets”, 74.99 and 10 in the example).
(integer, boolean)
By default SQLite will check what you return. For example if you said that you had an index on price, SQLite will still check that each row you returned is greater than 74.99. If you set the boolean to False then SQLite won’t do that double checking.

Example query: select * from foo where price > 74.99 and quantity<=10 and customer=='Acme Widgets'. customer is column 0, price column 2 and quantity column 5. You can index on customer equality and price.

Constraints (in)                        Constraints used (out)
(2, apsw.SQLITE_INDEX_CONSTRAINT_GT),   1,
(5, apsw.SQLITE_INDEX_CONSTRAINT_LE),   None,
(0, apsw.SQLITE_INDEX_CONSTRAINT_EQ)    0

When your Filter method in the cursor is called, constraintarg[0] will be “Acme Widgets” (customer constraint value) and constraintarg[1] will be 74.99 (price constraint value). You can also return an index number (integer) and index string to use. SQLite attaches no significance to these values - they are passed as is to your VTCursor.Filter() method as a way for the BestIndex method to let the Filter() method know which of your indices or similar mechanism to use.

orderbys

The second argument to BestIndex is a sequence of orderbys because the query requested the results in a certain order. If your data is already in that order then SQLite can give the results back as is. If not, then SQLite will have to sort the results first.

Example query: select * from foo order by price desc, quantity asc

Price is column 2, quantity column 5 so orderbys will be:

(2, True),  # True means descending, False is ascending
(5, False)

Return

You should return up to 5 items. Items not present in the return have a default value.

0: constraints used (default None)
This must either be None or a sequence the same length as constraints passed in. Each item should be as specified above saying if that constraint is used, and if so which constraintarg to make the value be in your VTCursor.Filter() function.
1: index number (default zero)
This value is passed as is to VTCursor.Filter()
2: index string (default None)
This value is passed as is to VTCursor.Filter()
3: orderby consumed (default False)
Return True if your output will be in exactly the same order as the orderbys passed in
4: estimated cost (default a huge number)
Approximately how many disk operations are needed to provide the results. SQLite uses the cost to optimise queries. For example if the query includes A or B and A has 2,000 operations and B has 100 then it is best to evaluate B before A.

A complete example

Query is select * from foo where price>74.99 and quantity<=10 and customer=="Acme Widgets" order by price desc, quantity asc. Customer is column 0, price column 2 and quantity column 5. You can index on customer equality and price.

BestIndex(constraints, orderbys)

constraints= ( (2, apsw.SQLITE_INDEX_CONSTRAINT_GT),
               (5, apsw.SQLITE_INDEX_CONSTRAINT_LE),
               (0, apsw.SQLITE_INDEX_CONSTRAINT_EQ)  )

orderbys= ( (2, True), (5, False) )

# You return

( (1, None, 0),   # constraints used
  27,             # index number
  "idx_pr_cust",  # index name
  False,          # results are not in orderbys order
  1000            # about 1000 disk operations to access index
)

# Your Cursor.Filter method will be called with:

27,              # index number you returned
"idx_pr_cust",   # index name you returned
"Acme Widgets",  # constraintarg[0] - customer
74.99            # constraintarg[1] - price
VTTable.Commit()

This function is used as part of transactions. You do not have to provide the method.

VTTable.Destroy()

The opposite of VTModule.Create(). This method is called when the table is no longer used. Note that you must always release resources even if you intend to return an error, as the method will not be called again on error. SQLite may also leak memory if you return an error.

VTTable.Disconnect()

The opposite of VTModule.Connect(). This method is called when a reference to a virtual table is no longer used, whereas VTTable.Destroy() is called when the table itself is no longer used.

VTTable.FindFunction(name, nargs)

Called to find if the virtual table has its own implementation of a particular scalar function. You should return the function if you have it, else return None. You do not have to provide this method.

This method is called while SQLite is preparing a query. If a query is in the statement cache then FindFunction won’t be called again. If you want to return different implementations for the same function over time then you will need to disable the statement cache.

Parameters:
  • name – The function name
  • nargs – How many arguments the function takes
VTTable.Open()

Returns a cursor object.

VTTable.Rename(newname)

Notification that the table will be given a new name. If you return without raising an exception, then SQLite renames the table (you don’t have to do anything). If you raise an exception then the renaming is prevented. You do not have to provide this method.

VTTable.Rollback()

This function is used as part of transactions. You do not have to provide the method.

VTTable.Sync()

This function is used as part of transactions. You do not have to provide the method.

VTTable.UpdateChangeRow(row, newrowid, fields)

Change an existing row. You may also need to change the rowid - for example if the query was UPDATE table SET rowid=rowid+100 WHERE ...

Parameters:
  • row – The existing 64 bit integer rowid
  • newrowid – If not the same as row then also change the rowid to this.
  • fields – A tuple of values the same length and order as columns in your table
VTTable.UpdateDeleteRow(rowid)

Delete the row with the specified rowid.

Parameters:rowid – 64 bit integer
VTTable.UpdateInsertRow(rowid, fields) → newrowid

Insert a row with the specified rowid.

Parameters:
  • rowid – None if you should choose the rowid yourself, else a 64 bit integer
  • fields – A tuple of values the same length and order as columns in your table
Returns:

If rowid was None then return the id you assigned to the row. If rowid was not None then the return value is ignored.
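
Taken together, the three Update methods for a table whose rows live in a Python dict keyed by rowid might be sketched like this. This is an illustrative sketch, not APSW code; error handling and SQLite wiring are omitted.

```python
# Sketch of UpdateInsertRow / UpdateChangeRow / UpdateDeleteRow for a
# table backed by a dict mapping rowid -> tuple of column values.

class DictBackedTable:
    def __init__(self):
        self.rows = {}      # rowid -> tuple of column values
        self.nextid = 1     # next rowid to hand out

    def UpdateInsertRow(self, rowid, fields):
        if rowid is None:               # we get to pick the rowid
            rowid = self.nextid
            self.nextid += 1
        self.rows[rowid] = fields
        return rowid                    # ignored unless rowid was None

    def UpdateChangeRow(self, row, newrowid, fields):
        if newrowid != row:             # the rowid itself is changing
            del self.rows[row]
        self.rows[newrowid] = fields

    def UpdateDeleteRow(self, rowid):
        del self.rows[rowid]
```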

VTCursor class

class VTCursor

Note

There is no actual VTCursor class - it is just shown this way for documentation convenience. Your cursor instance should implement all the methods documented here.

The VTCursor object is used for iterating over a table. There may be many cursors simultaneously so each one needs to keep track of where it is.

VTCursor.Close()

This is the destructor for the cursor. Note that you must clean up. The method will not be called again if you raise an exception.

VTCursor.Column(number)

Requests the value of the specified column number of the current row. If number is -1 then return the rowid.

Returns:Must be one of the 5 supported types
VTCursor.Eof() → bool

Called to ask if we are at the end of the table. It is called after each call to Filter and Next.

Returns:False if the cursor is at a valid row of data, else True

Note

This method can only return True or False to SQLite. If you have an exception in the method or provide a non-boolean return then True (no more data) will be returned to SQLite.

VTCursor.Filter(indexnum, indexname, constraintargs)

This method is always called first to initialize an iteration to the first row of the table. The arguments come from the BestIndex() method in the table object with constraintargs being a tuple of the constraints you requested. If you always return None in BestIndex then indexnum will be zero, indexname will be None and constraintargs will be empty.

VTCursor.Next()

Move the cursor to the next row. Do not raise an exception if there is no next row. Instead return True when Eof() is subsequently called.

If you said you had indices in your VTTable.BestIndex() return, and they were selected for use as provided in the parameters to Filter() then you should move to the next appropriate indexed and constrained row.

VTCursor.Rowid() → 64 bit integer

Return the current rowid.
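
Putting the cursor methods together, a cursor iterating over an in-memory list of rows might look like the sketch below. The backing `rows` attribute on the table object is an assumption for illustration; a real implementation would also use the Filter() arguments to narrow the rows visited.

```python
# Sketch of a complete cursor over a table object assumed to expose a
# list of row tuples as table.rows.  Illustrative only, not APSW itself.

class ListCursor:
    def __init__(self, table):
        self.table = table

    def Filter(self, indexnum, indexname, constraintargs):
        # Called first - position at the first row.  A real cursor would
        # use indexnum/indexname/constraintargs as chosen by BestIndex().
        self.pos = 0

    def Eof(self):
        # True once we have run past the last row
        return self.pos >= len(self.table.rows)

    def Rowid(self):
        return self.pos                  # list index doubles as the rowid

    def Column(self, number):
        if number == -1:                 # -1 asks for the rowid
            return self.Rowid()
        return self.table.rows[self.pos][number]

    def Next(self):
        self.pos += 1                    # Eof() reports when we run out

    def Close(self):
        pass                             # nothing to release in this sketch
```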

Troubleshooting virtual tables

Virtual Tables are a relatively recent addition to SQLite and haven’t been widely used yet. They do work well if all your routines work perfectly.

A big help is using the local variables recipe as described in augmented stack traces which will give you more details in errors, and shows an example with the complex BestIndex() function.

You may also find errors compounding. For example if you have an error in the Filter method of a cursor, SQLite then closes the cursor. If you also return an error in the Close method then the first error may mask the second or vice versa.

Note

SQLite may ignore responses from your methods if they don’t make sense. For example in BestIndex, if you set multiple arguments to have the same constraintargs position then your Filter won’t receive any constraintargs at all.

Types — APSW 3.8.11.1-r1 documentation

Types

Read about SQLite 3 types. APSW always maintains the correct type for values, and never converts them to something else. Note however that SQLite may convert types based on column affinity as described. APSW requires that all values supplied are one of the corresponding Python/SQLite types (or a subclass).

Mapping

  • None in Python is NULL in SQLite
  • Python int or long is INTEGER in SQLite. The value represented must fit within a 64 bit signed quantity (long long at the C level) or an overflow exception is generated.
  • Python’s float type is used for REAL in SQLite. (At the C level they are both 8 byte quantities and there is no loss of precision).
  • In Python 2, Python’s string or unicode is used for TEXT supplied to SQLite and all text returned from SQLite is unicode. For Python 3 only unicode is used.
  • For Python 2 the buffer class is used for BLOB in SQLite. In Python 3 the bytes type is used, although you can still supply buffers.
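
A quick round trip makes the mapping concrete. The sketch below uses the standard library sqlite3 module purely so it is runnable anywhere; the same Python/SQLite type mapping applies when going through an APSW Connection and cursor.

```python
import sqlite3

# Insert one value of each supported Python type and read them back.
con = sqlite3.connect(":memory:")
con.execute("create table t(v)")
for value in (None, 42, 3.5, "text", b"blob"):
    con.execute("insert into t values(?)", (value,))

# Each value comes back with its Python type intact:
# NULL -> None, INTEGER -> int, REAL -> float, TEXT -> str, BLOB -> bytes
results = [row[0] for row in con.execute("select v from t")]
types = [type(v) for v in results]
```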

Unicode

All SQLite strings are Unicode. The actual binary representations can be UTF8, or UTF16 in either byte order. APSW uses the UTF8 interface to SQLite which results in the binary string representation in your database defaulting to UTF8 as well. All this is totally transparent to your Python code.

Everywhere strings are used (eg as database values, SQL statements, bindings names, user defined functions) you can use Unicode strings, and in Python 3 must use Unicode. In Python 2, you can also use the bare Python string class, and APSW will automatically call the unicode converter if any non-ascii characters are present.

When returning text values from SQLite, APSW always uses the Python unicode class.

If you don’t know much about Unicode then read Joel’s article. SQLite does not include conversion from random non-Unicode encodings to or from Unicode. (It does include conversion between 8 bit and 16 bit Unicode encodings). Python includes codecs for conversion to or from many different character sets.

If you don’t want to use Unicode and instead want a simple bytes in are the same bytes out then you should only use blobs.

If you want to do manipulation of unicode text such as upper/lower casing or sorting then you need to know about locales. This is because the exact same sequence of characters can sort, upper case and lower case differently depending on where you are. As an example Turkic languages have multiple letter i, German has ß which behaves like ss, and various accents sort differently in different European countries. Fortunately there is a library you can ask to do the right locale specific thing: ICU. A default SQLite compilation only deals with the 26 letter Roman alphabet. If you enable ICU with SQLite then you get locale aware collation and case mapping. See the Building section on how to enable ICU for SQLite with APSW. Note that Python does not currently include ICU support and hence sorting, upper/lower casing etc are limited and do not take locales into account.

In summary, never confuse bytes with strings (which C sadly treats as the same thing). Either always use bytes (and SQLite blobs) for everything or use strings (and SQLite strings) for everything. If you take the latter approach and have to deal with external input/output then you must know what encodings are being used and it is best to convert to Unicode as early as possible on input and late as possible on output.
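
That discipline can be reduced to a small pattern, shown here with plain Python (no APSW involved): decode bytes at the input boundary using the encoding you know the source uses, work with str internally, and encode again only at the output boundary.

```python
# Decode early, encode late.  The encoding (UTF-8 here) is whatever your
# external source actually uses - that knowledge cannot be inferred.
raw = b"caf\xc3\xa9"              # bytes read from a file or socket
text = raw.decode("utf-8")        # -> 'café': work with str internally
shouted = text.upper()            # Unicode-aware (though not locale-aware)
out = shouted.encode("utf-8")     # encode only when writing back out
```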

Tips — APSW 3.8.11.1-r1 documentation

Tips

These tips are based on mailing list postings. You are recommended to read all the documentation as well.

SQLite is different

While SQLite provides a SQL database like many others out there, it is also unique in many ways. Read about the unique features at the SQLite website.

Cursors

SQLite only calculates each result row as you request it. For example if your query returns 10 million rows SQLite will not calculate all 10 million up front. Instead the next row will be calculated as you ask for it.

Cursors on the same Connection are not isolated from each other. Anything done on one cursor is immediately visible to all other Cursors on the same connection. This still applies if you start transactions. Connections are isolated from each other.

Read more about Cursors.
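
The visibility rule can be demonstrated directly. The sketch below uses the standard library sqlite3 module (in autocommit mode, to show the underlying SQLite behaviour) rather than APSW so it is runnable anywhere; cursors on an APSW Connection behave the same way.

```python
import sqlite3

# Two cursors on the SAME connection see each other's work immediately.
con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit
c1, c2 = con.cursor(), con.cursor()
c1.execute("create table t(x)")
c1.execute("insert into t values(1)")
rows = c2.execute("select x from t").fetchall()  # already visible on c2
```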

Bindings

When using a cursor, always use bindings. String interpolation may seem more convenient but you will encounter difficulties. You may feel that you have complete control over all data accessed but if your code is at all useful then you will find it being used more and more widely. The computer will always be better than you at parsing SQL and the bad guys have years of experience finding and using SQL injection attacks in ways you never even thought possible.

The documentation gives many examples of how to use various forms of bindings.
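
For example, bindings keep a value such as O'Brien from ever being parsed as SQL. The sketch below uses the standard library sqlite3 module so it is self-contained; the execute() call takes the same shape in APSW, with both ? positional and :name forms supported.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table users(name, age)")

name, age = "O'Brien", 42   # the quote would break naive interpolation

# Bindings: values are passed separately and never parsed as SQL
con.execute("insert into users values(?, ?)", (name, age))

# The named form works too
row = con.execute("select age from users where name=:n",
                  {"n": "O'Brien"}).fetchone()
```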

Unicode

SQLite only stores text as Unicode. However it relies on SQLite API users to provide valid UTF-8 and does not double check. (APSW only provides valid UTF-8). It is possible using other wrappers and tools to cause invalid UTF-8 to appear in the database which will then cause retrieval errors. You can work around this by using the SQL CAST operator. For example:

SELECT id, CAST(label AS blob) from table

Then proceed to give the Joel Unicode article to all people involved.

Diagnostics

Both SQLite and APSW provide detailed diagnostic information. Errors will be signalled via an exception.

APSW ensures you have detailed information both in the stack trace as well as what data APSW/SQLite was operating on.

SQLite has a warning/error logging facility. To set your own logger use:

def handler(errcode, message):
    errstr = apsw.mapping_result_codes[errcode & 255]
    extended = errcode & ~255
    print("SQLITE_LOG: %s (%d) %s %s" % (message, errcode, errstr,
          apsw.mapping_extended_result_codes.get(extended, "")))

apsw.config(apsw.SQLITE_CONFIG_LOG, handler)

Note

The handler must be set before any other calls to SQLite. Once SQLite is initialised you cannot change the logger - a MisuseError will happen (this restriction is in SQLite not APSW).

This is an example of what gets printed when I used /dev/null as the database name in the Connection and then tried to create a table:

SQLITE_LOG: cannot open file at line 28729 of [7dd4968f23] (14) SQLITE_CANTOPEN
SQLITE_LOG: os_unix.c:28729: (2) open(/dev/null-journal) - No such file or directory (14) SQLITE_CANTOPEN
SQLITE_LOG: statement aborts at 38: [create table foo(x,y);] unable to open database file (14) SQLITE_CANTOPEN

Parsing SQL

Sometimes you want to know what a particular SQL statement does. The SQLite query parser directly generates VDBE byte code and cannot be hooked into. There is however an easier way.

Make a new Connection object making sure the statement cache is disabled (size zero). Install an execution tracer that returns apsw.SQLITE_DENY which will prevent any queries from running. Install an authorizer.

Then call Cursor.execute() on your query. Your authorizer will then be called (multiple times if necessary) with details of what the query does including expanding views and triggers that fire. Finally the execution tracer will fire. If the query string had multiple statements then the execution tracer lets you know how long the first statement was.

Unexpected behaviour

Occasionally you may get different results than you expected. Before littering your code with print, try apswtrace with all options turned on to see exactly what is going on. You can also use the SQLite shell to dump the contents of your database to a text file. For example you could dump it before and after a run to see what changed.

One fairly common gotcha is using double quotes instead of single quotes. (This wouldn’t be a problem if you use bindings!) SQL strings use single quotes. If you use double quotes then it will mostly appear to work, but they are intended to be used for identifiers such as column names. For example if you have a column named a b (a space b) then you would need to use:

SELECT "a b" from table

If you use double quotes and happen to use a string whose contents are the same as a table, alias, column etc then unexpected results will occur.

Customizing cursors

Some developers want to customize the behaviour of cursors. An example would be wanting a rowcount or batching returned rows. (These don’t make any sense with SQLite but the desire may be to make the code source compatible with other database drivers).

APSW does not provide a way to subclass the cursor class or any other form of factory. Consequently you will have to subclass the Connection and provide an alternate implementation of Connection.cursor(). You should encapsulate the APSW cursor - ie store it as a member of your cursor class and forward calls as appropriate. The cursor only has two important methods - Cursor.execute() and Cursor.executemany().

If you want to change the rows returned then use a row tracer. For example you could call Cursor.getdescription() and return a dictionary instead of a tuple:

def row_factory(cursor, row):
    return {k[0]: row[i] for i, k in enumerate(cursor.getdescription())}

# You can also set this on just a cursor
connection.setrowtrace(row_factory)

Busy handling

SQLite uses locks to coordinate access to the database by multiple connections (within the same process or in a different process). The general goal is to have the locks be as lax as possible (allowing concurrency) and when using more restrictive locks to keep them for as short a time as possible. See the SQLite documentation for more details.

By default you will get a BusyError if a lock cannot be acquired. You can set a timeout which will keep retrying or a callback where you decide what to do.
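
The two options look roughly like this (a sketch assuming APSW's documented Connection.setbusytimeout() and Connection.setbusyhandler() methods; the retry limit is an arbitrary illustration):

```python
# Option 1: keep retrying for up to 500 milliseconds before BusyError:
#     db.setbusytimeout(500)
#
# Option 2: decide yourself.  The handler receives the number of prior
# calls for this lock attempt; return True to retry, False to give up
# (which results in BusyError).

def busy_handler(prior_calls):
    return prior_calls < 5      # retry at most 5 times, then give up

# db.setbusyhandler(busy_handler)
```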

Database schema

When starting a new database, it can be quite difficult to decide what tables and fields to have and how to link them. The technique used to design SQL schemas is called normalization. The page also shows common pitfalls if you don’t normalize your schema.

Shared Cache Mode

SQLite supports a shared cache mode where multiple connections to the same database can share a cache instead of having their own. It is not recommended that you use this mode.

A big issue is that busy handling is not done the same way. The timeouts and handlers are ignored and instead SQLITE_LOCKED_SHAREDCACHE extended error is returned. Consequently you will have to do your own busy handling. (SQLite ticket, APSW issue 59)

The amount of memory and I/O saved is trivial compared to Python’s overall memory and I/O consumption. You may also need to tune the shared cache’s memory back up to what it would have been with separate connections to get the same performance.

The shared cache mode is targeted at embedded systems where every byte of memory and I/O matters. For example an MP3 player may only have kilobytes of memory available for SQLite.

Write Ahead Logging

SQLite 3.7 introduces write ahead logging which has several benefits, but also some drawbacks as the page documents. WAL mode is off by default. In addition to turning it on manually for each database, you can also turn it on for all opened databases by using connection_hooks:

def setwal(db):
    db.cursor().execute("pragma journal_mode=wal")
    # custom auto checkpoint interval (use zero to disable)
    db.wal_autocheckpoint(10)

apsw.connection_hooks.append(setwal)

Note that if wal mode can’t be set (eg the database is in memory or temporary) then the attempt to set wal mode will be ignored. The pragma will return the mode in effect. It is also harmless to call functions like Connection.wal_autocheckpoint() on connections that are not in wal mode.

If you write your own VFS, then inheriting from an existing VFS that supports WAL will make your VFS support the extra WAL methods too. (Your VFS will point directly to the base methods - there is no indirect call via Python.)

6,17,8,20,1,6,11,0,7,12,2],pythonscript:11,kernel32:19,tabl:[14,15,16,17,8,18,11,0,20,21],player:6,dumper:0,sever:[8,18,6,21,0,7,2],sqlite_iocap_atomic1k:14,incorrectli:18,perform:[9,10,14,16,17,18,6,0,21],suggest:7,make:[9,10,19,14,21,20,15,16,17,8,18,6,11,0,7,2],couchdb:0,"1f72ac0":11,connectionclosederror:18,complex:[18,10,20],split:[9,10,20],mapping_xshmlock_flag:14,complet:[9,10,13,14,15,16,18,6,11,0,19],mapping_statu:14,sqlite_empti:[18,14],encryptm:20,hand:[20,21],py_object:[16,19],fairli:[6,21],rais:[9,10,14,15,16,5,11,19,7,2],cp437:[9,0],protocolerror:18,tune:6,gdfklhj:11,kept:0,undesir:1,scenario:19,inherit:[9,19,17,18,6,0,20],obfu:20,contact:0,thi:[19,9,10,4,13,5,21,14,15,18,16,17,8,20,1,6,11,0,7,12,2],everyth:[3,9,0,8,12],left:[15,0,5],identifi:[6,11,10,16],just:[9,10,19,14,20,16,8,1,6,11,0,7,5],badfunc:7,nullvalu:9,yet:[15,10,0],languag:[15,12],previous:[16,0,5],jose:0,largefil:5,easi:[9,19,14,17,11,0,7],interfer:14,statementcaches:16,sqlite_constraint_vtab:14,shout:0,els:[10,13,20,16,1,0,19,12,21],save:[9,0,8,6,11,19],applic:[0,4,2],quirk:9,preserv:0,sqlite_dbstatus_cache_hit:[0,14],background:17,sqlite_format:[18,14],measur:[16,7,14],ctime:20,specif:[0,8,2],arbitrari:0,sqlite_open_nomutex:[0,14],manual:[6,8,20],createaggregatefunct:[16,20],hung:0,underli:[10,0,14,16,5,19,21],right:[0,7,12],old:[9,0],deal:[9,19,0,13,7,12],sqlite_detach:14,interv:[6,16],somehow:7,percentag:21,intern:[16,0,8,14],sqlite_dbstatus_cache_us:14,indirect:[6,16],manifest:15,cooper:0,collect:[15,16,0,14],subclass:[6,9,12],buffer:[19,20,5,11,0,7,12,2],sqlite_nomem:[18,14],wibbl:15,foo:[9,10,20,16,5,18,6,11,19,7,21],operationalerror:7,core:19,plu:[9,8],sensibl:1,popular:[16,11,0,7],sqlite_ioerr_short_read:[18,14],promot:0,file93:20,post:6,"super":[16,19,20],file94:20,obj:[16,19],slightli:10,sqlite_upd:[16,14,20],unfortun:[11,0],distinct:[11,9,16],sqlite_config_multithread:14,commit:[10,0,20,15,16,8,21,13,19,2],produc:[0,8],sqlite_busi:[18,16,11,0,14],ppa:[1,0],"float":[9,19,20,12,
2],encod:[11,9,0,12],bound:0,down:[19,8],wrap:[0,16,5,19,7,21],opportun:16,storag:[11,0],nbyte:5,accordingli:19,wal:[19,6,0,16],setprogresshandl:[16,20],wai:[3,9,10,19,5,21,14,15,16,17,8,1,6,11,0,7,2],support:[10,19,14,15,16,8,18,6,0,7,12,2],sqlite3_bind:18,fight:0,avail:[9,10,19,14,20,15,18,16,17,5,1,6,11,0,7,2],xsectors:19,reli:[6,0],fraction:19,basevf:20,gil:[11,0,2],stopiter:2,fork:[0,14],head:20,async_control:17,mycol:16,form:[1,6,11,7],offer:9,forc:[9,15,16,8,13,20,5],auto_vacuum:0,getcompletelin:9,sqlite_intern:[18,0,14],mymod:10,callproc:2,isfil:20,xmlcharrefreplac:[9,0],arrays:2,bugfix:0,sqlite_create_view:14,maximum:[0,14,16,5,21,19,20,2],tell:[10,19,16,17,8,18,0,5],sqlite_recurs:[0,14],xdevicecharacterist:19,fundament:7,autoconf:0,emit:0,featur:[9,10,20,15,18,6,0,7],arahesi:0,sqlite_open_main_db:14,semicolon:9,classic:11,sqlite_interrupt:[18,14],diagnost:[3,9,14,11,0,7],exist:[9,10,19,15,16,8,18,6,0,20,5],ship:8,check:[10,19,14,20,8,1,6,11,0,7],graft:8,assembl:5,readonli:[18,16,0],encrypt:20,connectionid:11,when:[9,10,19,5,21,14,15,18,16,17,8,20,1,6,11,0,7,12,2],entrypoint:16,test:[14,21],roll:[13,16,0],sqlite3_expir:0,sqlite_dbstatus_schema_us:14,intend:[6,10],stringio:[0,20],sqlite_misus:[18,14],mapping_access:14,sqlite_lock_non:[19,14],apswexcept:0,warp:20,consid:[18,19,7,14],sql:[0,14,16,8,13,21],open_flag:[16,11,0],faster:[21,0,7,5],anywher:[11,7],bidirection:0,ignor:[19,10,13,16,8,6,21,0,7,5],time:[11,9,10,19,13,14,21,20,15,16,17,8,18,6,2,0,7,5],push:[9,0],lastrowid:2,backward:0,sqlite3_soft_heap_limit:0,from_address:[16,19],mapping_limit:14,skip:[15,16,11,7,20],consum:[10,7,21],osi:[0,4],signific:10,sqlite_fcntl_rbu:[0,14],row:[15,16,5,21,0,20,2],millisecond:[16,9,0],decid:6,depend:[19,18,16,8,1,11,0,12],million:[15,6],aspn:18,intermedi:8,aspw:[11,12],sqlite_ignor:[16,14,20],x01:20,x00:20,isinst:20,x04:20,softheaplimit:[0,14],sourc:[13,14,21],"__exit__":[16,13,5],string:[9,10,19,14,21,20,15,16,5,18,6,11,0,7,12,2],getautocommit:16,condit:[11,8],word
:[11,9,17],sqlite_constraint_primarykei:14,exact:[14,12],freht:0,administr:9,level:[9,10,19,14,20,16,5,0,7,12],did:[1,19,0],gui:[6,8],iter:[10,21,20,15,11,19,7,2],item:[9,10,14,15,16,8,18,11,0,20,21],unsupport:7,icu:[0,8],team:[0,8],sqlite_open_autoproxi:[0,14],round:[19,13],databasenam:[16,10,20],prevent:[16,6,10,7],permiss:[16,4],sign:[1,16,0,8,12],heavyweight:10,sqlite_iocap_sequenti:14,sqlite_row:14,sqlite_notadb:[18,14],appear:[18,6],"64bit":1,current:[9,10,19,14,15,16,8,0,20,12,5],sqlite_ioerr_rdlock:14,alongsid:8,deriv:[19,0],nuanc:7,honour:20,gener:[0,8,14],sqlite_open_uri:[0,14,20],water:14,slow:8,"__main__":18,address:[19,9,0],along:0,spinni:20,wait:[18,16,11],invis:8,sqlite_ioerr_shmlock:14,checksum:[0,8],errorhandl:2,queue:17,behav:[19,5,12],pysqlit:[3,0,21],extrem:2,nocolour:9,sqlite_perm:[18,14],overrid:[9,0,16,8,19,20],semant:[19,0],have_usleep:0,extra:[6,9,13,8,14],myvf:19,tweak:0,prefer:8,paramat:[16,9],visibl:[15,6,0],instal:[9,14,15,16,8,1,6,11,0],progresshandl:20,sqlite_mismatch:[18,14],xnextsystemcal:19,sqlite_constraint_commithook:14,memori:[14,16,8,0,20,21],visit:10,subvers:0,live:16,handler:[16,8,18,6,11,19,20],triggerorview:20,sqlite_create_temp_view:14,checkout:8,sqlite_reindex:[0,14],peopl:6,lp64:0,clue:21,visual:0,sqlite_open_w:14,setrollbackhook:16,examin:18,forkingviolationerror:[18,14],toobig:[11,20],sqlite_open_cr:[16,14,20],effort:[19,0,7,2],easiest:19,paramon:20,prepar:[10,16,18,11,0,20],pretend:7,uniqu:[15,6,11,10,16],can:[3,19,9,10,13,5,21,14,15,18,16,17,8,20,1,6,11,0,7,12,2],sqlite_savepoint:[0,14],mapping_device_characterist:14,purpos:[10,9,0,14,4],claim:4,encapsul:[0,15,16,6,13,19],sqlite_fcntl_last_errno:[0,14],topic:21,abort:[16,0,8,20],sqlite_sync_norm:14,occur:[18,6,0,8,5],alwai:[9,10,13,20,16,5,18,6,0,7,12,21],multipl:[11,9,10,13,14,21,20,15,16,19,8,18,6,2,0,7,12,5],sqlite_checkpoint_pass:[16,14],modulenam:[10,20],write:[3,9,20,15,16,17,5,18,0,7],till:5,anyon:[1,4],pure:0,bigon:20,sqlite_ioerr_delet:[0,14],map:[3,9,14,15,1
1,0],product:4,max:[11,20],setinputs:2,mac:[19,8,7],"4th":16,xore:[19,0],repeatedli:[19,13],mai:[9,10,4,13,14,21,18,16,8,1,6,11,0,19,12,5],process_unknown_arg:9,get_resource_usag:9,data:[9,10,19,14,20,15,16,5,18,6,0,7,21],grow:16,sqlite_fcntl_size_hint:14,sqlite_ioerr_fstat:14,misrepres:4,deal_with_binary_data:7,mangl:7,inform:[9,10,14,15,8,18,6,0,20,2],sandbox:[19,0],create_funct:7,combin:[16,7],practis:[8,21],xshmlock:[0,14],callabl:[15,16],sqlite_fcntl_sync_omit:14,approv:[0,4],nolfserror:18,sqlite_fcntl_win32_set_handl:[0,14],sqlite_fcntl_persist_w:14,gitignor:20,still:[10,19,15,16,8,6,11,0,7,12,5],pointer:[19,16,0],dynam:10,entiti:9,conjunct:[13,16,0,5],sqlite_open_master_journ:14,group:14,thank:0,setwal:6,instantli:9,platform:[1,16,0,8,14],window:[9,19,14,8,1,0,7],mail:[6,19,21],main:[9,10,13,14,20,16,8,11,0,19,21],vdbe:6,recoveri:19,non:[9,10,19,20,16,8,18,0,7,12],oldlimit:14,savepoint:[16,11,0,7],prereigst:16,filesourc:20,initi:[9,10,14,16,18,0],col1:20,sqlite_limit_:20,now:[11,0,20],discuss:0,nor:[15,9,0],term:16,name:[9,10,19,21,15,16,8,18,6,11,0,20,12,2],sqlite_dbstatus_stmt_us:14,drop:[20,0,8,7],revert:[16,0],separ:[15,9,0],sqlite_fcntl_trac:[0,14],compil:[9,14,17,8,1,11,0,20,12],sqlite_statu:0,makedefault:19,replac:[19,9,0,20],individu:11,continu:[18,16,9,8,20],releasememori:14,newval:16,redistribut:4,significantli:[19,0,7],year:[1,6,0],happen:[9,21,19,15,16,8,18,6,11,0,7,5],py_ssize_t:0,shown:[9,10],abcdefghijklmnopqrstuvwxyz:11,"3rd":16,space:[6,11,20,8,5],createcol:[16,20],db_filenam:[16,0],factori:[16,6,7,20],integr:[9,0],earlier:[18,19,0,7,5],uri_boolean:[19,20],newsiz:19,state:[15,16,9,14],migrat:0,sqlite_sync_dataonli:14,argv:[9,10,14],git:1,theori:[0,8,21],org:[9,4],"byte":[19,9,13,14,20,8,6,2,0,7,12,5],care:[9,10,7],couldn:0,where:[9,10,19,14,15,8,18,6,11,0,20,12,2],insert:[9,10,14,21,20,15,16,5,18,11,7,2],synchron:14,turn:[9,19,14,16,8,18,6,11,0,7],place:[16,9,0,8,7],lax:6,summari:[11,9,7,12],frequent:[16,21],first:[9,10,19,14,15,16,17,8,6,11,
0,7,5],origin:[18,0,8,4,21],directli:[9,10,6,11,0,7],qmark:2,onc:[10,14,15,16,8,6,11,19,5],sqlite_ioerr_gettemppath:[0,14],housekeep:0,yourself:[1,10,9,0,21],reopen:[0,5],sqlite3_sleep:0,happi:[1,0],open:[9,10,4,19,14,15,16,17,5,18,6,11,0,20],size:[0,20,16,5,18,6,11,13,19,21],sqlite_shm_shar:14,given:[11,9,10],silent:[15,21,7,14],convent:19,stuck:7,caught:[18,20],slept:11,sqlite_lock_shar:[19,14],checker:[0,8],conveni:[9,10,8,6,11,19],cope:0,copi:[0,16,5,1,11,13,20],specifi:[9,10,19,16,17,8,1,11,0,5],myupdatehook:20,"short":[6,9,19,8],mostli:6,sqlite_fcntl_get_lockproxyfil:[0,14],than:[9,10,19,14,21,16,8,18,6,11,0,7,5],wide:[10,6,0,14],posix:9,were:[19,9,10,13,14,15,16,8,18,11,0,7,21],posit:[10,5],seri:[1,10],pre:[0,8],command_exit:9,wobbl:15,argument:[9,10,19,14,16,18,11,0,20],dash:[11,9],sqlite_replac:14,deliv:14,pyerr_writeunrais:0,dbapi:15,bitwis:19,date:[1,11,0,7,2],techniqu:6,destroi:[10,0,20],note:[14,16,8,0,5,21],sqlite_ioerr_clos:[0,14],ideal:8,howmani:20,take:[10,0,20,16,17,8,11,13,19,12,21],advis:10,noth:[0,7],sqlite3_vfs_unregist:19,mapping_authorizer_return:14,begin:[9,10,15,5,11,20,2],sure:[6,9,14],trace:[15,16,0,20],normal:[10,19,14,16,5,6,11,0,7,21],track:[10,19,0,8,7],price:10,clearer:0,beta:20,colorama:[9,0],async_run:17,renam:[10,0],sqlite_constraint_uniqu:14,later:[3,14,16,8,0,20],drive:21,quantiti:[19,11,10,12],runtim:[16,8],sqlite_busy_recoveri:14,sqlite_ioerr_fsync:14,width:[9,0],output_html:9,show:[9,10,19,20,17,8,18,6,11,0,7,21],german:12,fts3_parenthesi:[1,8],cheap:15,concurr:[9,8,18,6,11,0,2],fletcher:0,sqlite_iocap_atom:14,corner:0,help:[9,10,19,14,8,1,11,0,21],xml:9,onli:[11,9,10,13,14,21,15,16,19,8,18,6,2,0,7,12,5],explicitli:0,sqlite3_bind_blob:15,"true":[9,10,13,14,20,15,16,8,11,0,19],transact:[10,13,15,16,6,21,0,7,2],activ:[15,11],behind:[15,16,18,0,7,21],rtree:[1,0,8],dict:[15,18,9,0,14],sqlite_limit_variable_numb:14,backspac:20,startswith:20,xdlsym:[19,0],nomemerror:18,variou:[9,14,18,17,8,1,6,11,0,12],get:[19,9,10,13,5,21,14,15,18
,16,17,8,20,1,6,11,0,7,12,2],becaus:[3,9,10,19,15,16,8,1,11,0,12],ssl:0,cannot:[11,9,18,16,8,1,6,2,7,5],requir:[9,4,15,16,17,5,18,0,7,12,21],yield:20,roger:[1,4],arfrev:0,sqlite3_global_recov:0,sqlite_lock_reserv:14,sqlite_limit_function_arg:14,xaccess:[19,14],sqlite_corrupt:[18,14],excepthook:[19,0,14],detect:[9,19,15,8,18,11,0],enumer:[6,20],label:6,getattr:20,sourcedatabasenam:16,between:[3,9,10,19,14,8,18,0,12,21],"import":14,across:[19,14,15,16,11,0,7,21],fcntl:0,assumpt:8,august:3,parent:[16,14],sqlite_iocap_atomic4k:14,screen:9,sqlite_open_deleteonclos:[19,14],sqlite_ioerr_seek:[0,14],cycl:16,etraceback:19,sqlite_iocap_atomic2k:14,come:[1,9,10,8,21],rangeerror:18,inconsist:[18,9],present:[9,10,0,17,19,12],sqlite_lock_exclus:14,mani:[19,9,10,13,14,21,15,16,18,6,11,0,7,12,2],undocu:[0,8],overview:[9,19,20],unittest:0,xfilecontrol:[19,16,0],pop:9,unobfusc:20,colon:[15,11,9,7,14],cancel:20,damag:4,better:[3,0],cursor_execut:18,sqlite_sync_ful:14,taifersar:0,rebuild:0,vfsfile:[3,0,20],mark:[19,14,4],surplu:19,thousand:16,resolut:[0,14],emptyerror:18,upgrad:0,sqlite_ioerr_shmopen:14,sqlite_status_pagecache_overflow:14,"case":[19,15,18,16,8,1,0,12,5],interoper:0,pedant:0,cast:[6,7],invok:[16,11,9,0,14],have_localtime_r:0,margin:0,sqlite3_libvers:14,indexstr:10,stdout:[11,9,20],metric:0,canon:1,worri:15,destin:[13,0],sqlite3_db_readonli:16,bias:5,eras:8,ascii:[9,20,12],"__init__":20,develop:[10,6,0,8],sqlite3async:17,obfusc:[19,0,20],alphabet:[0,12,4],same:[11,9,10,19,13,14,21,15,16,17,8,18,6,2,0,7,12,5],trip:13,binari:[0,8],enable_locking_styl:14,document:[4,16,17,8,18,11,0,2],finish:[15,13,16,0],strconvert:16,closest:19,utf8:[9,0,12],nest:[15,16,0,7,2],assist:0,driver:6,ness:0,capabl:19,formaterror:18,improv:[0,7,14],extern:[19,0,12],appropri:[9,10,13,14,16,8,6,0,19,2],megabyt:21,macro:0,sqlite3_blob_writ:5,justifi:0,sqlite3_interrupt:[18,16],without:[10,4,8,11,0,7],model:[15,18],roughli:[9,0],execut:[16,21,0,8,5],tip:[3,0,14],rest:[11,8],xsleep:19,invalid:[9,15,18
,6,0,7],gjkhgfdsgfd:11,sqlite_delet:[16,14,20],http_proxi:8,speed:[10,0,20,21],obliter:0,sqlite_index_constraint_g:14,constraintarg:10,sqlite_index_constraint_l:[10,14],littl:0,declared_column_typ:15,sqlite_limit_expr_depth:14,blob:[13,14],versa:10,sqlite_fcntl_busyhandl:[0,14],sqlite_fcntl_lockst:[0,14],file20:20,real:[20,12],around:[15,19,6,0],read:[19,9,10,13,20,15,16,5,18,6,11,0,7,12],createmodul:[16,10,20],sqlite3_prepare_v2:[15,11,0],world:[1,20],os_unix:6,threadsaf:[0,14],sqlite_limit_like_pattern_length:14,sqlite3_backup_finish:13,sqlite_ioerr_dir_fsync:14,integ:[9,10,19,14,15,16,18,0,20,12],server:[1,8,20],benefit:[6,7],either:[9,10,15,5,18,0,12],manag:[0,16,5,1,13,7,21],pasma:0,uri_paramet:[19,20],obfuscatedvfsfil:20,ascend:10,testdb:[11,20],rowid:[18,16,10,5,20],gdfjkhg:11,getrowtrac:[15,16],kilobyt:[6,21],exit:[9,14,16,17,5,11,0,20,21],inject:[15,6],sqlite3_update_hook:[16,0],complic:[18,8],overli:0,freed:14,sqlite_notfound:[18,0,14],power:[16,19],garbag:[15,16,0,14],inspect:0,broken:[1,16],sqlite_ioerr_trunc:14,sqlite_fcntl_win32_av_retri:14,findfunct:[16,10],"throw":[11,13],sqlite_drop_temp_view:14,sqlite_fail:14,src:[9,4],executescript:[7,21],sqlite_pragma:14,stand:8,act:[15,9],sqlite_protocol:[18,0,14],utf16:12,backup:14,other:[19,9,10,13,14,21,15,18,16,8,1,6,11,0,7,2],routin:[10,0,5,18,19,20],effici:[10,0],xread:[19,20],vfsname:[19,14,20],surviv:[16,19],sqlite_open_readonli:[19,14],strip:[0,20],sqlite_cantopen_notempdir:14,exceptionfor:[19,10,14],log:[3,16,11,0,14],mingw32:8,interrupterror:[18,16],aren:[0,7],mycommithook:20,sqlite_checkpoint_trunc:[0,14],compliant:[3,7],sqlite_limit_length:[14,20],sqlite3_blob_reopen:5,lot:[18,10,0,8,21],pyerr_displai:19,mismatcherror:18,programmat:9,tupl:[9,10,14,15,16,18,6,11,19,20],regard:0,idea:[16,0,14],sqlite_open_memori:[0,14],baffl:9,procedur:2,sqlite3_set_author:16,gori:0,sqlite_last_errno:0,longer:[10,19,15,18,11,0],"0b1":0,possibl:[3,9,19,15,8,1,6,0,12],"default":[19,9,10,13,14,21,20,18,16,17,8,1,6,11,0,7
,12,5],valueerror:[19,0,5,20],xlock:19,open_vf:[16,0],embed:[6,0],deadlock:[11,0,7],connect:[13,0,5,14],sqlite_limit_worker_thread:[0,14],mapping_config:14,creat:[19,9,10,13,14,20,15,18,16,8,1,6,11,0,7,5],certain:[10,0,8],gaertner:0,intro:9,decreas:19,file:[3,9,21,14,20,18,16,17,8,1,6,11,0,7,5],sqlite_constraint_rowid:[0,14],sqlite_config_lookasid:14,fill:[15,20],incorrect:18,again:[9,10,0,20,16,8,18,13,19],sqlite3_statu:0,googl:0,readinto:[0,5],event:4,sqlite_ioerr_block:[0,14],field:[17,6,9,10,11],cleanup:[10,0,20],spatial:[17,8],writabl:5,pathnam:[19,0],you:[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21],getdescript:[7,15,6,0,20,2],poor:7,sqlite_config_singlethread:14,sequenc:[9,10,15,5,11,20,12],symbol:[19,0],docstr:9,sqlite_warning_autoindex:[0,14],mutlipl:19,sqlite_status_malloc_count:14,sqlite_config_sqllog:[0,14],reduc:[10,11,0,2],deliber:[8,20],enablesharedcach:14,unload:19,directori:[10,0,17,8,6,19,20],descript:[15,9,0,2],scroll:2,repar:0,platter:19,sqlite3_limit:16,represent:[11,9,19,12],all:[19,9,10,13,14,21,20,15,18,16,17,8,1,6,11,0,7,12,2],consider:[0,8],sqlite_config_getpcach:14,illustr:[11,7],bytearrai:5,scalar:[18,16,10,7,20],mp3:6,abil:[10,0,7],sqlite_max_:[0,20],follow:[9,4,14,20,8,18,11,0,7],disk:20,ptr:19,mytrac:20,apswtrac:[6,11,0,7],sqlite_analyz:14,sqlncommand:9,valgrind:8,articl:[6,9,12],init:9,program:[9,10,14,16,11,0],amicita:0,sqlite_iocap_atomic64k:14,introduc:[10,6,16,0,7],complete_sql:9,liter:9,fals:[9,10,13,14,20,15,16,17,5,18,11,0,19,21],checkin:14,faq:14,util:7,worst:16,fam:18,veri:[9,14,15,16,5,11,0,7,8],ticket:[19,6,11,0,16],megatest:8,list:[9,10,4,19,14,21,15,18,16,8,1,6,11,0,2],sqlite_version_numb:[14,20],emul:7,addressof:16,adjust:[19,0,8],stderr:[11,9,0],small:[9,14,8,0,5,21],cantopenerror:18,sqlite3_total_chang:16,edzard:0,sync:[19,10],journal_mod:6,past:0,rate:15,design:[6,16],pass:[9,10,19,14,15,16,8,18,0,20],further:[18,0],sqlite_log:6,whenc:5,rath:0,what:[3,21,15,16,8,18,11,0,20,5],abc:[11,20],xor:20,sub:0,"_phc
ount":20,sum:11,sqlite_drop_vt:[0,14],abl:[19,9,0],delet:[10,19,14,16,0,20],version:[9,4,19,14,20,15,18,17,8,1,0,7,21],sqlite_iocap_powersafe_overwrit:[0,14],localtim:20,"public":1,sqlite_readonly_recoveri:[0,14],hasn:[16,0],full:[9,19,16,17,8,18,11,0,5],themselv:14,sqlite_iocap_atomic512:14,vtabl:[18,10],sqlite_don:14,sqlite3_extension_init:16,ver:0,parenthesi:8,modifi:[16,11,0,8,7],backlash:9,valu:[19,9,10,13,14,21,20,15,16,5,18,11,0,7,12,2],search:[3,9,0,17,8,19],ahead:[3,16,0,20],vttabl:[3,16],sqlite_iocap_safe_append:14,collationneed:[16,0],prior:[19,16,11,0],amount:[19,14,16,5,6,11,0,20,2],pick:[1,11,9],action:[16,14,20],via:[9,15,16,17,6,13,21],filenam:[9,21,19,16,8,1,11,0,20,5],sqlite_drop_trigg:14,famili:19,timesten:11,select:[9,10,20,15,16,18,6,11,7,2],apswvers:[0,14,20],sqlite_readonly_rollback:[0,14],exectraceabort:[18,11],regist:[10,19,20,16,17,11,0,7],two:[9,10,19,20,15,18,16,5,1,6,11,0,7,21],coverag:[0,8,7],paramstyl:2,mapping_extended_result_cod:[6,14],minor:0,more:[9,10,19,5,21,14,15,18,16,8,20,1,6,11,0,7,2],desir:[6,5],henc:[1,0,12],abspath:19,site:[0,8,20],flag:14,particular:[9,10,14,16,17,6,2],known:[15,19,8],cach:[15,16,0,14,21],setbusyhandl:[18,16,11],none:[9,10,19,15,16,5,18,11,0,20,12,2],hour:20,result_constraint:18,sqlite_drop_temp_trigg:14,tb_next:18,dev:[1,6],orderbi:10,sqlite_shm_exclus:14,remain:[9,0,15,16,5,11,13,19],yourscript:11,abandon:15,deb:1,def:[10,20,15,16,18,6,11,19,7],prompt:[9,0,14],scan:10,registr:14,share:[3,16,0,14],accept:9,minimum:[8,21],unreli:0,setoutputs:2,huge:[9,10],newlin:[9,20],awkward:16,divid:19,setauthor:[16,20],anoth:[19,14,15,18,16,1,11,13,7,2],mapi:20,toobigerror:[18,20],reset:[16,14],divis:18,reject:8,csv:[10,9,0,20],simpl:[9,10,20,17,19,7,12],unabl:[18,19,6,13],regener:8,sqlite_config_uri:[0,14],resourc:[16,9,10],referenc:16,reflect:0,okai:[5,20],associ:[18,16,2],circumst:0,github:[1,3,0],xsync:19,row_factori:6,confus:[19,12],author:[4,14,16,18,6,11,0,20],notfounderror:[18,19,0],caus:[15,16,8,18,6,11,0,20]
,callback:[16,18,6,11,0,7],zerodivisionerror:18,sqlite_dbstatus_max:14,rotat:21,memoryus:14,held:4,through:[0,8,1,19,20,5],process_quick:19,paramet:[9,10,0,14,20,15,16,5,18,21,13,19,2],style:[18,2],overhead:[11,21],c_int:16,sqlite_ioerr_writ:14,relev:[16,9,8,14],sqlite_limit_vdbe_op:14,harmless:[6,19,8],pend:16,bypass:0,might:10,alter:[15,8,4,21],setwalhook:16,tri:[18,16,6,8,7],good:[14,20,16,1,0,7,12],"return":[11,9,10,13,14,21,20,15,16,19,8,18,6,2,0,7,12,5],e596a6b6:0,timestamp:[11,0],compound:10,troubleshoot:[18,3],unlik:[15,8,14],easili:[11,9,0,7,21],token:9,alreadi:[9,10,0,14,5,19],compris:14,exect:19,found:[1,18],unicod:[3,14,16,17,8,0,7,21],setcommithook:[16,20],sqlite_constraint_notnul:14,truncat:0,harm:16,inplac:8,hard:[10,8,21],sqlite_iocap_atomic32k:14,permissionserror:18,realli:[15,20,7,14],expect:[9,20,15,18,6,19,7],pythonscriptopt:11,http:[1,3,9,0,4],beyond:5,sqlite3_vfs_find:19,unquot:14,highwat:[16,14],print:[19,9,13,20,15,16,8,18,6,11,0,7,21],difficulti:6,lefteri:0,proxi:8,advanc:5,thousandth:16,uri_int:[19,20],sqlite_notice_recover_w:14,asc:[1,10],quick:0,reason:[1,7],base:[19,15,18,6,0,20,12,21],believ:[11,0],ask:[10,14,6,0,12,2],strnum:20,earliest:16,recv:1,your:[21,0,14,15,16,5,2,13,8],thread:[14,15,16,8,18,0,2],omit:[10,13,11,0,8],misuseerror:[18,6],dir:20,"0xa5":20,lifetim:19,assign:10,feed:21,sqlite_fcntl_overwrit:[0,14],major:[19,7],upper:12,feel:6,number:[11,9,10,19,14,21,15,18,16,17,5,1,2,0,20,8],placehold:16,done:[0,14,20,15,5,6,11,13,19],construct:0,sqlerror:[18,19,0],stabl:0,miss:[15,18,19,8,14],usleep:8,rowtrac:20,differ:[3,11,21,14,15,18,16,5,1,2,0,20,8],collat:[18,16,11,0,20],sqlite_dbstatus_lookaside_miss_ful:14,script:[11,0,8,21,7],interact:[9,0,14],gpg:1,sqlite_checkpoint_restart:14,least:[11,19],corrupterror:18,checkpoint:[16,6,0,14],stori:8,sqlite3_bind_nul:15,sqlite_ioerr_delete_no:[19,0,14],connectionnotclosederror:18,scheme:[9,20],unrecognis:19,store:[9,10,14,20,5,6,19,7,2],schema:[18,3,9,0,20],statement:[0,14,15,16,5,18,21,1
3,20,2],urifilenam:[3,0,20],part:[9,10,19,14,16,8,18,11,0,20,21],pars:[3,9,0,8,7],consult:8,off:[9,10,20,8,6,11,0,7],st_:20,grace:16,bestindex:[18,10,0,20],kind:[18,21],whenev:[16,17,14],remot:10,remov:[9,4,15,16,0,20],bigstmt:21,cost:10,str:[19,11,0,20],consumpt:[6,11],setprofil:16,st_ctime:20,comput:[6,8],multpl:16,packag:[1,0,20],dedic:8,"null":[9,19,15,18,6,0,20,12],sqlite_index_constraint_eq:[10,14],nikolau:0,option:[21,0,8,14],push_output:9,built:0,lib:[8,20],randint:20,self:[9,10,19,16,18,13,20],violat:18,also:[9,10,19,5,21,14,15,18,16,8,20,1,6,11,0,7,12,2],sqlite3_backup_remain:13,unintent:0,useless:[18,0],distribut:21,previou:[18,11,0,14],reach:[16,9,8,7],most:[9,10,18,16,8,1,11,0,7],plai:16,plan:0,sqlite3_file_control:16,alpha:20,sqlite_ioerr_checkreservedlock:[0,14],sqlite_config_getpcache2:[0,14],clear:[19,16,0,20],sqlite_config_malloc:14,sqlite_ioerr_mmap:14,clean:[16,13,8],schemachangeerror:18,latest:[1,0],microsoft:[0,8],sector:[19,0],reincarn:0,sqlite_create_temp_t:14,sqlite_readonly_dbmov:[0,14],session:7,relation:10,reus:[15,10],fine:[10,0,7],affin:12,xea:18,penalti:14,writer:17,solut:1,sqlite_ioerr_nomem:[0,14],queu:17,factor:21,sqlite3_busy_timeout:16,fullerror:[18,19,0],hit:[9,21],unus:10,"__file__":20,express:[17,4],nativ:[19,2],longest:[11,20],liabl:4,command_help:9,banner:9,restart:0,inheritfromvfsnam:20,keyword:0,tcl:[9,8,21],sqlite_readonli:[18,10,14],statist:[0,20],nextset:2,wrote:4,set:[9,10,19,14,20,18,16,17,8,1,6,11,0,7,12,5],dump:[6,9,0,21],cdll:19,overloadfunct:[16,10],mutabl:16,geoff:0,bare:12,arg:[9,10,14,16,18,20],disadvantag:1,sqlite_ioerr_shms:14,someth:[9,10,14,20,16,1,7,12],sqlite_config_mutex:14,pickup:0,greg:4,mutex:[14,11,0,8,7],amalgam:[0,8,14],sqlite3_prepar:[18,0],sqlite_omit_load_extens:8,experi:6,sqlite_notic:14,altern:[9,4,8,6,0,5],signatur:[1,0],sqlite_limit_sql_length:14,appreci:4,sqlite3_uri_boolean:19,numer:[14,16,18,19,20,2],induc:8,isol:[15,6],distinguish:0,sqlite3_bind_int64:15,both:[9,14,15,8,6,0,12],last:[9,19
,15,18,1,11,13,7,2],rogerbinn:[1,3,0],roman:12,csvtest:20,"8am":20,context:[9,0,14,16,5,13,7],pdf:0,whole:[11,14,20],allconstraint:18,sqlite_open_sharedcach:[0,14],load:[9,0,16,8,18,19],simpli:2,point:[9,19,16,17,8,6,2,0,7,5],instanti:[14,20],format:[9,10,14,18,0,20],sqlite3_fcntl_persist_w:0,except:[0,14,15,16,5,13,20,2],header:[18,9,0,8,20],"100mb":5,shutdown:[19,14],suppli:[9,19,14,15,16,5,18,11,0,20,12,21],backend:10,contraint:10,java:7,devic:14,due:[18,16,0,8,14],empti:[9,10,19,14,17,5,18,0],destructor:[10,0,14],xrandom:19,sqlite_fcntl_sync:[0,14],mercuri:0,setbusytimeout:[18,16,11],"1e0e5a0":11,reformat:10,sqlite_locked_sharedcach:[6,0,14],fire:[18,6],imag:18,coordin:6,understand:[15,9,21,14,20],sqlite_fcntl_wal_block:[0,14],func:[15,7],demand:[16,0],sqlite3_random:[0,14],f_prev:18,look:[3,9,14,20,15,16,8,7],xa1:20,erron:18,xa4:20,batch:[6,9],"while":[9,10,14,15,16,8,18,6,11,13,20,21],executemani:[20,15,16,18,6,0,7,2],unifi:19,sysfil:20,error:[0,14,15,16,5,1,2,13,8],everyon:1,loop:[15,9],earli:12,sqlite_status_memory_us:[14,20],xcheckreservedlock:19,readm:20,itself:[10,15,8,18,11,0],fedora:1,obsolet:19,belong:15,nanosecond:16,getlin:9,sqlite3:[16,0,8,20],sqlite3_memory_us:[0,14],conflict:[0,14],sqlite_enable_column_metadata:0,x85:20,unintention:0,moment:16,temporari:[19,6,0,16],user:[9,10,19,14,20,16,8,18,6,11,0,7,12],sqlite_drop_temp_t:14,inflag:19,recent:[9,10,16,8,18,11,7],numarg:16,toip:16,equival:7,vfsfileclosederror:18,entri:[15,11,9,0,16],spent:11,testvtabl:18,person:10,"2gb":[19,11,0,5],fjfjfj:15,sqlite_open_exclus:14,sqlite_dbconfig_enable_trigg:14,sqlite3_create_module_v2:16,turkic:12,readonlyerror:[18,10],love:20,cut:0,cur:[18,17,7,5],sqlite3_clos:16,totalchang:16,eager:0,xopen:[19,0,20],snap:11,input:14,subsequ:[10,9,0,19],bin:8,sqlite_dbstatus_deferred_fk:[0,14],ilove7:20,transpar:12,big:[6,10],lockederror:[18,13],datasourc:16,bit:[16,0,21],characterist:14,semi:[15,11,9,7,14],signal:[6,19],whitespac:0,mymoduleclass:10,"32bit":1,getsqlite3point:0,e
ncount:[6,9,0],tb_frame:18,often:[16,0],sqlite_ioerr_access:[0,14],acknowledg:4,arrai:[19,5],creation:[13,14],some:[19,9,10,13,20,18,17,8,1,6,21,0,7,5],back:[19,9,10,13,20,15,16,8,6,21,0,7,2],global:[0,20,2],understood:[19,16,0],litter:6,sampl:[11,0],sqliteasync_halt_idl:17,scale:[0,21],mapping_result_cod:[6,14],chunksiz:16,pep:[3,13,16,5,2,0,7,8],per:21,mnxb:11,substitut:7,retri:[6,11,0,16],larg:[18,20],recognis:7,sqlite_config_getmalloc:14,machin:[1,0,8],zeroblob:14,run:[9,10,14,15,18,16,8,1,6,11,0,20,21],sqlite_ioerr_convpath:[0,14],gome:0,step:[10,0,16,17,8,1,11,13,20],patienc:0,sqlite3_blob_read:5,sqlite_ok:[16,14,20],sqlite_access_read:14,constraint:[18,10,0,14],maxpathnam:19,rowcount:[6,2],regular:[9,17],etyp:19,threadid:11,exc_info:[18,19],"1gb":[11,5],block:[19,16,0,5],file3:20,file1:20,file7:20,pythonpath:11,within:[18,6,0,12],sqlite3_wal_checkpoint_v2:[16,0],ensur:[0,14,20,16,8,6,13,19,21],chang:[13,5],sqlite_copi:[0,14],websit:[6,0],"_handl":19,updatechangerow:10,xsetsystemcal:19,fast:[15,20],custom:3,includ:[3,19,9,10,4,13,14,20,15,18,16,17,8,1,6,11,0,7,12,21],suit:[14,11,0,8,7],forward:6,sqlite_create_function_v2:0,sqlite3_fcntl_win32_av_retri:0,myfunc:18,reorgan:0,xff:20,link:[6,0],translat:[19,10],atom:15,line:[3,14,20,18,16,8,1,11,0,7,21],createscalarfunct:[18,16,0,7,20],info:9,utc:19,xfullpathnam:[19,0],utf:[6,9,19,11],consist:[1,9,0,21],sqlite3_release_memori:[0,14],caller:[11,0],sqlite3_column_decltyp:15,scipi:9,those:[4,0,16,8,11,13,5],highlight:0,readlin:9,similar:[3,9,10,19,14,8,1,0,7],xcurrenttim:19,sqlite_limit_trigger_depth:[0,14],parser:[6,8],doesn:[9,10,16,8,21,0,7,2],repres:[14,16,5,19,20,12,2],"char":20,incomplet:9,home:[1,8,20],titl:15,sqlite_ioerr_lock:[0,14],scrape:0,listdir:20,codec:12,pyreadlin:9,sqlite_config_getmutex:14,gigabyt:21,sqlite3_complet:14,gkjlfd:11,ctype:[16,19],eval:20,set_encod:9,mymodul:10,enhanc:7,sqlite_toobig:[18,14],vice:10,sqlite_limit_attach:[14,20],tradeoff:11,dot:[9,20],leak:[19,10,8],far:[18,0,7],hello:20,o
bjwrap:16,sqlite_open_main_journ:14,code:[4,14,15,16,8,0,2],partial:19,x96:20,queri:[9,10,19,14,20,15,16,18,6,11,0,7,21],sqlite_deni:[16,6,14,20],sqlite3_get_autocommit:[16,0],sqlite_index_constraint_gt:[10,14],sqlite_warn:14,mapping_file_control:14,sqlite3_load_extens:16,privat:[16,20],elsewher:[10,0],send:[11,9],lower:12,late:12,aris:4,fatal:19,blobbi:20,sent:9,sqlite_insert:[16,14,20],random:[0,14,16,19,20,12],rollback:[19,16,11,10,2],whichev:5,loadlibrari:19,rownumb:2,sqlite3_overload_funct:16,obscur:19,dbname:[16,11,20],recip:[18,10],joel:[6,9,12],load_extens:[0,8],"try":[9,10,13,14,21,20,15,8,18,6,11,0,7,2],display_tim:9,sqlite_open_temp_db:14,pleas:0,malici:8,impli:4,smaller:13,fortun:[9,12],cfg:20,download:[0,8],acid:5,append:[15,18,6,20],compat:[16,6,9,0,8],index:[1,3,11,10,19],handle_except:9,compar:[6,20,21],bail:9,sqlite_index_constraint_match:14,access:[19,10,13,14,20,16,5,18,6,11,0,7,21],sqlite_access_readwrit:14,idx_pr_cust:10,xgetsystemcal:19,deduc:0,whatev:[15,11,19,7],"50th":8,hkp:1,len:20,aborterror:18,sqlite_lock_pend:14,intercept:7,let:[10,19,20,15,6,11,0,7],ubuntu:[1,0],ioerror:[18,19],syncdir:19,becom:[18,9,0],sinc:[20,15,16,11,0,7,2],convert:[10,11,9,0,12],convers:[9,12],sqlite_create_index:14,autoimport:[9,0],larger:[19,16,11,0,7],param:19,typic:[19,16,9,0],chanc:11,"__enter__":[13,16,0,5],firefox:0,nearest:19,appli:[15,16,6,0,8],approxim:10,"boolean":[18,13,10,19],opcod:[0,14],autherror:18,from:[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21],zip:[1,0],doubl:[6,9,10,8],chm:0,next:[10,15,16,6,11,20,2],mycursor:15,few:[15,18,0],chr:20,usr:8,stage:8,sqlite_config_pcache_hdrsz:[0,14],sqlite_ioerr_read:14,remaind:11,sort:[10,16,17,0,20,12],mismatch:18,backslash:9,trail:[9,14,15,1,0,20],sqlitelibvers:[16,14,20],sqlite_cantopen_fullpath:[0,14],actual:[10,14,8,11,19,12,2],account:[16,12],retriev:[6,9,0,5],augment:0,alia:6,raw_input:14,column:[9,10,20,15,16,18,6,11,0,7,12],meet:10,sqlite_abort_rollback:[0,14],control:[16,21,0,8,14],xcc:20,x
Example — APSW 3.8.11.1-r1 documentation

Example

This code demonstrates usage of the APSW api. It gives you a good overview of all the things that can be done. Also included is output so you can see what gets printed when you run the code.

import os, sys, time
import apsw

###
### Check we have the expected version of apsw and sqlite
###

print "      Using APSW file",apsw.__file__                # from the extension module
print "         APSW version",apsw.apswversion()           # from the extension module
print "   SQLite lib version",apsw.sqlitelibversion()      # from the sqlite library code
print "SQLite header version",apsw.SQLITE_VERSION_NUMBER   # from the sqlite header file at compile time
|       Using APSW file /space/apsw/apsw.so
|          APSW version 3.8.11.1-r1
|    SQLite lib version 3.8.11.1
| SQLite header version 3008011
###
### Opening/creating database
###

connection=apsw.Connection("dbfile")
cursor=connection.cursor()
###
### simple statement
###

cursor.execute("create table foo(x,y,z)")

###
### using different types
###

cursor.execute("insert into foo values(?,?,?)", (1, 1.1, None))  # integer, float/real, Null
cursor.execute("insert into foo(x) values(?)", ("abc", ))        # string (note trailing comma to ensure tuple!)
cursor.execute("insert into foo(x) values(?)",                   # a blob (binary data)
                    (buffer("abc\xff\xfe"), ))                   # Use b"abc\xff\xfe" for Python 3

###
### multiple statements
###

cursor.execute("delete from foo; insert into foo values(1,2,3); create table bar(a,b,c) ; insert into foo values(4, 'five', 6.0)")

###
### iterator
###

for x,y,z in cursor.execute("select x,y,z from foo"):
    print cursor.getdescription()  # shows column names and declared types
    print x,y,z

###
### iterator - multiple statements
###

for m,n,o in cursor.execute("select x,y,z from foo ; select a,b,c from bar"):
    print m,n,o

###
### bindings - sequence
###

cursor.execute("insert into foo values(?,?,?)", (7, 'eight', False))
cursor.execute("insert into foo values(?,?,?1)", ('one', 'two'))  # nb sqlite numbers parameters from 1

###
### bindings - dictionary
###

cursor.execute("insert into foo values(:alpha, :beta, :gamma)", {'alpha': 1, 'beta': 2, 'gamma': 'three'})
###
### tracing execution
###

def mytrace(cursor, statement, bindings):
    "Called just before executing each statement"
    print "SQL:",statement
    if bindings:
        print "Bindings:",bindings
    return True  # if you return False then execution is aborted

cursor.setexectrace(mytrace)
cursor.execute("drop table bar ; create table bar(x,y,z); select * from foo where x=?", (3,))
| SQL: drop table bar ;
| SQL: create table bar(x,y,z);
| SQL: select * from foo where x=?
| Bindings: (3,)
###
### tracing results
###

def rowtrace(cursor, row):
    """Called with each row of results before they are handed off.  You can return None to
    cause the row to be skipped, or return a different set of values instead"""
    print "Row:", row
    return row

cursor.setrowtrace(rowtrace)
for row in cursor.execute("select x,y from foo where x>3"):
     pass
| SQL: select x,y from foo where x>3
| Row: (4, 'five')
| Row: (7, 'eight')
| Row: ('one', 'two')
# Clear tracers
cursor.setrowtrace(None)
cursor.setexectrace(None)

###
### executemany
###

# (This will work correctly with multiple statements, as well as statements that
# return data.  The second argument can be anything that is iterable.)
cursor.executemany("insert into foo (x) values(?)", ( [1], [2], [3] ) )

# You can also use it for statements that return data
for row in cursor.executemany("select * from foo where x=?", ( [1], [2], [3] ) ):
    print row
###
### defining your own functions
###

def ilove7(*args):
    "a scalar function"
    print "ilove7 got",args,"but I love 7"
    return 7

connection.createscalarfunction("seven", ilove7)

for row in cursor.execute("select seven(x,y) from foo"):
    print row
| ilove7 got (1, 2) but I love 7
| (7,)
| ilove7 got (4, 'five') but I love 7
| (7,)
| ilove7 got (7, 'eight') but I love 7
| (7,)
| ilove7 got ('one', 'two') but I love 7
| (7,)
| ilove7 got (1, 2) but I love 7
| (7,)
| ilove7 got (1, None) but I love 7
| (7,)
| ilove7 got (2, None) but I love 7
| (7,)
| ilove7 got (3, None) but I love 7
| (7,)
###
### aggregate functions are more complex
###

# Here we return the longest item when represented as a string.

class longest:
    def __init__(self):
        self.longest=""

    def step(self, *args):
        for arg in args:
            if len(str(arg)) > len(self.longest):
                self.longest=str(arg)

    def final(self):
        return self.longest

    # Under Python 2.3 remove the following line and add
    # factory=classmethod(factory) at the end
    @classmethod
    def factory(cls):
        return cls(), cls.step, cls.final

connection.createaggregatefunction("longest", longest.factory)
for row in cursor.execute("select longest(x,y) from foo"):
    print row
| ('eight',)
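The aggregate protocol is plain Python, so it can be exercised without SQLite at all. Here is a sketch (Python 3 syntax, unlike the rest of this example) that drives the same longest class by hand, the way SQLite would:

```python
# Drive the aggregate protocol manually: SQLite calls factory() once per
# aggregate, step() once per row, and final() when the rows are exhausted.
class longest:
    def __init__(self):
        self.longest = ""

    def step(self, *args):
        for arg in args:
            if len(str(arg)) > len(self.longest):
                self.longest = str(arg)

    def final(self):
        return self.longest

    @classmethod
    def factory(cls):
        return cls(), cls.step, cls.final

obj, step, final = longest.factory()
for row in [(1, 2), (4, "five"), (7, "eight")]:
    step(obj, *row)    # step is a plain function here, so pass the instance
print(final(obj))      # -> eight
```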
###
### Defining collations.
###

# The default sorting mechanisms don't understand numbers at the end of strings
# so here we define a collation that does

cursor.execute("create table s(str)")
cursor.executemany("insert into s values(?)",
                  ( ["file1"], ["file7"], ["file17"], ["file20"], ["file3"] ) )

for row in cursor.execute("select * from s order by str"):
    print row
| ('file1',)
| ('file17',)
| ('file20',)
| ('file3',)
| ('file7',)
def strnumcollate(s1, s2):
    # return -1 if s1<s2, +1 if s1>s2 else 0

    # split values into two parts - the head and the numeric tail
    values=[s1, s2]
    for vn,v in enumerate(values):
        for i in range(len(v), 0, -1):
            if v[i-1] not in "0123456789":
                break
        try:
            v=( v[:i], int(v[i:]) )
        except ValueError:
            v=( v[:i], None )
        values[vn]=v
    # compare
    if values[0]<values[1]:
        return -1
    if values[0]>values[1]:
        return 1
    return 0

connection.createcollation("strnum", strnumcollate)

for row in cursor.execute("select * from s order by str collate strnum"):
    print row
| ('file1',)
| ('file3',)
| ('file7',)
| ('file17',)
| ('file20',)
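The same head-plus-numeric-tail ordering can be expressed as a sort key, which is a convenient way to sanity-check what the collation should produce. A sketch (Python 3; -1 stands in for the None used in strnumcollate, because Python 3 refuses to compare None with an int):

```python
import re

def strnum_key(s):
    # Split into a text head and an optional numeric tail, mirroring the
    # strnumcollate logic; a name with no trailing digits sorts first.
    head, tail = re.match(r"(.*?)(\d*)$", s).groups()
    return (head, int(tail) if tail else -1)

names = ["file1", "file7", "file17", "file20", "file3"]
print(sorted(names, key=strnum_key))
# -> ['file1', 'file3', 'file7', 'file17', 'file20']
```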
###
### Authorizer (eg if you want to control what user supplied SQL can do)
###

def authorizer(operation, paramone, paramtwo, databasename, triggerorview):
    """Called when each operation is prepared.  We can return SQLITE_OK, SQLITE_DENY or
    SQLITE_IGNORE"""
    # find the operation name
    print apsw.mapping_authorizer_function[operation],
    print paramone, paramtwo, databasename, triggerorview
    if operation==apsw.SQLITE_CREATE_TABLE and paramone.startswith("private"):
        return apsw.SQLITE_DENY  # not allowed to create tables whose names start with private

    return apsw.SQLITE_OK  # always allow

connection.setauthorizer(authorizer)
cursor.execute("insert into s values('foo')")
cursor.execute("select str from s limit 1")
| SQLITE_INSERT s None main None
| SQLITE_SELECT None None None None
| SQLITE_READ s str main None
# Cancel authorizer
connection.setauthorizer(None)
###
### progress handler (SQLite 3 experimental feature)
###

# something to give us large numbers of random numbers
import random
def randomintegers(howmany):
    for i in xrange(howmany):
        yield (random.randint(0,9999999999),)

# create a table with 100 random numbers
cursor.execute("begin ; create table bigone(x)")
cursor.executemany("insert into bigone values(?)", randomintegers(100))
cursor.execute("commit")

# display an ascii spinner
_phcount=0
_phspinner="|/-\\"
def progresshandler():
    global _phcount
    sys.stdout.write(_phspinner[_phcount%len(_phspinner)]+chr(8)) # chr(8) is backspace
    sys.stdout.flush()
    _phcount+=1
    time.sleep(0.1) # deliberate delay so we can see the spinner (SQLite is too fast otherwise!)
    return 0  # returning non-zero aborts

# register progresshandler every 20 instructions
connection.setprogresshandler(progresshandler, 20)

# see it in action - sorting 100 numbers to find the biggest takes a while
print "spinny thing -> ",
for i in cursor.execute("select max(x) from bigone"):
    print # newline
    print i # and the maximum number

connection.setprogresshandler(None)
###
### commit hook (SQLite3 experimental feature)
###

def mycommithook():
    print "in commit hook"
    hour=time.localtime()[3]
    if hour<8 or hour>17:
        print "no commits out of hours"
        return 1  # abort commits outside of 8am through 6pm
    print "commits okay at this time"
    return 0  # let commit go ahead

connection.setcommithook(mycommithook)
try:
    cursor.execute("begin; create table example(x,y,z); insert into example values (3,4,5) ; commit")
except apsw.ConstraintError:
    print "commit was not allowed"

connection.setcommithook(None)
| in commit hook
| no commits out of hours
| commit was not allowed
###
### update hook
###

def myupdatehook(type, databasename, tablename, rowid):
    print "Updated: %s database %s, table %s, row %d" % (
        apsw.mapping_authorizer_function[type], databasename, tablename, rowid)

connection.setupdatehook(myupdatehook)
cursor.execute("insert into s values(?)", ("file93",))
cursor.execute("update s set str=? where str=?", ("file94", "file93"))
cursor.execute("delete from s where str=?", ("file94",))
connection.setupdatehook(None)
| Updated: SQLITE_INSERT database main, table s, row 7
| Updated: SQLITE_UPDATE database main, table s, row 7
| Updated: SQLITE_DELETE database main, table s, row 7
###
### Blob I/O
###

cursor.execute("create table blobby(x,y)")
# Add a blob we will fill in later
cursor.execute("insert into blobby values(1,zeroblob(10000))")
# Or as a binding
cursor.execute("insert into blobby values(2,?)", (apsw.zeroblob(20000),))
# Open a blob for writing.  We need to know the rowid
rowid=cursor.execute("select ROWID from blobby where x=1").next()[0]
blob=connection.blobopen("main", "blobby", "y", rowid, 1) # 1 is for read/write
blob.write("hello world")
blob.seek(2000)
blob.write("hello world, again")
blob.close()
###
### Virtual tables
###

# This virtual table stores information about files in a set of
# directories so you can execute SQL queries against them

def getfiledata(directories):
    columns=None
    data=[]
    counter=1
    for directory in directories:
        for f in os.listdir(directory):
            if not os.path.isfile(os.path.join(directory,f)):
                continue
            counter+=1
            st=os.stat(os.path.join(directory,f))
            if columns is None:
                columns=["rowid", "name", "directory"]+[x for x in dir(st) if x.startswith("st_")]
            data.append( [counter, f, directory] + [getattr(st,x) for x in columns[3:]] )
    return columns, data

# This gets registered with the Connection
class Source:
    def Create(self, db, modulename, dbname, tablename, *args):
        columns,data=getfiledata([eval(a.replace("\\", "\\\\")) for a in args]) # eval strips off layer of quotes
        schema="create table foo("+','.join(["'%s'" % (x,) for x in columns[1:]])+")"
        return schema,Table(columns,data)
    Connect=Create

# Represents a table
class Table:
    def __init__(self, columns, data):
        self.columns=columns
        self.data=data

    def BestIndex(self, *args):
        return None

    def Open(self):
        return Cursor(self)

    def Disconnect(self):
        pass

    Destroy=Disconnect

# Represents a cursor
class Cursor:
    def __init__(self, table):
        self.table=table

    def Filter(self, *args):
        self.pos=0

    def Eof(self):
        return self.pos>=len(self.table.data)

    def Rowid(self):
        return self.table.data[self.pos][0]

    def Column(self, col):
        return self.table.data[self.pos][1+col]

    def Next(self):
        self.pos+=1

    def Close(self):
        pass

# Register the module as filesource
connection.createmodule("filesource", Source())

# Arguments to module - all directories in sys.path
sysdirs=",".join(["'%s'" % (x,) for x in sys.path[1:] if len(x) and os.path.isdir(x)])
cursor.execute("create virtual table sysfiles using filesource("+sysdirs+")")

# Which 3 files are the biggest?
for size,directory,file in cursor.execute("select st_size,directory,name from sysfiles order by st_size desc limit 3"):
    print size,file,directory
| 24882176 testdb /space/apsw
| 4045200 apsw.so /space/apsw
| 915015 _yaml.so /home/rogerb/.local/lib/python2.7/site-packages
# Which 3 files are the oldest?
for ctime,directory,file in cursor.execute("select st_ctime,directory,name from sysfiles order by st_ctime limit 3"):
    print ctime,file,directory
| 1388285092.49 setup.cfg /space/apsw
| 1391490876.74 .gitignore /space/apsw
| 1391584463.54 README.rst /space/apsw
###
### A VFS that "obfuscates" the database file contents.  The scheme
### used is to xor all bytes with 0xa5.  This scheme honours that used
### for MAPI and SQL Server.
###

def encryptme(data):
    if not data: return data
    return "".join([chr(ord(x)^0xa5) for x in data])
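XOR with a fixed byte is its own inverse, which is why the single encryptme routine serves for both reading and writing. A Python 3 bytes version of the same transform (encryptme3 is an illustrative name, not part of the example):

```python
def encryptme3(data):
    # xor every byte with 0xa5; applying the function twice round-trips
    return bytes(b ^ 0xa5 for b in data)

plain = b"SQLite format 3\x00"
assert encryptme3(encryptme3(plain)) == plain
```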

# Inheriting from a base of "" means the default vfs
class ObfuscatedVFS(apsw.VFS):
    def __init__(self, vfsname="obfu", basevfs=""):
        self.vfsname=vfsname
        self.basevfs=basevfs
        apsw.VFS.__init__(self, self.vfsname, self.basevfs)

    # We want to return our own file implementation, but also
    # want it to inherit
    def xOpen(self, name, flags):
        # We can look at uri parameters
        if isinstance(name, apsw.URIFilename):
            print "fast is", name.uri_parameter("fast")
            print "level is", name.uri_int("level", 3)
            print "warp is", name.uri_boolean("warp", False)
            print "notpresent is", name.uri_parameter("notpresent")
| fast is speed
| level is 7
| warp is True
| notpresent is None
        return ObfuscatedVFSFile(self.basevfs, name, flags)

# The file implementation where we override xRead and xWrite to call our
# encryption routine
class ObfuscatedVFSFile(apsw.VFSFile):
    def __init__(self, inheritfromvfsname, filename, flags):
        apsw.VFSFile.__init__(self, inheritfromvfsname, filename, flags)

    def xRead(self, amount, offset):
        return encryptme(super(ObfuscatedVFSFile, self).xRead(amount, offset))

    def xWrite(self, data, offset):
        super(ObfuscatedVFSFile, self).xWrite(encryptme(data), offset)

# To register the VFS we just instantiate it
obfuvfs=ObfuscatedVFS()
# Let's see which vfs are now available
print apsw.vfsnames()
| ['unix', 'obfu', 'unix-excl', 'unix-dotfile', 'unix-none']
# Make an obfuscated db, passing in some URI parameters
obfudb=apsw.Connection("file:myobfudb?fast=speed&level=7&warp=on",
                       flags=apsw.SQLITE_OPEN_READWRITE | apsw.SQLITE_OPEN_CREATE | apsw.SQLITE_OPEN_URI,
                       vfs=obfuvfs.vfsname)
# Check it works
obfudb.cursor().execute("create table foo(x,y); insert into foo values(1,2)")

# Check it really is obfuscated on disk
print `open("myobfudb", "rb").read()[:20]`
| '\xf6\xf4\xe9\xcc\xd1\xc0\x85\xc3\xca\xd7\xc8\xc4\xd1\x85\x96\xa5\xa1\xa5\xa4\xa4'
# And unobfuscating it
print `encryptme(open("myobfudb", "rb").read()[:20])`
| 'SQLite format 3\x00\x04\x00\x01\x01'
# Tidy up
obfudb.close()
os.remove("myobfudb")
###
### Limits
###

# Print some limits
for limit in ("LENGTH", "COLUMN", "ATTACHED"):
    name="SQLITE_LIMIT_"+limit
    maxname="SQLITE_MAX_"+limit  # compile time
    orig=connection.limit(getattr(apsw, name))
    print name, orig
    # To get the maximum, set to 0x7fffffff and then read value back
    connection.limit(getattr(apsw, name), 0x7fffffff)
    max=connection.limit(getattr(apsw, name))
    print maxname, max

# Set limit for size of a string
cursor.execute("create table testlimit(s)")
cursor.execute("insert into testlimit values(?)", ( "x"*1024, )) # 1024 char string
connection.limit(apsw.SQLITE_LIMIT_LENGTH, 1023) # limit is now 1023
try:
    cursor.execute("insert into testlimit values(?)", ( "y"*1024, ))
    print "string exceeding limit was inserted"
except apsw.TooBigError:
    print "Caught toobig exception"
connection.limit(apsw.SQLITE_LIMIT_LENGTH, 0x7fffffff)
| SQLITE_LIMIT_LENGTH 1000000000
| SQLITE_MAX_LENGTH 1000000000
| SQLITE_LIMIT_COLUMN 2000
| SQLITE_MAX_COLUMN 2000
| SQLITE_LIMIT_ATTACHED 10
| SQLITE_MAX_ATTACHED 10
| Caught toobig exception
###
### Backup to memory
###

# We will copy the disk database into a memory database

memcon=apsw.Connection(":memory:")

# Copy into memory
with memcon.backup("main", connection, "main") as backup:
    backup.step() # copy whole database in one go

# There will be no disk accesses for this query
for row in memcon.cursor().execute("select * from s"):
    pass
###
### Shell
###

# Here we use the shell to do a csv export providing the existing db
# connection

# Export to a StringIO
import StringIO as io # use io in Python 3
output=io.StringIO()
shell=apsw.Shell(stdout=output, db=connection)
# How to execute a dot command
shell.process_command(".mode csv")
shell.process_command(".headers on")
# How to execute SQL
shell.process_sql("create table csvtest(col1,col2); insert into csvtest values(3,4); insert into csvtest values('a b', NULL)")
# Let the shell figure out SQL vs dot command
shell.process_complete_line("select * from csvtest")

# Verify output
print output.getvalue()
| col1,col2
| 3,4
| a b,
|
###
### Statistics
###

print "SQLite memory usage current %d max %d" % apsw.status(apsw.SQLITE_STATUS_MEMORY_USED)
| SQLite memory usage current 420072 max 494080
###
### Cleanup
###

# We can close connections manually (useful if you want to catch exceptions)
# but you don't have to
connection.close(True)  # force it since we want to exit

# Delete database - we don't need it any more
os.remove("dbfile")


Extensions

SQLite includes a number of extensions providing additional functionality. All extensions are disabled by default and you need to take steps to have them available at compilation time, to enable them and then to use them.

Asynchronous VFS

This extension does SQLite I/O in a background thread processing a queue of requests. To enable it you must have run setup.py fetch --asyncvfs at some point. It is enabled by the downloaded file sqlite3async.c being present in the same directory as setup.py. See the SQLite documentation.

To use you must first call apsw.async_initialize() which will register the VFS. If you didn’t make it the default VFS then you need to specify it when opening your database. From this point on, any operations you perform that require writing to the database will be queued and the database locked. You should start a background thread to perform the write operations calling apsw.async_run() to do the work. You can call apsw.async_control() to set and get various things (eg adding delays, controlling locking) as well as telling it when the writer code should exit. This is a simple example:

# Inherit from default vfs, do not make this the new default
asyncvfsname=apsw.async_initialize("", False)
# Open database
db=apsw.Connection("database", vfs=asyncvfsname)
# Make a worker thread
t=threading.Thread(target=apsw.async_run)
t.start()
# do some work
cur=db.cursor()
cur.execute("..")
# Close db
db.close()
# Tell worker to quit when queue is empty
apsw.async_control(apsw.SQLITEASYNC_HALT, apsw.SQLITEASYNC_HALT_IDLE)

FTS3/4

This is the third version of the full text search extension. It makes it easy to find words in multi-word text fields. You must enable the extension via setup.py build flags before it will work. There are no additional APIs and the documented SQL works as is.

Note that FTS4 consists of augmentations to FTS3 and is enabled whenever FTS3 is enabled, as described in the documentation.

ICU

The ICU extension provides an International Components for Unicode interface, in particular enabling you to do sorting and regular expressions in a locale aware way. The documentation shows how to use it.

RTree

The RTree extension provides a spatial table - see the documentation. You must enable the extension via setup.py build flags before it will work. There are no additional APIs and the documented SQL works as is.


DBAPI notes

DBAPI is defined in PEP 249. This section describes how APSW complies with or differs from it.

Module Interface

There is no connect method. Use the Connection constructor instead.

The Connection object and any cursors can be used in any thread. As an extreme example, you could call Cursor.next() in separate threads, each thread getting the next row. You cannot use the cursor concurrently in multiple threads, for example calling Cursor.execute() at the same time. If you attempt to do so then an exception will be raised. The Python Global Interpreter Lock (GIL) is released during all SQLite API calls allowing for maximum concurrency.

Three different paramstyles are supported. Note that SQLite starts parameter numbers from one not zero when using qmark/numeric style.

qmark: ... WHERE name=?
numeric: ... WHERE name=?4
named: ... WHERE name=:name or ... WHERE name=$name
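For illustration, the three styles can be exercised with the standard library sqlite3 module (used here only because it is universally available; the placeholder syntax is interpreted by SQLite itself, so it behaves the same through APSW):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t(name, age)")
# qmark style: positional placeholders
con.execute("insert into t values(?, ?)", ("alice", 30))
# numeric style: explicitly numbered, starting from one
con.execute("insert into t values(?1, ?2)", ("bob", 25))
# named style: bound from a dictionary
con.execute("insert into t values(:name, :age)", {"name": "carol", "age": 40})

rows = sorted(r[0] for r in
              con.execute("select name from t where age > :minage",
                          {"minage": 28}))
print(rows)  # ['alice', 'carol']
```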

The DBAPI exceptions are not used. The exceptions used correspond to specific SQLite error codes.

Connection Objects

There are no commit or rollback methods. You should use Cursor.execute() with BEGIN and COMMIT or ROLLBACK as appropriate. The SQLite documentation has more details. In particular note that SQLite does not support nested transactions. You can only start one transaction and will get an error if you try to start another one.
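A sketch of explicit transaction control, again using the standard library sqlite3 module as a stand-in (setting isolation_level to None stops it issuing its own BEGINs, approximating APSW's behaviour of leaving transactions entirely to your SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None  # do not auto-start transactions
cur = con.cursor()
cur.execute("create table accounts(name, balance)")
cur.execute("insert into accounts values('a', 100)")

# Explicit SQL-level transaction, as you would do with APSW
cur.execute("BEGIN")
try:
    cur.execute("update accounts set balance=balance-10 where name='a'")
    cur.execute("COMMIT")
except Exception:
    cur.execute("ROLLBACK")
    raise

balance = cur.execute("select balance from accounts").fetchone()[0]
print(balance)  # 90
```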

Several methods that are defined in DBAPI to be on the cursor are instead on the Connection object, since this is where SQLite actually stores the information. Operations done through any other cursor attached to the same Connection update these values; keeping them on the Connection makes that sharing apparent.

Cursor Objects

Use Cursor.getdescription() instead of description. This information is only obtained on request.

There is no rowcount. Row counts don’t make sense in SQLite any way. SQLite returns results one row at a time, not calculating the next result row until you ask for it. Consequently getting a rowcount would have to calculate all the result rows and would not reduce the amount of effort needed.
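If you genuinely need a count, the only way is to consume the rows. A sketch, using the standard library sqlite3 module for illustration (the same pattern applies to an APSW cursor):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t(x)")
con.executemany("insert into t values(?)", [(i,) for i in range(5)])

# Counting consumes the rows - there is no free rowcount
n = sum(1 for _ in con.execute("select * from t where x > 1"))
print(n)  # 3
```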

callproc is not implemented as SQLite doesn’t support stored procedures.

execute() returns the Cursor object and you can use it as an iterator to get the results (if any).

executemany() returns the Cursor object and you can use it as an iterator to get the results (if any).

fetchone is not available. Use the cursor as an iterator, or call next() to get the next row; it raises StopIteration when there are no more results.

fetchmany is not available. Simply use the cursor as an iterator or call next() for however many results you want.

fetchall is available, but not too useful. Simply use the cursor as an iterator, call next(), or use list which is less typing:

all=list(cursor.execute("...."))

nextset is not applicable or implemented.

arraysize is not available as fetchmany isn’t.

Neither setinputsizes or setoutputsize are applicable or implemented.

Type objects

None of the date or time methods are available since SQLite 3 does not have a native date or time type. There are functions for manipulating dates and time which are represented as strings or Julian days (floating point number).

Use the standard Python buffer class for BLOBs in Python 2 and the bytes type in Python 3.
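A sketch of a BLOB round trip using Python 3 bytes, with the standard library sqlite3 module standing in for illustration (APSW accepts the same types):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table blobs(data)")
payload = bytes(range(8))  # arbitrary binary data, not valid text
con.execute("insert into blobs values(?)", (payload,))

# The value comes back as bytes, unchanged
(stored,) = con.execute("select data from blobs").fetchone()
print(stored == payload)  # True
```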

Optional DB API Extensions

rownumber is not available.

Exception classes are not available as attributes of Connection but instead are on the apsw module. See Exceptions for more details.

Use Cursor.getconnection() to get the associated Connection object from a cursor.

scroll and messages are not available.

The Cursor object supports the iterator protocol and this is the only way of getting information back.

To get the last inserted row id, call Connection.last_insert_rowid(). That stores the id from the last insert on any Cursor associated with the Connection. You can also add select last_insert_rowid() to the end of your execute statements:

for row in cursor.execute("BEGIN; INSERT ... ; INSERT ... ; SELECT last_insert_rowid(); COMMIT"):
   lastrowid=row[0]

There is no errorhandler attribute.


APSW Module

The module is the main interface to SQLite. Methods and data on the module have process wide effects. You can instantiate the Connection and zeroblob objects using Connection() and zeroblob() respectively.

API Reference

SQLITE_VERSION_NUMBER

The integer version number of SQLite that APSW was compiled against. For example SQLite 3.6.4 will have the value 3006004. This number may be different than the actual library in use if the library is shared and has been updated. Call sqlitelibversion() to get the actual library version.

apswversion() → string

Returns the APSW version.

compile_options

A tuple of the options used to compile SQLite. For example it will be something like this:

('ENABLE_LOCKING_STYLE=0', 'TEMP_STORE=1', 'THREADSAFE=1')

Calls: sqlite3_compileoption_get

complete(statement) → bool

Returns True if the input string comprises one or more complete SQL statements by looking for an unquoted trailing semi-colon.

An example use would be if you were prompting the user for SQL statements and needed to know if you had a whole statement, or needed to ask for another line:

statement=raw_input("SQL> ")
while not apsw.complete(statement):
   more=raw_input("  .. ")
   statement=statement+"\n"+more

Calls: sqlite3_complete

config(op[, *args])
Many operations don’t make sense from a Python program. The following configuration operations are supported: SQLITE_CONFIG_LOG, SQLITE_CONFIG_SINGLETHREAD, SQLITE_CONFIG_MULTITHREAD, SQLITE_CONFIG_SERIALIZED, SQLITE_CONFIG_URI, SQLITE_CONFIG_MEMSTATUS, SQLITE_CONFIG_COVERING_INDEX_SCAN, SQLITE_CONFIG_PCACHE_HDRSZ, and SQLITE_CONFIG_PMASZ.

See tips for an example of how to receive log messages (SQLITE_CONFIG_LOG)

Calls: sqlite3_config

connection_hooks

The purpose of the hooks is to allow the easy registration of functions, virtual tables or similar items with each Connection as it is created. The default value is an empty list. Whenever a Connection is created, each item in apsw.connection_hooks is invoked with a single parameter being the new Connection object. If the hook raises an exception then the creation of the Connection fails.

If you wanted to store your own defined functions in the database then you could define a hook that looked in the relevant tables, got the Python text and turned it into the functions.
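The mechanism can be sketched in plain Python; FakeConnection below is a hypothetical stand-in for apsw.Connection, and with APSW you would simply append your hook function to apsw.connection_hooks:

```python
connection_hooks = []  # stands in for apsw.connection_hooks

class FakeConnection:
    """Minimal stand-in for apsw.Connection for illustration."""
    def __init__(self):
        self.functions = {}
        # Mimic APSW: every hook is called with the new connection;
        # an exception raised here would make construction fail
        for hook in connection_hooks:
            hook(self)

def register_upper(con):
    # The sort of thing a hook does: register a function on every connection
    con.functions["upper"] = str.upper

connection_hooks.append(register_upper)
con = FakeConnection()
print(con.functions["upper"]("hello"))  # HELLO
```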

enablesharedcache(bool)

If you use the same Connection across threads or use multiple connections accessing the same file, then SQLite can share the cache between them. It is not recommended that you use this.

Calls: sqlite3_enable_shared_cache

exceptionfor(int) → Exception

If you would like to raise an exception that corresponds to a particular SQLite error code then call this function. It also understands extended error codes.

For example to raise SQLITE_IOERR_ACCESS:

raise apsw.exceptionfor(apsw.SQLITE_IOERR_ACCESS)
fork_checker()

Note This method is not available on Windows as it does not support the fork system call.

SQLite does not allow the use of database connections across forked processes (see the SQLite FAQ Q6). (Forking creates a child process that is a duplicate of the parent including the state of all data structures in the program. If you do this to SQLite then parent and child would both consider themselves owners of open databases and silently corrupt each other’s work and interfere with each other’s locks.)

One example of how you may end up using fork is if you use the multiprocessing module which uses fork to make child processes.

If you do use fork or multiprocessing on a platform that supports fork then you must ensure database connections and their objects (cursors, backup, blobs etc) are not used in the parent process, or are all closed before calling fork or starting a Process. (Note you must call close to ensure the underlying SQLite objects are closed. It is also a good idea to call gc.collect(2) to ensure anything you may have missed is also deallocated.)

Once you run this method, extra checking code is inserted into SQLite’s mutex operations (at a very small performance penalty) that verifies objects are not used across processes. You will get a ForkingViolationError if you do so. Note that due to the way Python’s internals work, the exception will be delivered to sys.excepthook in addition to the normal exception mechanisms and may be reported by Python after the line where the issue actually arose. (Destructors of objects you didn’t close also run between lines.)

You should only call this method as the first line after importing APSW, as it has to shutdown and re-initialize SQLite. If you have any SQLite objects already allocated when calling the method then the program will later crash. The recommended use is to use the fork checking as part of your test suite.

format_sql_value(value) → string

Returns a Python string (unicode) representing the supplied value in SQL syntax. Python 2 note: You must supply unicode strings not plain strings.

initialize()

It is unlikely you will want to call this method as SQLite automatically initializes.

Calls: sqlite3_initialize

log(level, message)

Calls the SQLite logging interface. Note that you must format the message before passing it to this method:

apsw.log(apsw.SQLITE_NOMEM, "Need %d bytes of memory" % (1234,))

See tips for an example of how to receive log messages.

Calls: sqlite3_log

main()

Call this to run the interactive shell. It automatically passes in sys.argv[1:] and exits Python when done.

memoryhighwater(reset=False) → int

Returns the maximum amount of memory SQLite has used. If reset is True then the high water mark is reset to the current value.

See also

status()

Calls: sqlite3_memory_highwater

memoryused() → int

Returns the amount of memory SQLite is currently using.

See also

status()

Calls: sqlite3_memory_used

randomness(bytes) → data

Gets random data from SQLite’s random number generator.

Parameters:bytes – How many bytes to return
Return type:(Python 2) string, (Python 3) bytes

Calls: sqlite3_randomness

releasememory(bytes) → int

Requests SQLite try to free bytes bytes of memory. Returns how many bytes were freed.

Calls: sqlite3_release_memory

shutdown()

It is unlikely you will want to call this method and there is no need to do so. It is a really bad idea to call it unless you are absolutely sure all connections, blobs, cursors, vfs etc have been closed, deleted and garbage collected.

Calls: sqlite3_shutdown

softheaplimit(bytes) → oldlimit

Requests SQLite try to keep memory usage below bytes bytes and returns the previous setting.

Calls: sqlite3_soft_heap_limit64

sqlite3_sourceid() → string

Returns the exact checkin information for the SQLite 3 source being used.

Calls: sqlite3_sourceid

sqlitelibversion() → string

Returns the version of the SQLite library. This value is queried at run time from the library so if you use shared libraries it will be the version in the shared library.

Calls: sqlite3_libversion

status(op, reset=False) -> (int, int)

Returns current and highwater measurements.

Parameters:
  • op – A status parameter
  • reset – If True then the highwater is set to the current value
Returns:

A tuple of current value and highwater value

Calls: sqlite3_status64

using_amalgamation

If True then SQLite amalgamation is in use (statically compiled into APSW). Using the amalgamation means that SQLite shared libraries are not used and will not affect your code.

vfsnames() -> list(string)

Returns a list of the currently installed vfs. The first item in the list is the default vfs.

SQLite constants

SQLite has many constants used in various interfaces. To use a constant such as SQLITE_OK, just use apsw.SQLITE_OK.

The same values can be used in different contexts. For example SQLITE_OK and SQLITE_CREATE_INDEX both have a value of zero. For each group of constants there is also a mapping (dict) available that you can supply a string to and get the corresponding numeric value, or supply a numeric value and get the corresponding string. These can help improve diagnostics/logging, calling other modules etc. For example:

apsw.mapping_authorizer_function["SQLITE_READ"] == 20
apsw.mapping_authorizer_function[20] == "SQLITE_READ"

mapping_access Flags for the xAccess VFS method

mapping_authorizer_function Authorizer Action Codes

mapping_authorizer_return Authorizer Return Codes

mapping_bestindex_constraints Virtual Table Constraint Operator Codes

mapping_config Configuration Options

mapping_conflict_resolution_modes Conflict resolution modes

mapping_db_config Database Connection Configuration Options

mapping_db_status Status Parameters for database connections

mapping_device_characteristics Device Characteristics

mapping_extended_result_codes Extended Result Codes

SQLITE_ABORT_ROLLBACK, SQLITE_AUTH_USER, SQLITE_BUSY_RECOVERY, SQLITE_BUSY_SNAPSHOT, SQLITE_CANTOPEN_CONVPATH, SQLITE_CANTOPEN_FULLPATH, SQLITE_CANTOPEN_ISDIR, SQLITE_CANTOPEN_NOTEMPDIR, SQLITE_CONSTRAINT_CHECK, SQLITE_CONSTRAINT_COMMITHOOK, SQLITE_CONSTRAINT_FOREIGNKEY, SQLITE_CONSTRAINT_FUNCTION, SQLITE_CONSTRAINT_NOTNULL, SQLITE_CONSTRAINT_PRIMARYKEY, SQLITE_CONSTRAINT_ROWID, SQLITE_CONSTRAINT_TRIGGER, SQLITE_CONSTRAINT_UNIQUE, SQLITE_CONSTRAINT_VTAB, SQLITE_CORRUPT_VTAB, SQLITE_IOERR_ACCESS, SQLITE_IOERR_BLOCKED, SQLITE_IOERR_CHECKRESERVEDLOCK, SQLITE_IOERR_CLOSE, SQLITE_IOERR_CONVPATH, SQLITE_IOERR_DELETE, SQLITE_IOERR_DELETE_NOENT, SQLITE_IOERR_DIR_CLOSE, SQLITE_IOERR_DIR_FSYNC, SQLITE_IOERR_FSTAT, SQLITE_IOERR_FSYNC, SQLITE_IOERR_GETTEMPPATH, SQLITE_IOERR_LOCK, SQLITE_IOERR_MMAP, SQLITE_IOERR_NOMEM, SQLITE_IOERR_RDLOCK, SQLITE_IOERR_READ, SQLITE_IOERR_SEEK, SQLITE_IOERR_SHMLOCK, SQLITE_IOERR_SHMMAP, SQLITE_IOERR_SHMOPEN, SQLITE_IOERR_SHMSIZE, SQLITE_IOERR_SHORT_READ, SQLITE_IOERR_TRUNCATE, SQLITE_IOERR_UNLOCK, SQLITE_IOERR_WRITE, SQLITE_LOCKED_SHAREDCACHE, SQLITE_NOTICE_RECOVER_ROLLBACK, SQLITE_NOTICE_RECOVER_WAL, SQLITE_READONLY_CANTLOCK, SQLITE_READONLY_DBMOVED, SQLITE_READONLY_RECOVERY, SQLITE_READONLY_ROLLBACK, SQLITE_WARNING_AUTOINDEX

mapping_file_control Standard File Control Opcodes

mapping_limits Run-Time Limit Categories

mapping_locking_level File Locking Levels

mapping_open_flags Flags For File Open Operations

mapping_result_codes Result Codes

mapping_status Status Parameters

mapping_sync Synchronization Type Flags

mapping_virtual_table_configuration_options Virtual Table Configuration Options

mapping_wal_checkpoint Checkpoint Mode Values

mapping_xshmlock_flags Flags for the xShmLock VFS method


Execution and tracing

Execution model

This section only matters if you give multiple SQL statements in one go to Cursor.execute. (Statements are separated by semi-colons.)

SQLite does execution in two steps. First a statement is prepared, which verifies the syntax, tables and fields and converts the statement into an internal representation. The prepared statement is then run. Execution stops when a row is available, there is an error or the statement is complete.

The Cursor.execute() method automatically does the preparing and starts execution. If none of the statements return rows then execution will go to the end. If a row is returned then you need to call Cursor.next() to get the row values or use the cursor as an iterator. Execution will resume as necessary to satisfy next() calls.

However this means that if you don’t read the rows returned then the rest of your statements won’t be executed. APSW will detect unexecuted previous statements and generate an exception. For example:

>>> cursor.execute("select * from foo ; create table bar(x,y,z)")
>>> cursor.execute("create table bam(x,y,z)")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
apsw.IncompleteExecutionError: Error: there are still remaining sql statements to execute

Because I didn’t read the results of select * from foo then the following create table command didn’t have a chance to get executed. On the next execute that condition is detected and an exception raised.

Multi-threading and re-entrancy

APSW lets you use SQLite in multi-threaded programs and will let other threads execute while SQLite is working. (Technically the GIL is released when sqlite3_prepare_v2, sqlite3_step or sqlite3_open_v2 are running, as well as all other functions that could take more than a trivial amount of time or use the SQLite mutex. The GIL is re-acquired while user defined functions, collations and the various hooks/handlers run.)

Note that you cannot use the same cursor object in multiple threads concurrently to execute statements. APSW will detect this and throw an exception. It is safe to use the object serially (eg calling Cursor.execute() in one thread and Cursor.next() in another). You also can't do things like try to close() a Connection concurrently in two threads.

If you have multiple threads and/or multiple programs accessing the same database then there may be contention for the file. SQLite will return SQLITE_BUSY which will be raised as BusyError. You can call Connection.setbusytimeout() to set how long SQLite will retry for or Connection.setbusyhandler() to install your own busy handler. Note that SQLite won’t call the busy handler or timeout if it believes a deadlock has arisen. SQLite’s locking and concurrency is described here.
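A busy handler is essentially a retry policy. Its shape can be sketched in plain Python; BusyError and make_flaky_writer below are stand-ins for illustration, and in practice Connection.setbusytimeout() is usually all you need:

```python
import time

class BusyError(Exception):  # stand-in for apsw.BusyError
    pass

def make_flaky_writer(failures):
    """Simulates a write that finds the database locked a few times."""
    state = {"left": failures}
    def attempt_write():
        if state["left"] > 0:
            state["left"] -= 1
            raise BusyError("database is locked")
        return "written"
    return attempt_write

def with_retries(op, retries=5, delay=0.001):
    for attempt in range(retries):
        try:
            return op()
        except BusyError:
            time.sleep(delay * (attempt + 1))  # back off a little each time
    raise BusyError("gave up after %d attempts" % retries)

result = with_retries(make_flaky_writer(3))
print(result)  # written
```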

A cursor object can only be executing one query at a time. You cannot issue a new query from inside a trace function or from a user defined function or collation since these are called while executing a query. You can however make new cursors and use those without issue. You may want to remember the Connection object when you set your trace or user defined functions.

64 bit hosts, Python 2.5+

Prior to Python 2.5, you were limited to 32 bit quantities for items in Python such as the length of strings, number of items in a sequence etc. Python 2.5 and above use 64 bit limits on 64 bit hosts. APSW will work correctly with those items in Python 2.5 and above that use 64 bits. Unfortunately SQLite is limited to 32 bit quantities for strings, blobs, number of columns etc even when compiled for 64 bit. Consequently you will get a TooBigError exception from APSW which checks if strings/buffers longer than 1GB or 2GB (depends on internal storage) are used. See SQLite ticket #2125 and SQLite ticket #3246 for more details.

Statement Cache

Each Connection maintains a cache mapping SQL queries to a prepared statement to avoid the overhead of repreparing queries that are executed multiple times. This is a classic tradeoff using more memory to reduce CPU consumption.

By default there are up to 100 entries in the cache. Once the cache is full, the least recently used item is discarded to make space for new items.

You should pick a larger cache size if you have more than 100 unique queries that you run. For example if you have 101 different queries you run in order then the cache will not help.

You can also specify zero which will disable the statement cache.

If you are using authorizers then you should disable the statement cache. This is because the authorizer callback is only called while statements are being prepared.
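The cache behaviour described above is a classic least-recently-used map. A minimal sketch of the idea in Python (an illustration of the policy, not APSW's actual implementation):

```python
from collections import OrderedDict

class StatementCache:
    """LRU map from SQL text to a 'prepared' object."""
    def __init__(self, maxentries=100):
        self.maxentries = maxentries
        self.entries = OrderedDict()
        self.prepares = 0

    def get(self, sql):
        if sql in self.entries:
            self.entries.move_to_end(sql)     # mark as most recently used
            return self.entries[sql]
        self.prepares += 1                    # cache miss: "prepare" the SQL
        prepared = ("prepared", sql)
        self.entries[sql] = prepared
        if len(self.entries) > self.maxentries:
            self.entries.popitem(last=False)  # evict least recently used
        return prepared

cache = StatementCache(maxentries=2)
for sql in ["select 1", "select 2", "select 1", "select 3", "select 2"]:
    cache.get(sql)
print(cache.prepares)  # 4: "select 2" was evicted before its reuse
```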

Tracing

You can install tracers on cursors or connections as an easy way of seeing exactly what gets executed and what is returned. The tracers can also abort execution and cause different values to be returned. This is very useful for diagnostics and testing without having to modify your main code.

Note

You cannot issue new execute statements against the cursor your tracer was called from. If you would like to make more queries in the tracer then do them from a new cursor object. For example:

def exectracer(cursor, sql, bindings):
  cursor.getconnection().cursor().execute("insert into log values(?,?)", (sql, str(bindings)))
  return True

Execution Tracer

The execution tracer is called after an SQL statement has been prepared. (Syntax errors cause an exception during preparation, so you will not see them in a tracer.) It is called with three arguments.

cursor
The cursor executing the statement
sql
The SQL text being executed
bindings
The bindings being used. This may be None, a dictionary or a tuple.

If the tracer return value evaluates to False/None then execution is aborted with an ExecTraceAbort exception. See the example.

Execution tracers can be installed on a specific cursor by calling Cursor.setexectrace() or for all cursors by calling Connection.setexectrace(), with the cursor tracer taking priority.

If you use the Connection as a context manager (the with statement) and have a Connection execution tracer, then your callback will also be called when APSW creates and releases or rolls back savepoints. Instead of the first argument being a cursor, it will be the connection itself since there is no cursor involved.
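The abort semantics can be sketched in a few lines of pure Python. This is not APSW's code: the function is a stand-in for the dispatch APSW performs, and real tracers also receive the cursor as their first argument, which is omitted here:

```python
class ExecTraceAbort(Exception):
    """Stand-in for apsw.ExecTraceAbort in this sketch."""

def execute_with_trace(sql, bindings, exectrace=None):
    # Sketch of the check performed after preparing each statement:
    # a falsy return from the tracer aborts execution.
    if exectrace is not None and not exectrace(sql, bindings):
        raise ExecTraceAbort("tracer vetoed: " + sql)
    return "rows for: " + sql            # stand-in for running the statement

# Allow selects, veto everything else
readonly = lambda sql, bindings: sql.lstrip().lower().startswith("select")

execute_with_trace("select * from foo", None, readonly)   # runs normally
try:
    execute_with_trace("delete from foo", None, readonly)
except ExecTraceAbort:
    print("aborted")
```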

Row Tracer

The row tracer is called before each row is returned. It is called with two arguments.

cursor
The cursor returning the row
row
A tuple of the values about to be returned

Whatever you return from the tracer is what is actually returned to the caller of execute(). If you return None then the whole row is skipped. See the example.

Row tracers can be installed on a specific cursor by calling Cursor.setrowtrace() or for all cursors by calling Connection.setrowtrace(), with the cursor tracer taking priority.
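The replace-or-skip behaviour can be sketched in pure Python. This is an illustration of the dispatch described above, not APSW's code, and real row tracers also receive the cursor as their first argument:

```python
def apply_rowtrace(rows, rowtrace):
    # Per-row dispatch sketch: the tracer's return value replaces the row,
    # and a None return drops the row entirely.
    for row in rows:
        result = rowtrace(row)
        if result is None:
            continue
        yield result

rows = [(1,), (2,), (3,)]
# Multiply every value by ten, dropping even ones
tracer = lambda row: None if row[0] % 2 == 0 else (row[0] * 10,)
print(list(apply_rowtrace(rows, tracer)))   # [(10,), (30,)]
```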

APSW Trace

APSW includes a tracing script as part of the source distribution named apswtrace.py, or you can get a copy directly from source control (choose “Raw File”). This script lets you easily trace SQL execution as well as providing a summary report without modifying your code. If it is installed anywhere on your PYTHONPATH then you can invoke it with -m:

$ python -m apswtrace [apswtrace options] yourscript.py [your options]

You can also invoke it directly:

$ python /path/to/apswtrace.py [apswtrace options] yourscript.py [your options]

All output is UTF-8 encoded. The following options are available:

$ python apswtrace.py --help
Usage: apswtrace.py [options] pythonscript.py [pythonscriptoptions]

This script runs a Python program that uses APSW and reports on SQL queries
without modifying the program.  This is done by using connection_hooks and
registering row and execution tracers.  See APSW documentation for more
details on the output.

Options:
  -h, --help            show this help message and exit
  -o OUTPUT, --output=OUTPUT
                        Where to send the output.  Use a filename, a single
                        dash for stdout, or the words stdout and stderr.
                        [stdout]
  -s, --sql             Log SQL statements as they are executed. [False]
  -r, --rows            Log returned rows as they are returned (turns on sql).
                        [False]
  -t, --timestamps      Include timestamps in logging
  -i, --thread          Include thread id in logging
  -l LENGTH, --length=LENGTH
                        Max amount of a string to print [30]
  --no-report           A summary report is normally generated at program
                        exit.  This turns off the report and saves memory.
  --report-items=N      How many items to report in top lists [15]
  --reports=REPORTS     Which reports to show
                        [summary,popular,aggregate,individual]

This is sample output with the following options: --sql, --rows, --timestamps, --thread

1e0e5a0 0.152 7fccea8456e0 OPEN: ":memory:" unix READWRITE|CREATE
1f72ac0 0.161 7fccea8456e0 OPEN: "testdb" unix READWRITE|CREATE
1f6b8d0 0.162 7fccea8456e0 CURSORFROM: 1f72ac0 DB: "testdb"
1f6b8d0 0.162 7fccea8456e0 SQL: create table foo(x,y,z)
1f6b8d0 0.239 7fccea8456e0 CURSORFROM: 1f72ac0 DB: "testdb"
1f6b8d0 0.239 7fccea8456e0 SQL: insert into foo values(?,?,?) BINDINGS: ("kjfhgk", "gkjlfdhgjkhsdfkjg", "gklsdfjgkldfjhnbnvc,mnxb,mnxcv..")
1f6b8d0 0.242 7fccea8456e0 CURSORFROM: 1f72ac0 DB: "testdb"
1f6b8d0 0.242 7fccea8456e0 SQL: insert into foo values(?,?,?) BINDINGS: ("gdfklhj", ":gjkhgfdsgfd", "gjkfhgjkhdfkjh")
1f6b8d0 0.244 7fccea8456e0 CURSORFROM: 1f72ac0 DB: "testdb"
1f6b8d0 0.245 7fccea8456e0 SQL: insert into foo values(?,?,?) BINDINGS: ("gdfjkhg", "gkjlfd", "")
1f6b8d0 0.247 7fccea8456e0 CURSORFROM: 1f72ac0 DB: "testdb"
1f6b8d0 0.247 7fccea8456e0 SQL: insert into foo values(?,?,?) BINDINGS: (1, 2, 30)
1f6b8d0 0.257 7fccea8456e0 CURSORFROM: 1f72ac0 DB: "testdb"
1f6b8d0 0.257 7fccea8456e0 SQL: select longest(x,y,z) from foo
1f6b8d0 0.257 7fccea8456e0 ROW: ("gklsdfjgkldfjhnbnvc,mnxb,mnxcv..")

Each row starts with the following fields:

id
This is the id of the Cursor or Connection. You can easily filter the log if you just want to find out what happened on a specific cursor or connection.
timestamp
This is the time since the program started, in seconds
threadid
The unique thread identifier

The remainder of the line has one of the following forms:

OPEN: “dbname” vfs open_flags
A Connection has been opened. The dbname is the filename exactly as given in the call to Connection. vfs is the name of the VFS used to open the database. open_flags is the set of flags supplied with the leading SQLITE_OPEN prefix omitted.
CURSORFROM: connectionid DB: “dbname”
A cursor has been allocated. The id at the beginning of this row is of the new cursor. connectionid is the id of the Connection it was created from. The dbname is provided for convenience. This message is logged the first time a cursor issues a query.
SQL: query BINDINGS: bindings
A query was issued on a cursor.
ROW: row
A result row was returned by a cursor.
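Because the id field comes first, pulling out one object's activity is a one-line filter. A minimal sketch using sample lines from the log above (pure Python; this helper is not part of apswtrace):

```python
def lines_for(log_text, object_id):
    # Every apswtrace line starts with the cursor/connection id, so selecting
    # one object's activity is a match on the first whitespace-separated field.
    return [line for line in log_text.splitlines()
            if line.split(None, 1)[0] == object_id]

log = """1e0e5a0 0.152 7fccea8456e0 OPEN: ":memory:" unix READWRITE|CREATE
1f72ac0 0.161 7fccea8456e0 OPEN: "testdb" unix READWRITE|CREATE
1f6b8d0 0.162 7fccea8456e0 SQL: create table foo(x,y,z)"""

print(lines_for(log, "1f6b8d0"))
```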

A report is also generated by default. This is example output from running the test suite. When calculating time for queries, your code's execution time is included as well. For example, if your query returned 10 rows and you slept for 1 second on reading each row, then the time for the query is recorded as 10 seconds. Because multiple queries can be active at the same time, including across multiple threads, the total processing time can be larger than the program run time. Processing time is only recorded for queries that have no results or where you read all the result rows, and it includes time spent waiting on busy connections.

APSW TRACE SUMMARY REPORT

Program run time                    83.073 seconds
Total connections                   1308
Total cursors                       3082
Number of threads used for queries  21
Total queries                       127973
Number of distinct queries          578
Number of rows returned             2369
Time spent processing queries       120.530 seconds

This shows how many times each query was run.

MOST POPULAR QUERIES

 121451 insert into foo values(?)
   1220 insert into abc values(1,2,?)
   1118 select x from foo
    909 select timesten(x) from foo where x=? order by x
    654 select * from foo
    426 update t1 set b=b||a||b
    146 begin
     88 create table foo(x,y)
     79 insert into foo values(1,2)
     76 rollback
     71 pragma locking_mode=exclusive
     71 insert into t1 values(2, 'abcdefghijklmnopqrstuvwxyz')
     71 insert into t1 values(1, 'abcdefghijklmnopqrstuvwxyz')
     71 insert into t1 select 4-a, b from t2
     71 insert into foo values(date('now'), date('now'))

This shows how many times a query was run and the sum of the processing times in seconds. The begin immediate query illustrates how time spent busy waiting is included.

LONGEST RUNNING - AGGREGATE

    413   94.305 select timesten(x) from foo where x=? order by x
 120637   12.941 select * from foo
     12    4.115 begin immediate
 121449    2.179 insert into foo values(?)
   1220    1.509 insert into abc values(1,2,?)
      3    1.380 create index foo_x on foo(x)
    426    0.715 update t1 set b=b||a||b
     38    0.420 insert into foo values(?,?)
     71    0.241 create table t1(a unique, b)
     88    0.206 create table foo(x,y)
     61    0.170 create table abc(a,b,c)
     27    0.165 insert into foo values(?,?,?)
      1    0.158 select row,x,snap(x) from foo
     80    0.150 insert into foo values(1,2)
     71    0.127 insert into foo values(date('now'), date('now'))

This shows the longest running queries with time in seconds.

LONGEST RUNNING - INDIVIDUAL

  3.001 begin immediate
  1.377 create index foo_x on foo(x)
  1.102 begin immediate
  0.944 select timesten(x) from foo where x=? order by x
  0.893 select timesten(x) from foo where x=? order by x
  0.817 select timesten(x) from foo where x=? order by x
  0.816 select timesten(x) from foo where x=? order by x
  0.786 select timesten(x) from foo where x=? order by x
  0.783 select timesten(x) from foo where x=? order by x
  0.713 select timesten(x) from foo where x=? order by x
  0.701 select timesten(x) from foo where x=? order by x
  0.651 select timesten(x) from foo where x=? order by x
  0.646 select timesten(x) from foo where x=? order by x
  0.631 select timesten(x) from foo where x=? order by x
  0.620 select timesten(x) from foo where x=? order by x

locked: function() { return !stack; }, // Call all callbacks with the given context and arguments fireWith: function( context, args ) { if ( stack ) { if ( firing ) { if ( !flags.once ) { stack.push( [ context, args ] ); } } else if ( !( flags.once && memory ) ) { fire( context, args ); } } return this; }, // Call all the callbacks with the given arguments fire: function() { self.fireWith( this, arguments ); return this; }, // To know if the callbacks have already been called at least once fired: function() { return !!fired; } }; return self; }; var // Static reference to slice sliceDeferred = [].slice; jQuery.extend({ Deferred: function( func ) { var doneList = jQuery.Callbacks( "once memory" ), failList = jQuery.Callbacks( "once memory" ), progressList = jQuery.Callbacks( "memory" ), state = "pending", lists = { resolve: doneList, reject: failList, notify: progressList }, promise = { done: doneList.add, fail: failList.add, progress: progressList.add, state: function() { return state; }, // Deprecated isResolved: doneList.fired, isRejected: failList.fired, then: function( doneCallbacks, failCallbacks, progressCallbacks ) { deferred.done( doneCallbacks ).fail( failCallbacks ).progress( progressCallbacks ); return this; }, always: function() { deferred.done.apply( deferred, arguments ).fail.apply( deferred, arguments ); return this; }, pipe: function( fnDone, fnFail, fnProgress ) { return jQuery.Deferred(function( newDefer ) { jQuery.each( { done: [ fnDone, "resolve" ], fail: [ fnFail, "reject" ], progress: [ fnProgress, "notify" ] }, function( handler, data ) { var fn = data[ 0 ], action = data[ 1 ], returned; if ( jQuery.isFunction( fn ) ) { deferred[ handler ](function() { returned = fn.apply( this, arguments ); if ( returned && jQuery.isFunction( returned.promise ) ) { returned.promise().then( newDefer.resolve, newDefer.reject, newDefer.notify ); } else { newDefer[ action + "With" ]( this === deferred ? 
newDefer : this, [ returned ] ); } }); } else { deferred[ handler ]( newDefer[ action ] ); } }); }).promise(); }, // Get a promise for this deferred // If obj is provided, the promise aspect is added to the object promise: function( obj ) { if ( obj == null ) { obj = promise; } else { for ( var key in promise ) { obj[ key ] = promise[ key ]; } } return obj; } }, deferred = promise.promise({}), key; for ( key in lists ) { deferred[ key ] = lists[ key ].fire; deferred[ key + "With" ] = lists[ key ].fireWith; } // Handle state deferred.done( function() { state = "resolved"; }, failList.disable, progressList.lock ).fail( function() { state = "rejected"; }, doneList.disable, progressList.lock ); // Call given func if any if ( func ) { func.call( deferred, deferred ); } // All done! return deferred; }, // Deferred helper when: function( firstParam ) { var args = sliceDeferred.call( arguments, 0 ), i = 0, length = args.length, pValues = new Array( length ), count = length, pCount = length, deferred = length <= 1 && firstParam && jQuery.isFunction( firstParam.promise ) ? firstParam : jQuery.Deferred(), promise = deferred.promise(); function resolveFunc( i ) { return function( value ) { args[ i ] = arguments.length > 1 ? sliceDeferred.call( arguments, 0 ) : value; if ( !( --count ) ) { deferred.resolveWith( deferred, args ); } }; } function progressFunc( i ) { return function( value ) { pValues[ i ] = arguments.length > 1 ? sliceDeferred.call( arguments, 0 ) : value; deferred.notifyWith( promise, pValues ); }; } if ( length > 1 ) { for ( ; i < length; i++ ) { if ( args[ i ] && args[ i ].promise && jQuery.isFunction( args[ i ].promise ) ) { args[ i ].promise().then( resolveFunc(i), deferred.reject, progressFunc(i) ); } else { --count; } } if ( !count ) { deferred.resolveWith( deferred, args ); } } else if ( deferred !== firstParam ) { deferred.resolveWith( deferred, length ? 
[ firstParam ] : [] ); } return promise; } }); jQuery.support = (function() { var support, all, a, select, opt, input, fragment, tds, events, eventName, i, isSupported, div = document.createElement( "div" ), documentElement = document.documentElement; // Preliminary tests div.setAttribute("className", "t"); div.innerHTML = "
a"; all = div.getElementsByTagName( "*" ); a = div.getElementsByTagName( "a" )[ 0 ]; // Can't get basic test support if ( !all || !all.length || !a ) { return {}; } // First batch of supports tests select = document.createElement( "select" ); opt = select.appendChild( document.createElement("option") ); input = div.getElementsByTagName( "input" )[ 0 ]; support = { // IE strips leading whitespace when .innerHTML is used leadingWhitespace: ( div.firstChild.nodeType === 3 ), // Make sure that tbody elements aren't automatically inserted // IE will insert them into empty tables tbody: !div.getElementsByTagName("tbody").length, // Make sure that link elements get serialized correctly by innerHTML // This requires a wrapper element in IE htmlSerialize: !!div.getElementsByTagName("link").length, // Get the style information from getAttribute // (IE uses .cssText instead) style: /top/.test( a.getAttribute("style") ), // Make sure that URLs aren't manipulated // (IE normalizes it by default) hrefNormalized: ( a.getAttribute("href") === "/a" ), // Make sure that element opacity exists // (IE uses filter instead) // Use a regex to work around a WebKit issue. See #5145 opacity: /^0.55/.test( a.style.opacity ), // Verify style float existence // (IE uses styleFloat instead of cssFloat) cssFloat: !!a.style.cssFloat, // Make sure that if no value is specified for a checkbox // that it defaults to "on". // (WebKit defaults to "" instead) checkOn: ( input.value === "on" ), // Make sure that a selected-by-default option has a working selected property. // (WebKit defaults to false instead of true, IE too, if it's in an optgroup) optSelected: opt.selected, // Test setAttribute on camelCase class. 
If it works, we need attrFixes when doing get/setAttribute (ie6/7) getSetAttribute: div.className !== "t", // Tests for enctype support on a form(#6743) enctype: !!document.createElement("form").enctype, // Makes sure cloning an html5 element does not cause problems // Where outerHTML is undefined, this still works html5Clone: document.createElement("nav").cloneNode( true ).outerHTML !== "<:nav>", // Will be defined later submitBubbles: true, changeBubbles: true, focusinBubbles: false, deleteExpando: true, noCloneEvent: true, inlineBlockNeedsLayout: false, shrinkWrapBlocks: false, reliableMarginRight: true, pixelMargin: true }; // jQuery.boxModel DEPRECATED in 1.3, use jQuery.support.boxModel instead jQuery.boxModel = support.boxModel = (document.compatMode === "CSS1Compat"); // Make sure checked status is properly cloned input.checked = true; support.noCloneChecked = input.cloneNode( true ).checked; // Make sure that the options inside disabled selects aren't marked as disabled // (WebKit marks them as disabled) select.disabled = true; support.optDisabled = !opt.disabled; // Test to see if it's possible to delete an expando from an element // Fails in Internet Explorer try { delete div.test; } catch( e ) { support.deleteExpando = false; } if ( !div.addEventListener && div.attachEvent && div.fireEvent ) { div.attachEvent( "onclick", function() { // Cloning a node shouldn't copy over any // bound event handlers (IE does this) support.noCloneEvent = false; }); div.cloneNode( true ).fireEvent( "onclick" ); } // Check if a radio maintains its value // after being appended to the DOM input = document.createElement("input"); input.value = "t"; input.setAttribute("type", "radio"); support.radioValue = input.value === "t"; input.setAttribute("checked", "checked"); // #11217 - WebKit loses check when the name is after the checked attribute input.setAttribute( "name", "t" ); div.appendChild( input ); fragment = document.createDocumentFragment(); fragment.appendChild( 
div.lastChild ); // WebKit doesn't clone checked state correctly in fragments support.checkClone = fragment.cloneNode( true ).cloneNode( true ).lastChild.checked; // Check if a disconnected checkbox will retain its checked // value of true after appended to the DOM (IE6/7) support.appendChecked = input.checked; fragment.removeChild( input ); fragment.appendChild( div ); // Technique from Juriy Zaytsev // http://perfectionkills.com/detecting-event-support-without-browser-sniffing/ // We only care about the case where non-standard event systems // are used, namely in IE. Short-circuiting here helps us to // avoid an eval call (in setAttribute) which can cause CSP // to go haywire. See: https://developer.mozilla.org/en/Security/CSP if ( div.attachEvent ) { for ( i in { submit: 1, change: 1, focusin: 1 }) { eventName = "on" + i; isSupported = ( eventName in div ); if ( !isSupported ) { div.setAttribute( eventName, "return;" ); isSupported = ( typeof div[ eventName ] === "function" ); } support[ i + "Bubbles" ] = isSupported; } } fragment.removeChild( div ); // Null elements to avoid leaks in IE fragment = select = opt = div = input = null; // Run tests that need a body at doc ready jQuery(function() { var container, outer, inner, table, td, offsetSupport, marginDiv, conMarginTop, style, html, positionTopLeftWidthHeight, paddingMarginBorderVisibility, paddingMarginBorder, body = document.getElementsByTagName("body")[0]; if ( !body ) { // Return for frameset docs that don't have a body return; } conMarginTop = 1; paddingMarginBorder = "padding:0;margin:0;border:"; positionTopLeftWidthHeight = "position:absolute;top:0;left:0;width:1px;height:1px;"; paddingMarginBorderVisibility = paddingMarginBorder + "0;visibility:hidden;"; style = "style='" + positionTopLeftWidthHeight + paddingMarginBorder + "5px solid #000;"; html = "
" + "" + "
"; container = document.createElement("div"); container.style.cssText = paddingMarginBorderVisibility + "width:0;height:0;position:static;top:0;margin-top:" + conMarginTop + "px"; body.insertBefore( container, body.firstChild ); // Construct the test element div = document.createElement("div"); container.appendChild( div ); // Check if table cells still have offsetWidth/Height when they are set // to display:none and there are still other visible table cells in a // table row; if so, offsetWidth/Height are not reliable for use when // determining if an element has been hidden directly using // display:none (it is still safe to use offsets if a parent element is // hidden; don safety goggles and see bug #4512 for more information). // (only IE 8 fails this test) div.innerHTML = "
t
"; tds = div.getElementsByTagName( "td" ); isSupported = ( tds[ 0 ].offsetHeight === 0 ); tds[ 0 ].style.display = ""; tds[ 1 ].style.display = "none"; // Check if empty table cells still have offsetWidth/Height // (IE <= 8 fail this test) support.reliableHiddenOffsets = isSupported && ( tds[ 0 ].offsetHeight === 0 ); // Check if div with explicit width and no margin-right incorrectly // gets computed margin-right based on width of container. For more // info see bug #3333 // Fails in WebKit before Feb 2011 nightlies // WebKit Bug 13343 - getComputedStyle returns wrong value for margin-right if ( window.getComputedStyle ) { div.innerHTML = ""; marginDiv = document.createElement( "div" ); marginDiv.style.width = "0"; marginDiv.style.marginRight = "0"; div.style.width = "2px"; div.appendChild( marginDiv ); support.reliableMarginRight = ( parseInt( ( window.getComputedStyle( marginDiv, null ) || { marginRight: 0 } ).marginRight, 10 ) || 0 ) === 0; } if ( typeof div.style.zoom !== "undefined" ) { // Check if natively block-level elements act like inline-block // elements when setting their display to 'inline' and giving // them layout // (IE < 8 does this) div.innerHTML = ""; div.style.width = div.style.padding = "1px"; div.style.border = 0; div.style.overflow = "hidden"; div.style.display = "inline"; div.style.zoom = 1; support.inlineBlockNeedsLayout = ( div.offsetWidth === 3 ); // Check if elements with layout shrink-wrap their children // (IE 6 does this) div.style.display = "block"; div.style.overflow = "visible"; div.innerHTML = "
"; support.shrinkWrapBlocks = ( div.offsetWidth !== 3 ); } div.style.cssText = positionTopLeftWidthHeight + paddingMarginBorderVisibility; div.innerHTML = html; outer = div.firstChild; inner = outer.firstChild; td = outer.nextSibling.firstChild.firstChild; offsetSupport = { doesNotAddBorder: ( inner.offsetTop !== 5 ), doesAddBorderForTableAndCells: ( td.offsetTop === 5 ) }; inner.style.position = "fixed"; inner.style.top = "20px"; // safari subtracts parent border width here which is 5px offsetSupport.fixedPosition = ( inner.offsetTop === 20 || inner.offsetTop === 15 ); inner.style.position = inner.style.top = ""; outer.style.overflow = "hidden"; outer.style.position = "relative"; offsetSupport.subtractsBorderForOverflowNotVisible = ( inner.offsetTop === -5 ); offsetSupport.doesNotIncludeMarginInBodyOffset = ( body.offsetTop !== conMarginTop ); if ( window.getComputedStyle ) { div.style.marginTop = "1%"; support.pixelMargin = ( window.getComputedStyle( div, null ) || { marginTop: 0 } ).marginTop !== "1%"; } if ( typeof container.style.zoom !== "undefined" ) { container.style.zoom = 1; } body.removeChild( container ); marginDiv = div = container = null; jQuery.extend( support, offsetSupport ); }); return support; })(); var rbrace = /^(?:\{.*\}|\[.*\])$/, rmultiDash = /([A-Z])/g; jQuery.extend({ cache: {}, // Please use with caution uuid: 0, // Unique for each copy of jQuery on the page // Non-digits removed to match rinlinejQuery expando: "jQuery" + ( jQuery.fn.jquery + Math.random() ).replace( /\D/g, "" ), // The following elements throw uncatchable exceptions if you // attempt to add expando properties to them. noData: { "embed": true, // Ban all objects except for Flash (which handle expandos) "object": "clsid:D27CDB6E-AE6D-11cf-96B8-444553540000", "applet": true }, hasData: function( elem ) { elem = elem.nodeType ? 
jQuery.cache[ elem[jQuery.expando] ] : elem[ jQuery.expando ]; return !!elem && !isEmptyDataObject( elem ); }, data: function( elem, name, data, pvt /* Internal Use Only */ ) { if ( !jQuery.acceptData( elem ) ) { return; } var privateCache, thisCache, ret, internalKey = jQuery.expando, getByName = typeof name === "string", // We have to handle DOM nodes and JS objects differently because IE6-7 // can't GC object references properly across the DOM-JS boundary isNode = elem.nodeType, // Only DOM nodes need the global jQuery cache; JS object data is // attached directly to the object so GC can occur automatically cache = isNode ? jQuery.cache : elem, // Only defining an ID for JS objects if its cache already exists allows // the code to shortcut on the same path as a DOM node with no cache id = isNode ? elem[ internalKey ] : elem[ internalKey ] && internalKey, isEvents = name === "events"; // Avoid doing any more work than we need to when trying to get data on an // object that has no data at all if ( (!id || !cache[id] || (!isEvents && !pvt && !cache[id].data)) && getByName && data === undefined ) { return; } if ( !id ) { // Only DOM nodes need a new unique ID for each element since their data // ends up in the global cache if ( isNode ) { elem[ internalKey ] = id = ++jQuery.uuid; } else { id = internalKey; } } if ( !cache[ id ] ) { cache[ id ] = {}; // Avoids exposing jQuery metadata on plain JS objects when the object // is serialized using JSON.stringify if ( !isNode ) { cache[ id ].toJSON = jQuery.noop; } } // An object can be passed to jQuery.data instead of a key/value pair; this gets // shallow copied over onto the existing cache if ( typeof name === "object" || typeof name === "function" ) { if ( pvt ) { cache[ id ] = jQuery.extend( cache[ id ], name ); } else { cache[ id ].data = jQuery.extend( cache[ id ].data, name ); } } privateCache = thisCache = cache[ id ]; // jQuery data() is stored in a separate object inside the object's internal data // cache in 
order to avoid key collisions between internal data and user-defined // data. if ( !pvt ) { if ( !thisCache.data ) { thisCache.data = {}; } thisCache = thisCache.data; } if ( data !== undefined ) { thisCache[ jQuery.camelCase( name ) ] = data; } // Users should not attempt to inspect the internal events object using jQuery.data, // it is undocumented and subject to change. But does anyone listen? No. if ( isEvents && !thisCache[ name ] ) { return privateCache.events; } // Check for both converted-to-camel and non-converted data property names // If a data property was specified if ( getByName ) { // First Try to find as-is property data ret = thisCache[ name ]; // Test for null|undefined property data if ( ret == null ) { // Try to find the camelCased property ret = thisCache[ jQuery.camelCase( name ) ]; } } else { ret = thisCache; } return ret; }, removeData: function( elem, name, pvt /* Internal Use Only */ ) { if ( !jQuery.acceptData( elem ) ) { return; } var thisCache, i, l, // Reference to internal data cache key internalKey = jQuery.expando, isNode = elem.nodeType, // See jQuery.data for more information cache = isNode ? jQuery.cache : elem, // See jQuery.data for more information id = isNode ? elem[ internalKey ] : internalKey; // If there is already no cache entry for this object, there is no // purpose in continuing if ( !cache[ id ] ) { return; } if ( name ) { thisCache = pvt ? 
cache[ id ] : cache[ id ].data; if ( thisCache ) { // Support array or space separated string names for data keys if ( !jQuery.isArray( name ) ) { // try the string as a key before any manipulation if ( name in thisCache ) { name = [ name ]; } else { // split the camel cased version by spaces unless a key with the spaces exists name = jQuery.camelCase( name ); if ( name in thisCache ) { name = [ name ]; } else { name = name.split( " " ); } } } for ( i = 0, l = name.length; i < l; i++ ) { delete thisCache[ name[i] ]; } // If there is no data left in the cache, we want to continue // and let the cache object itself get destroyed if ( !( pvt ? isEmptyDataObject : jQuery.isEmptyObject )( thisCache ) ) { return; } } } // See jQuery.data for more information if ( !pvt ) { delete cache[ id ].data; // Don't destroy the parent cache unless the internal data object // had been the only thing left in it if ( !isEmptyDataObject(cache[ id ]) ) { return; } } // Browsers that fail expando deletion also refuse to delete expandos on // the window, but it will allow it on all other JS objects; other browsers // don't care // Ensure that `cache` is not a window object #10080 if ( jQuery.support.deleteExpando || !cache.setInterval ) { delete cache[ id ]; } else { cache[ id ] = null; } // We destroyed the cache and need to eliminate the expando on the node to avoid // false lookups in the cache for entries that no longer exist if ( isNode ) { // IE does not allow us to delete expando properties from nodes, // nor does it have a removeAttribute function on Document nodes; // we must handle all of these cases if ( jQuery.support.deleteExpando ) { delete elem[ internalKey ]; } else if ( elem.removeAttribute ) { elem.removeAttribute( internalKey ); } else { elem[ internalKey ] = null; } } }, // For internal use only. 
_data: function( elem, name, data ) { return jQuery.data( elem, name, data, true ); }, // A method for determining if a DOM node can handle the data expando acceptData: function( elem ) { if ( elem.nodeName ) { var match = jQuery.noData[ elem.nodeName.toLowerCase() ]; if ( match ) { return !(match === true || elem.getAttribute("classid") !== match); } } return true; } }); jQuery.fn.extend({ data: function( key, value ) { var parts, part, attr, name, l, elem = this[0], i = 0, data = null; // Gets all values if ( key === undefined ) { if ( this.length ) { data = jQuery.data( elem ); if ( elem.nodeType === 1 && !jQuery._data( elem, "parsedAttrs" ) ) { attr = elem.attributes; for ( l = attr.length; i < l; i++ ) { name = attr[i].name; if ( name.indexOf( "data-" ) === 0 ) { name = jQuery.camelCase( name.substring(5) ); dataAttr( elem, name, data[ name ] ); } } jQuery._data( elem, "parsedAttrs", true ); } } return data; } // Sets multiple values if ( typeof key === "object" ) { return this.each(function() { jQuery.data( this, key ); }); } parts = key.split( ".", 2 ); parts[1] = parts[1] ? "." + parts[1] : ""; part = parts[1] + "!"; return jQuery.access( this, function( value ) { if ( value === undefined ) { data = this.triggerHandler( "getData" + part, [ parts[0] ] ); // Try to fetch any internally stored data first if ( data === undefined && elem ) { data = jQuery.data( elem, key ); data = dataAttr( elem, key, data ); } return data === undefined && parts[1] ? 
this.data( parts[0] ) : data; } parts[1] = value; this.each(function() { var self = jQuery( this ); self.triggerHandler( "setData" + part, parts ); jQuery.data( this, key, value ); self.triggerHandler( "changeData" + part, parts ); }); }, null, value, arguments.length > 1, null, false ); }, removeData: function( key ) { return this.each(function() { jQuery.removeData( this, key ); }); } }); function dataAttr( elem, key, data ) { // If nothing was found internally, try to fetch any // data from the HTML5 data-* attribute if ( data === undefined && elem.nodeType === 1 ) { var name = "data-" + key.replace( rmultiDash, "-$1" ).toLowerCase(); data = elem.getAttribute( name ); if ( typeof data === "string" ) { try { data = data === "true" ? true : data === "false" ? false : data === "null" ? null : jQuery.isNumeric( data ) ? +data : rbrace.test( data ) ? jQuery.parseJSON( data ) : data; } catch( e ) {} // Make sure we set the data so it isn't changed later jQuery.data( elem, key, data ); } else { data = undefined; } } return data; } // checks a cache object for emptiness function isEmptyDataObject( obj ) { for ( var name in obj ) { // if the public data object is empty, the private is still empty if ( name === "data" && jQuery.isEmptyObject( obj[name] ) ) { continue; } if ( name !== "toJSON" ) { return false; } } return true; } function handleQueueMarkDefer( elem, type, src ) { var deferDataKey = type + "defer", queueDataKey = type + "queue", markDataKey = type + "mark", defer = jQuery._data( elem, deferDataKey ); if ( defer && ( src === "queue" || !jQuery._data(elem, queueDataKey) ) && ( src === "mark" || !jQuery._data(elem, markDataKey) ) ) { // Give room for hard-coded callbacks to fire first // and eventually mark/queue something else on the element setTimeout( function() { if ( !jQuery._data( elem, queueDataKey ) && !jQuery._data( elem, markDataKey ) ) { jQuery.removeData( elem, deferDataKey, true ); defer.fire(); } }, 0 ); } } jQuery.extend({ _mark: function( elem, 
type ) { if ( elem ) { type = ( type || "fx" ) + "mark"; jQuery._data( elem, type, (jQuery._data( elem, type ) || 0) + 1 ); } }, _unmark: function( force, elem, type ) { if ( force !== true ) { type = elem; elem = force; force = false; } if ( elem ) { type = type || "fx"; var key = type + "mark", count = force ? 0 : ( (jQuery._data( elem, key ) || 1) - 1 ); if ( count ) { jQuery._data( elem, key, count ); } else { jQuery.removeData( elem, key, true ); handleQueueMarkDefer( elem, type, "mark" ); } } }, queue: function( elem, type, data ) { var q; if ( elem ) { type = ( type || "fx" ) + "queue"; q = jQuery._data( elem, type ); // Speed up dequeue by getting out quickly if this is just a lookup if ( data ) { if ( !q || jQuery.isArray(data) ) { q = jQuery._data( elem, type, jQuery.makeArray(data) ); } else { q.push( data ); } } return q || []; } }, dequeue: function( elem, type ) { type = type || "fx"; var queue = jQuery.queue( elem, type ), fn = queue.shift(), hooks = {}; // If the fx queue is dequeued, always remove the progress sentinel if ( fn === "inprogress" ) { fn = queue.shift(); } if ( fn ) { // Add a progress sentinel to prevent the fx queue from being // automatically dequeued if ( type === "fx" ) { queue.unshift( "inprogress" ); } jQuery._data( elem, type + ".run", hooks ); fn.call( elem, function() { jQuery.dequeue( elem, type ); }, hooks ); } if ( !queue.length ) { jQuery.removeData( elem, type + "queue " + type + ".run", true ); handleQueueMarkDefer( elem, type, "queue" ); } } }); jQuery.fn.extend({ queue: function( type, data ) { var setter = 2; if ( typeof type !== "string" ) { data = type; type = "fx"; setter--; } if ( arguments.length < setter ) { return jQuery.queue( this[0], type ); } return data === undefined ? 
this : this.each(function() { var queue = jQuery.queue( this, type, data ); if ( type === "fx" && queue[0] !== "inprogress" ) { jQuery.dequeue( this, type ); } }); }, dequeue: function( type ) { return this.each(function() { jQuery.dequeue( this, type ); }); }, // Based off of the plugin by Clint Helfers, with permission. // http://blindsignals.com/index.php/2009/07/jquery-delay/ delay: function( time, type ) { time = jQuery.fx ? jQuery.fx.speeds[ time ] || time : time; type = type || "fx"; return this.queue( type, function( next, hooks ) { var timeout = setTimeout( next, time ); hooks.stop = function() { clearTimeout( timeout ); }; }); }, clearQueue: function( type ) { return this.queue( type || "fx", [] ); }, // Get a promise resolved when queues of a certain type // are emptied (fx is the type by default) promise: function( type, object ) { if ( typeof type !== "string" ) { object = type; type = undefined; } type = type || "fx"; var defer = jQuery.Deferred(), elements = this, i = elements.length, count = 1, deferDataKey = type + "defer", queueDataKey = type + "queue", markDataKey = type + "mark", tmp; function resolve() { if ( !( --count ) ) { defer.resolveWith( elements, [ elements ] ); } } while( i-- ) { if (( tmp = jQuery.data( elements[ i ], deferDataKey, undefined, true ) || ( jQuery.data( elements[ i ], queueDataKey, undefined, true ) || jQuery.data( elements[ i ], markDataKey, undefined, true ) ) && jQuery.data( elements[ i ], deferDataKey, jQuery.Callbacks( "once memory" ), true ) )) { count++; tmp.add( resolve ); } } resolve(); return defer.promise( object ); } }); var rclass = /[\n\t\r]/g, rspace = /\s+/, rreturn = /\r/g, rtype = /^(?:button|input)$/i, rfocusable = /^(?:button|input|object|select|textarea)$/i, rclickable = /^a(?:rea)?$/i, rboolean = /^(?:autofocus|autoplay|async|checked|controls|defer|disabled|hidden|loop|multiple|open|readonly|required|scoped|selected)$/i, getSetAttribute = jQuery.support.getSetAttribute, nodeHook, boolHook, 
fixSpecified; jQuery.fn.extend({ attr: function( name, value ) { return jQuery.access( this, jQuery.attr, name, value, arguments.length > 1 ); }, removeAttr: function( name ) { return this.each(function() { jQuery.removeAttr( this, name ); }); }, prop: function( name, value ) { return jQuery.access( this, jQuery.prop, name, value, arguments.length > 1 ); }, removeProp: function( name ) { name = jQuery.propFix[ name ] || name; return this.each(function() { // try/catch handles cases where IE balks (such as removing a property on window) try { this[ name ] = undefined; delete this[ name ]; } catch( e ) {} }); }, addClass: function( value ) { var classNames, i, l, elem, setClass, c, cl; if ( jQuery.isFunction( value ) ) { return this.each(function( j ) { jQuery( this ).addClass( value.call(this, j, this.className) ); }); } if ( value && typeof value === "string" ) { classNames = value.split( rspace ); for ( i = 0, l = this.length; i < l; i++ ) { elem = this[ i ]; if ( elem.nodeType === 1 ) { if ( !elem.className && classNames.length === 1 ) { elem.className = value; } else { setClass = " " + elem.className + " "; for ( c = 0, cl = classNames.length; c < cl; c++ ) { if ( !~setClass.indexOf( " " + classNames[ c ] + " " ) ) { setClass += classNames[ c ] + " "; } } elem.className = jQuery.trim( setClass ); } } } } return this; }, removeClass: function( value ) { var classNames, i, l, elem, className, c, cl; if ( jQuery.isFunction( value ) ) { return this.each(function( j ) { jQuery( this ).removeClass( value.call(this, j, this.className) ); }); } if ( (value && typeof value === "string") || value === undefined ) { classNames = ( value || "" ).split( rspace ); for ( i = 0, l = this.length; i < l; i++ ) { elem = this[ i ]; if ( elem.nodeType === 1 && elem.className ) { if ( value ) { className = (" " + elem.className + " ").replace( rclass, " " ); for ( c = 0, cl = classNames.length; c < cl; c++ ) { className = className.replace(" " + classNames[ c ] + " ", " "); } 
elem.className = jQuery.trim( className ); } else { elem.className = ""; } } } } return this; }, toggleClass: function( value, stateVal ) { var type = typeof value, isBool = typeof stateVal === "boolean"; if ( jQuery.isFunction( value ) ) { return this.each(function( i ) { jQuery( this ).toggleClass( value.call(this, i, this.className, stateVal), stateVal ); }); } return this.each(function() { if ( type === "string" ) { // toggle individual class names var className, i = 0, self = jQuery( this ), state = stateVal, classNames = value.split( rspace ); while ( (className = classNames[ i++ ]) ) { // check each className given, space separated list state = isBool ? state : !self.hasClass( className ); self[ state ? "addClass" : "removeClass" ]( className ); } } else if ( type === "undefined" || type === "boolean" ) { if ( this.className ) { // store className if set jQuery._data( this, "__className__", this.className ); } // toggle whole className this.className = this.className || value === false ? "" : jQuery._data( this, "__className__" ) || ""; } }); }, hasClass: function( selector ) { var className = " " + selector + " ", i = 0, l = this.length; for ( ; i < l; i++ ) { if ( this[i].nodeType === 1 && (" " + this[i].className + " ").replace(rclass, " ").indexOf( className ) > -1 ) { return true; } } return false; }, val: function( value ) { var hooks, ret, isFunction, elem = this[0]; if ( !arguments.length ) { if ( elem ) { hooks = jQuery.valHooks[ elem.type ] || jQuery.valHooks[ elem.nodeName.toLowerCase() ]; if ( hooks && "get" in hooks && (ret = hooks.get( elem, "value" )) !== undefined ) { return ret; } ret = elem.value; return typeof ret === "string" ? // handle most common string cases ret.replace(rreturn, "") : // handle cases where value is null/undef or number ret == null ? 
"" : ret; } return; } isFunction = jQuery.isFunction( value ); return this.each(function( i ) { var self = jQuery(this), val; if ( this.nodeType !== 1 ) { return; } if ( isFunction ) { val = value.call( this, i, self.val() ); } else { val = value; } // Treat null/undefined as ""; convert numbers to string if ( val == null ) { val = ""; } else if ( typeof val === "number" ) { val += ""; } else if ( jQuery.isArray( val ) ) { val = jQuery.map(val, function ( value ) { return value == null ? "" : value + ""; }); } hooks = jQuery.valHooks[ this.type ] || jQuery.valHooks[ this.nodeName.toLowerCase() ]; // If set returns undefined, fall back to normal setting if ( !hooks || !("set" in hooks) || hooks.set( this, val, "value" ) === undefined ) { this.value = val; } }); } }); jQuery.extend({ valHooks: { option: { get: function( elem ) { // attributes.value is undefined in Blackberry 4.7 but // uses .value. See #6932 var val = elem.attributes.value; return !val || val.specified ? elem.value : elem.text; } }, select: { get: function( elem ) { var value, i, max, option, index = elem.selectedIndex, values = [], options = elem.options, one = elem.type === "select-one"; // Nothing was selected if ( index < 0 ) { return null; } // Loop through all the selected options i = one ? index : 0; max = one ? index + 1 : options.length; for ( ; i < max; i++ ) { option = options[ i ]; // Don't return options that are disabled or in a disabled optgroup if ( option.selected && (jQuery.support.optDisabled ? 
!option.disabled : option.getAttribute("disabled") === null) && (!option.parentNode.disabled || !jQuery.nodeName( option.parentNode, "optgroup" )) ) { // Get the specific value for the option value = jQuery( option ).val(); // We don't need an array for one selects if ( one ) { return value; } // Multi-Selects return an array values.push( value ); } } // Fixes Bug #2551 -- select.val() broken in IE after form.reset() if ( one && !values.length && options.length ) { return jQuery( options[ index ] ).val(); } return values; }, set: function( elem, value ) { var values = jQuery.makeArray( value ); jQuery(elem).find("option").each(function() { this.selected = jQuery.inArray( jQuery(this).val(), values ) >= 0; }); if ( !values.length ) { elem.selectedIndex = -1; } return values; } } }, attrFn: { val: true, css: true, html: true, text: true, data: true, width: true, height: true, offset: true }, attr: function( elem, name, value, pass ) { var ret, hooks, notxml, nType = elem.nodeType; // don't get/set attributes on text, comment and attribute nodes if ( !elem || nType === 3 || nType === 8 || nType === 2 ) { return; } if ( pass && name in jQuery.attrFn ) { return jQuery( elem )[ name ]( value ); } // Fallback to prop when attributes are not supported if ( typeof elem.getAttribute === "undefined" ) { return jQuery.prop( elem, name, value ); } notxml = nType !== 1 || !jQuery.isXMLDoc( elem ); // All attributes are lowercase // Grab necessary hook if one is defined if ( notxml ) { name = name.toLowerCase(); hooks = jQuery.attrHooks[ name ] || ( rboolean.test( name ) ? 
boolHook : nodeHook ); } if ( value !== undefined ) { if ( value === null ) { jQuery.removeAttr( elem, name ); return; } else if ( hooks && "set" in hooks && notxml && (ret = hooks.set( elem, value, name )) !== undefined ) { return ret; } else { elem.setAttribute( name, "" + value ); return value; } } else if ( hooks && "get" in hooks && notxml && (ret = hooks.get( elem, name )) !== null ) { return ret; } else { ret = elem.getAttribute( name ); // Non-existent attributes return null, we normalize to undefined return ret === null ? undefined : ret; } }, removeAttr: function( elem, value ) { var propName, attrNames, name, l, isBool, i = 0; if ( value && elem.nodeType === 1 ) { attrNames = value.toLowerCase().split( rspace ); l = attrNames.length; for ( ; i < l; i++ ) { name = attrNames[ i ]; if ( name ) { propName = jQuery.propFix[ name ] || name; isBool = rboolean.test( name ); // See #9699 for explanation of this approach (setting first, then removal) // Do not do this for boolean attributes (see #10870) if ( !isBool ) { jQuery.attr( elem, name, "" ); } elem.removeAttribute( getSetAttribute ? 
name : propName ); // Set corresponding property to false for boolean attributes if ( isBool && propName in elem ) { elem[ propName ] = false; } } } } }, attrHooks: { type: { set: function( elem, value ) { // We can't allow the type property to be changed (since it causes problems in IE) if ( rtype.test( elem.nodeName ) && elem.parentNode ) { jQuery.error( "type property can't be changed" ); } else if ( !jQuery.support.radioValue && value === "radio" && jQuery.nodeName(elem, "input") ) { // Setting the type on a radio button after the value resets the value in IE6-9 // Reset value to its default in case type is set after value // This is for element creation var val = elem.value; elem.setAttribute( "type", value ); if ( val ) { elem.value = val; } return value; } } }, // Use the value property for back compat // Use the nodeHook for button elements in IE6/7 (#1954) value: { get: function( elem, name ) { if ( nodeHook && jQuery.nodeName( elem, "button" ) ) { return nodeHook.get( elem, name ); } return name in elem ? 
elem.value : null; }, set: function( elem, value, name ) { if ( nodeHook && jQuery.nodeName( elem, "button" ) ) { return nodeHook.set( elem, value, name ); } // Does not return so that setAttribute is also used elem.value = value; } } }, propFix: { tabindex: "tabIndex", readonly: "readOnly", "for": "htmlFor", "class": "className", maxlength: "maxLength", cellspacing: "cellSpacing", cellpadding: "cellPadding", rowspan: "rowSpan", colspan: "colSpan", usemap: "useMap", frameborder: "frameBorder", contenteditable: "contentEditable" }, prop: function( elem, name, value ) { var ret, hooks, notxml, nType = elem.nodeType; // don't get/set properties on text, comment and attribute nodes if ( !elem || nType === 3 || nType === 8 || nType === 2 ) { return; } notxml = nType !== 1 || !jQuery.isXMLDoc( elem ); if ( notxml ) { // Fix name and attach hooks name = jQuery.propFix[ name ] || name; hooks = jQuery.propHooks[ name ]; } if ( value !== undefined ) { if ( hooks && "set" in hooks && (ret = hooks.set( elem, value, name )) !== undefined ) { return ret; } else { return ( elem[ name ] = value ); } } else { if ( hooks && "get" in hooks && (ret = hooks.get( elem, name )) !== null ) { return ret; } else { return elem[ name ]; } } }, propHooks: { tabIndex: { get: function( elem ) { // elem.tabIndex doesn't always return the correct value when it hasn't been explicitly set // http://fluidproject.org/blog/2008/01/09/getting-setting-and-removing-tabindex-values-with-javascript/ var attributeNode = elem.getAttributeNode("tabindex"); return attributeNode && attributeNode.specified ? parseInt( attributeNode.value, 10 ) : rfocusable.test( elem.nodeName ) || rclickable.test( elem.nodeName ) && elem.href ? 
0 : undefined; } } } }); // Add the tabIndex propHook to attrHooks for back-compat (different case is intentional) jQuery.attrHooks.tabindex = jQuery.propHooks.tabIndex; // Hook for boolean attributes boolHook = { get: function( elem, name ) { // Align boolean attributes with corresponding properties // Fall back to attribute presence where some booleans are not supported var attrNode, property = jQuery.prop( elem, name ); return property === true || typeof property !== "boolean" && ( attrNode = elem.getAttributeNode(name) ) && attrNode.nodeValue !== false ? name.toLowerCase() : undefined; }, set: function( elem, value, name ) { var propName; if ( value === false ) { // Remove boolean attributes when set to false jQuery.removeAttr( elem, name ); } else { // value is true since we know at this point it's type boolean and not false // Set boolean attributes to the same name and set the DOM property propName = jQuery.propFix[ name ] || name; if ( propName in elem ) { // Only set the IDL specifically if it already exists on the element elem[ propName ] = true; } elem.setAttribute( name, name.toLowerCase() ); } return name; } }; // IE6/7 do not support getting/setting some attributes with get/setAttribute if ( !getSetAttribute ) { fixSpecified = { name: true, id: true, coords: true }; // Use this for any attribute in IE6/7 // This fixes almost every IE6/7 issue nodeHook = jQuery.valHooks.button = { get: function( elem, name ) { var ret; ret = elem.getAttributeNode( name ); return ret && ( fixSpecified[ name ] ? ret.nodeValue !== "" : ret.specified ) ? 
ret.nodeValue : undefined; }, set: function( elem, value, name ) { // Set the existing or create a new attribute node var ret = elem.getAttributeNode( name ); if ( !ret ) { ret = document.createAttribute( name ); elem.setAttributeNode( ret ); } return ( ret.nodeValue = value + "" ); } }; // Apply the nodeHook to tabindex jQuery.attrHooks.tabindex.set = nodeHook.set; // Set width and height to auto instead of 0 on empty string( Bug #8150 ) // This is for removals jQuery.each([ "width", "height" ], function( i, name ) { jQuery.attrHooks[ name ] = jQuery.extend( jQuery.attrHooks[ name ], { set: function( elem, value ) { if ( value === "" ) { elem.setAttribute( name, "auto" ); return value; } } }); }); // Set contenteditable to false on removals(#10429) // Setting to empty string throws an error as an invalid value jQuery.attrHooks.contenteditable = { get: nodeHook.get, set: function( elem, value, name ) { if ( value === "" ) { value = "false"; } nodeHook.set( elem, value, name ); } }; } // Some attributes require a special call on IE if ( !jQuery.support.hrefNormalized ) { jQuery.each([ "href", "src", "width", "height" ], function( i, name ) { jQuery.attrHooks[ name ] = jQuery.extend( jQuery.attrHooks[ name ], { get: function( elem ) { var ret = elem.getAttribute( name, 2 ); return ret === null ? 
undefined : ret; } }); }); } if ( !jQuery.support.style ) { jQuery.attrHooks.style = { get: function( elem ) { // Return undefined in the case of empty string // Normalize to lowercase since IE uppercases css property names return elem.style.cssText.toLowerCase() || undefined; }, set: function( elem, value ) { return ( elem.style.cssText = "" + value ); } }; } // Safari mis-reports the default selected property of an option // Accessing the parent's selectedIndex property fixes it if ( !jQuery.support.optSelected ) { jQuery.propHooks.selected = jQuery.extend( jQuery.propHooks.selected, { get: function( elem ) { var parent = elem.parentNode; if ( parent ) { parent.selectedIndex; // Make sure that it also works with optgroups, see #5701 if ( parent.parentNode ) { parent.parentNode.selectedIndex; } } return null; } }); } // IE6/7 call enctype encoding if ( !jQuery.support.enctype ) { jQuery.propFix.enctype = "encoding"; } // Radios and checkboxes getter/setter if ( !jQuery.support.checkOn ) { jQuery.each([ "radio", "checkbox" ], function() { jQuery.valHooks[ this ] = { get: function( elem ) { // Handle the case where in Webkit "" is returned instead of "on" if a value isn't specified return elem.getAttribute("value") === null ? 
"on" : elem.value; } }; }); } jQuery.each([ "radio", "checkbox" ], function() { jQuery.valHooks[ this ] = jQuery.extend( jQuery.valHooks[ this ], { set: function( elem, value ) { if ( jQuery.isArray( value ) ) { return ( elem.checked = jQuery.inArray( jQuery(elem).val(), value ) >= 0 ); } } }); }); var rformElems = /^(?:textarea|input|select)$/i, rtypenamespace = /^([^\.]*)?(?:\.(.+))?$/, rhoverHack = /(?:^|\s)hover(\.\S+)?\b/, rkeyEvent = /^key/, rmouseEvent = /^(?:mouse|contextmenu)|click/, rfocusMorph = /^(?:focusinfocus|focusoutblur)$/, rquickIs = /^(\w*)(?:#([\w\-]+))?(?:\.([\w\-]+))?$/, quickParse = function( selector ) { var quick = rquickIs.exec( selector ); if ( quick ) { // 0 1 2 3 // [ _, tag, id, class ] quick[1] = ( quick[1] || "" ).toLowerCase(); quick[3] = quick[3] && new RegExp( "(?:^|\\s)" + quick[3] + "(?:\\s|$)" ); } return quick; }, quickIs = function( elem, m ) { var attrs = elem.attributes || {}; return ( (!m[1] || elem.nodeName.toLowerCase() === m[1]) && (!m[2] || (attrs.id || {}).value === m[2]) && (!m[3] || m[3].test( (attrs[ "class" ] || {}).value )) ); }, hoverHack = function( events ) { return jQuery.event.special.hover ? events : events.replace( rhoverHack, "mouseenter$1 mouseleave$1" ); }; /* * Helper functions for managing events -- not part of the public interface. * Props to Dean Edwards' addEvent library for many of the ideas. 
*/ jQuery.event = { add: function( elem, types, handler, data, selector ) { var elemData, eventHandle, events, t, tns, type, namespaces, handleObj, handleObjIn, quick, handlers, special; // Don't attach events to noData or text/comment nodes (allow plain objects tho) if ( elem.nodeType === 3 || elem.nodeType === 8 || !types || !handler || !(elemData = jQuery._data( elem )) ) { return; } // Caller can pass in an object of custom data in lieu of the handler if ( handler.handler ) { handleObjIn = handler; handler = handleObjIn.handler; selector = handleObjIn.selector; } // Make sure that the handler has a unique ID, used to find/remove it later if ( !handler.guid ) { handler.guid = jQuery.guid++; } // Init the element's event structure and main handler, if this is the first events = elemData.events; if ( !events ) { elemData.events = events = {}; } eventHandle = elemData.handle; if ( !eventHandle ) { elemData.handle = eventHandle = function( e ) { // Discard the second event of a jQuery.event.trigger() and // when an event is called after a page has unloaded return typeof jQuery !== "undefined" && (!e || jQuery.event.triggered !== e.type) ? jQuery.event.dispatch.apply( eventHandle.elem, arguments ) : undefined; }; // Add elem as a property of the handle fn to prevent a memory leak with IE non-native events eventHandle.elem = elem; } // Handle multiple events separated by a space // jQuery(...).bind("mouseover mouseout", fn); types = jQuery.trim( hoverHack(types) ).split( " " ); for ( t = 0; t < types.length; t++ ) { tns = rtypenamespace.exec( types[t] ) || []; type = tns[1]; namespaces = ( tns[2] || "" ).split( "." ).sort(); // If event changes its type, use the special event handlers for the changed type special = jQuery.event.special[ type ] || {}; // If selector defined, determine special event api type, otherwise given type type = ( selector ? 
special.delegateType : special.bindType ) || type; // Update special based on newly reset type special = jQuery.event.special[ type ] || {}; // handleObj is passed to all event handlers handleObj = jQuery.extend({ type: type, origType: tns[1], data: data, handler: handler, guid: handler.guid, selector: selector, quick: selector && quickParse( selector ), namespace: namespaces.join(".") }, handleObjIn ); // Init the event handler queue if we're the first handlers = events[ type ]; if ( !handlers ) { handlers = events[ type ] = []; handlers.delegateCount = 0; // Only use addEventListener/attachEvent if the special events handler returns false if ( !special.setup || special.setup.call( elem, data, namespaces, eventHandle ) === false ) { // Bind the global event handler to the element if ( elem.addEventListener ) { elem.addEventListener( type, eventHandle, false ); } else if ( elem.attachEvent ) { elem.attachEvent( "on" + type, eventHandle ); } } } if ( special.add ) { special.add.call( elem, handleObj ); if ( !handleObj.handler.guid ) { handleObj.handler.guid = handler.guid; } } // Add to the element's handler list, delegates in front if ( selector ) { handlers.splice( handlers.delegateCount++, 0, handleObj ); } else { handlers.push( handleObj ); } // Keep track of which events have ever been used, for event optimization jQuery.event.global[ type ] = true; } // Nullify elem to prevent memory leaks in IE elem = null; }, global: {}, // Detach an event or set of events from an element remove: function( elem, types, handler, selector, mappedTypes ) { var elemData = jQuery.hasData( elem ) && jQuery._data( elem ), t, tns, type, origType, namespaces, origCount, j, events, special, handle, eventType, handleObj; if ( !elemData || !(events = elemData.events) ) { return; } // Once for each type.namespace in types; type may be omitted types = jQuery.trim( hoverHack( types || "" ) ).split(" "); for ( t = 0; t < types.length; t++ ) { tns = rtypenamespace.exec( types[t] ) || []; 
type = origType = tns[1]; namespaces = tns[2]; // Unbind all events (on this namespace, if provided) for the element if ( !type ) { for ( type in events ) { jQuery.event.remove( elem, type + types[ t ], handler, selector, true ); } continue; } special = jQuery.event.special[ type ] || {}; type = ( selector? special.delegateType : special.bindType ) || type; eventType = events[ type ] || []; origCount = eventType.length; namespaces = namespaces ? new RegExp("(^|\\.)" + namespaces.split(".").sort().join("\\.(?:.*\\.)?") + "(\\.|$)") : null; // Remove matching events for ( j = 0; j < eventType.length; j++ ) { handleObj = eventType[ j ]; if ( ( mappedTypes || origType === handleObj.origType ) && ( !handler || handler.guid === handleObj.guid ) && ( !namespaces || namespaces.test( handleObj.namespace ) ) && ( !selector || selector === handleObj.selector || selector === "**" && handleObj.selector ) ) { eventType.splice( j--, 1 ); if ( handleObj.selector ) { eventType.delegateCount--; } if ( special.remove ) { special.remove.call( elem, handleObj ); } } } // Remove generic event handler if we removed something and no more handlers exist // (avoids potential for endless recursion during removal of special event handlers) if ( eventType.length === 0 && origCount !== eventType.length ) { if ( !special.teardown || special.teardown.call( elem, namespaces ) === false ) { jQuery.removeEvent( elem, type, elemData.handle ); } delete events[ type ]; } } // Remove the expando if it's no longer used if ( jQuery.isEmptyObject( events ) ) { handle = elemData.handle; if ( handle ) { handle.elem = null; } // removeData also checks for emptiness and clears the expando if empty // so use it instead of delete jQuery.removeData( elem, [ "events", "handle" ], true ); } }, // Events that are safe to short-circuit if no handlers are attached. // Native DOM events should not be added, they may have inline handlers. 
customEvent: { "getData": true, "setData": true, "changeData": true }, trigger: function( event, data, elem, onlyHandlers ) { // Don't do events on text and comment nodes if ( elem && (elem.nodeType === 3 || elem.nodeType === 8) ) { return; } // Event object or event type var type = event.type || event, namespaces = [], cache, exclusive, i, cur, old, ontype, special, handle, eventPath, bubbleType; // focus/blur morphs to focusin/out; ensure we're not firing them right now if ( rfocusMorph.test( type + jQuery.event.triggered ) ) { return; } if ( type.indexOf( "!" ) >= 0 ) { // Exclusive events trigger only for the exact event (no namespaces) type = type.slice(0, -1); exclusive = true; } if ( type.indexOf( "." ) >= 0 ) { // Namespaced trigger; create a regexp to match event type in handle() namespaces = type.split("."); type = namespaces.shift(); namespaces.sort(); } if ( (!elem || jQuery.event.customEvent[ type ]) && !jQuery.event.global[ type ] ) { // No jQuery handlers for this event type, and it can't have inline handlers return; } // Caller can pass in an Event, Object, or just an event type string event = typeof event === "object" ? // jQuery.Event object event[ jQuery.expando ] ? event : // Object literal new jQuery.Event( type, event ) : // Just the event type (string) new jQuery.Event( type ); event.type = type; event.isTrigger = true; event.exclusive = exclusive; event.namespace = namespaces.join( "." ); event.namespace_re = event.namespace? new RegExp("(^|\\.)" + namespaces.join("\\.(?:.*\\.)?") + "(\\.|$)") : null; ontype = type.indexOf( ":" ) < 0 ? 
"on" + type : ""; // Handle a global trigger if ( !elem ) { // TODO: Stop taunting the data cache; remove global events and always attach to document cache = jQuery.cache; for ( i in cache ) { if ( cache[ i ].events && cache[ i ].events[ type ] ) { jQuery.event.trigger( event, data, cache[ i ].handle.elem, true ); } } return; } // Clean up the event in case it is being reused event.result = undefined; if ( !event.target ) { event.target = elem; } // Clone any incoming data and prepend the event, creating the handler arg list data = data != null ? jQuery.makeArray( data ) : []; data.unshift( event ); // Allow special events to draw outside the lines special = jQuery.event.special[ type ] || {}; if ( special.trigger && special.trigger.apply( elem, data ) === false ) { return; } // Determine event propagation path in advance, per W3C events spec (#9951) // Bubble up to document, then to window; watch for a global ownerDocument var (#9724) eventPath = [[ elem, special.bindType || type ]]; if ( !onlyHandlers && !special.noBubble && !jQuery.isWindow( elem ) ) { bubbleType = special.delegateType || type; cur = rfocusMorph.test( bubbleType + type ) ? 
elem : elem.parentNode; old = null; for ( ; cur; cur = cur.parentNode ) { eventPath.push([ cur, bubbleType ]); old = cur; } // Only add window if we got to document (e.g., not plain obj or detached DOM) if ( old && old === elem.ownerDocument ) { eventPath.push([ old.defaultView || old.parentWindow || window, bubbleType ]); } } // Fire handlers on the event path for ( i = 0; i < eventPath.length && !event.isPropagationStopped(); i++ ) { cur = eventPath[i][0]; event.type = eventPath[i][1]; handle = ( jQuery._data( cur, "events" ) || {} )[ event.type ] && jQuery._data( cur, "handle" ); if ( handle ) { handle.apply( cur, data ); } // Note that this is a bare JS function and not a jQuery handler handle = ontype && cur[ ontype ]; if ( handle && jQuery.acceptData( cur ) && handle.apply( cur, data ) === false ) { event.preventDefault(); } } event.type = type; // If nobody prevented the default action, do it now if ( !onlyHandlers && !event.isDefaultPrevented() ) { if ( (!special._default || special._default.apply( elem.ownerDocument, data ) === false) && !(type === "click" && jQuery.nodeName( elem, "a" )) && jQuery.acceptData( elem ) ) { // Call a native DOM method on the target with the same name as the event. // Can't use an .isFunction() check here because IE6/7 fails that test. 
// Don't do default actions on window, that's where global variables be (#6170) // IE<9 dies on focus/blur to hidden element (#1486) if ( ontype && elem[ type ] && ((type !== "focus" && type !== "blur") || event.target.offsetWidth !== 0) && !jQuery.isWindow( elem ) ) { // Don't re-trigger an onFOO event when we call its FOO() method old = elem[ ontype ]; if ( old ) { elem[ ontype ] = null; } // Prevent re-triggering of the same event, since we already bubbled it above jQuery.event.triggered = type; elem[ type ](); jQuery.event.triggered = undefined; if ( old ) { elem[ ontype ] = old; } } } } return event.result; }, dispatch: function( event ) { // Make a writable jQuery.Event from the native event object event = jQuery.event.fix( event || window.event ); var handlers = ( (jQuery._data( this, "events" ) || {} )[ event.type ] || []), delegateCount = handlers.delegateCount, args = [].slice.call( arguments, 0 ), run_all = !event.exclusive && !event.namespace, special = jQuery.event.special[ event.type ] || {}, handlerQueue = [], i, j, cur, jqcur, ret, selMatch, matched, matches, handleObj, sel, related; // Use the fix-ed jQuery.Event rather than the (read-only) native event args[0] = event; event.delegateTarget = this; // Call the preDispatch hook for the mapped type, and let it bail if desired if ( special.preDispatch && special.preDispatch.call( this, event ) === false ) { return; } // Determine handlers that should run if there are delegated events // Avoid non-left-click bubbling in Firefox (#3861) if ( delegateCount && !(event.button && event.type === "click") ) { // Pregenerate a single jQuery object for reuse with .is() jqcur = jQuery(this); jqcur.context = this.ownerDocument || this; for ( cur = event.target; cur != this; cur = cur.parentNode || this ) { // Don't process events on disabled elements (#6911, #8165) if ( cur.disabled !== true ) { selMatch = {}; matches = []; jqcur[0] = cur; for ( i = 0; i < delegateCount; i++ ) { handleObj = handlers[ i ]; sel = 
handleObj.selector; if ( selMatch[ sel ] === undefined ) { selMatch[ sel ] = ( handleObj.quick ? quickIs( cur, handleObj.quick ) : jqcur.is( sel ) ); } if ( selMatch[ sel ] ) { matches.push( handleObj ); } } if ( matches.length ) { handlerQueue.push({ elem: cur, matches: matches }); } } } } // Add the remaining (directly-bound) handlers if ( handlers.length > delegateCount ) { handlerQueue.push({ elem: this, matches: handlers.slice( delegateCount ) }); } // Run delegates first; they may want to stop propagation beneath us for ( i = 0; i < handlerQueue.length && !event.isPropagationStopped(); i++ ) { matched = handlerQueue[ i ]; event.currentTarget = matched.elem; for ( j = 0; j < matched.matches.length && !event.isImmediatePropagationStopped(); j++ ) { handleObj = matched.matches[ j ]; // Triggered event must either 1) be non-exclusive and have no namespace, or // 2) have namespace(s) a subset or equal to those in the bound event (both can have no namespace). if ( run_all || (!event.namespace && !handleObj.namespace) || event.namespace_re && event.namespace_re.test( handleObj.namespace ) ) { event.data = handleObj.data; event.handleObj = handleObj; ret = ( (jQuery.event.special[ handleObj.origType ] || {}).handle || handleObj.handler ) .apply( matched.elem, args ); if ( ret !== undefined ) { event.result = ret; if ( ret === false ) { event.preventDefault(); event.stopPropagation(); } } } } } // Call the postDispatch hook for the mapped type if ( special.postDispatch ) { special.postDispatch.call( this, event ); } return event.result; }, // Includes some event props shared by KeyEvent and MouseEvent // *** attrChange attrName relatedNode srcElement are not normalized, non-W3C, deprecated, will be removed in 1.8 *** props: "attrChange attrName relatedNode srcElement altKey bubbles cancelable ctrlKey currentTarget eventPhase metaKey relatedTarget shiftKey target timeStamp view which".split(" "), fixHooks: {}, keyHooks: { props: "char charCode key keyCode".split(" "), 
filter: function( event, original ) { // Add which for key events if ( event.which == null ) { event.which = original.charCode != null ? original.charCode : original.keyCode; } return event; } }, mouseHooks: { props: "button buttons clientX clientY fromElement offsetX offsetY pageX pageY screenX screenY toElement".split(" "), filter: function( event, original ) { var eventDoc, doc, body, button = original.button, fromElement = original.fromElement; // Calculate pageX/Y if missing and clientX/Y available if ( event.pageX == null && original.clientX != null ) { eventDoc = event.target.ownerDocument || document; doc = eventDoc.documentElement; body = eventDoc.body; event.pageX = original.clientX + ( doc && doc.scrollLeft || body && body.scrollLeft || 0 ) - ( doc && doc.clientLeft || body && body.clientLeft || 0 ); event.pageY = original.clientY + ( doc && doc.scrollTop || body && body.scrollTop || 0 ) - ( doc && doc.clientTop || body && body.clientTop || 0 ); } // Add relatedTarget, if necessary if ( !event.relatedTarget && fromElement ) { event.relatedTarget = fromElement === event.target ? original.toElement : fromElement; } // Add which for click: 1 === left; 2 === middle; 3 === right // Note: button is not normalized, so don't use it if ( !event.which && button !== undefined ) { event.which = ( button & 1 ? 1 : ( button & 2 ? 3 : ( button & 4 ? 2 : 0 ) ) ); } return event; } }, fix: function( event ) { if ( event[ jQuery.expando ] ) { return event; } // Create a writable copy of the event object and normalize some properties var i, prop, originalEvent = event, fixHook = jQuery.event.fixHooks[ event.type ] || {}, copy = fixHook.props ? 
"; // Opera can't find a second classname (in 9.6) // Also, make sure that getElementsByClassName actually exists if ( !div.getElementsByClassName || div.getElementsByClassName("e").length === 0 ) { return; } // Safari caches class attributes, doesn't catch changes (in 3.2) div.lastChild.className = "e"; if ( div.getElementsByClassName("e").length === 1 ) { return; } Expr.order.splice(1, 0, "CLASS"); Expr.find.CLASS = function( match, context, isXML ) { if ( typeof context.getElementsByClassName !== "undefined" && !isXML ) { return context.getElementsByClassName(match[1]); } }; // release memory in IE div = null; })(); function dirNodeCheck( dir, cur, doneName, checkSet, nodeCheck, isXML ) { for ( var i = 0, l = checkSet.length; i < l; i++ ) { var elem = checkSet[i]; if ( elem ) { var match = false; elem = elem[dir]; while ( elem ) { if ( elem[ expando ] === doneName ) { match = checkSet[elem.sizset]; break; } if ( elem.nodeType === 1 && !isXML ){ elem[ expando ] = doneName; elem.sizset = i; } if ( elem.nodeName.toLowerCase() === cur ) { match = elem; break; } elem = elem[dir]; } checkSet[i] = match; } } } function dirCheck( dir, cur, doneName, checkSet, nodeCheck, isXML ) { for ( var i = 0, l = checkSet.length; i < l; i++ ) { var elem = checkSet[i]; if ( elem ) { var match = false; elem = elem[dir]; while ( elem ) { if ( elem[ expando ] === doneName ) { match = checkSet[elem.sizset]; break; } if ( elem.nodeType === 1 ) { if ( !isXML ) { elem[ expando ] = doneName; elem.sizset = i; } if ( typeof cur !== "string" ) { if ( elem === cur ) { match = true; break; } } else if ( Sizzle.filter( cur, [elem] ).length > 0 ) { match = elem; break; } } elem = elem[dir]; } checkSet[i] = match; } } } if ( document.documentElement.contains ) { Sizzle.contains = function( a, b ) { return a !== b && (a.contains ? 
a.contains(b) : true); }; } else if ( document.documentElement.compareDocumentPosition ) { Sizzle.contains = function( a, b ) { return !!(a.compareDocumentPosition(b) & 16); }; } else { Sizzle.contains = function() { return false; }; } Sizzle.isXML = function( elem ) { // documentElement is verified for cases where it doesn't yet exist // (such as loading iframes in IE - #4833) var documentElement = (elem ? elem.ownerDocument || elem : 0).documentElement; return documentElement ? documentElement.nodeName !== "HTML" : false; }; var posProcess = function( selector, context, seed ) { var match, tmpSet = [], later = "", root = context.nodeType ? [context] : context; // Position selectors must be done after the filter // And so must :not(positional) so we move all PSEUDOs to the end while ( (match = Expr.match.PSEUDO.exec( selector )) ) { later += match[0]; selector = selector.replace( Expr.match.PSEUDO, "" ); } selector = Expr.relative[selector] ? selector + "*" : selector; for ( var i = 0, l = root.length; i < l; i++ ) { Sizzle( selector, root[i], tmpSet, seed ); } return Sizzle.filter( later, tmpSet ); }; // EXPOSE // Override sizzle attribute retrieval Sizzle.attr = jQuery.attr; Sizzle.selectors.attrMap = {}; jQuery.find = Sizzle; jQuery.expr = Sizzle.selectors; jQuery.expr[":"] = jQuery.expr.filters; jQuery.unique = Sizzle.uniqueSort; jQuery.text = Sizzle.getText; jQuery.isXMLDoc = Sizzle.isXML; jQuery.contains = Sizzle.contains; })(); var runtil = /Until$/, rparentsprev = /^(?:parents|prevUntil|prevAll)/, // Note: This RegExp should be improved, or likely pulled from Sizzle rmultiselector = /,/, isSimple = /^.[^:#\[\.,]*$/, slice = Array.prototype.slice, POS = jQuery.expr.match.globalPOS, // methods guaranteed to produce a unique set when starting from a unique set guaranteedUnique = { children: true, contents: true, next: true, prev: true }; jQuery.fn.extend({ find: function( selector ) { var self = this, i, l; if ( typeof selector !== "string" ) { return jQuery( 
selector ).filter(function() { for ( i = 0, l = self.length; i < l; i++ ) { if ( jQuery.contains( self[ i ], this ) ) { return true; } } }); } var ret = this.pushStack( "", "find", selector ), length, n, r; for ( i = 0, l = this.length; i < l; i++ ) { length = ret.length; jQuery.find( selector, this[i], ret ); if ( i > 0 ) { // Make sure that the results are unique for ( n = length; n < ret.length; n++ ) { for ( r = 0; r < length; r++ ) { if ( ret[r] === ret[n] ) { ret.splice(n--, 1); break; } } } } } return ret; }, has: function( target ) { var targets = jQuery( target ); return this.filter(function() { for ( var i = 0, l = targets.length; i < l; i++ ) { if ( jQuery.contains( this, targets[i] ) ) { return true; } } }); }, not: function( selector ) { return this.pushStack( winnow(this, selector, false), "not", selector); }, filter: function( selector ) { return this.pushStack( winnow(this, selector, true), "filter", selector ); }, is: function( selector ) { return !!selector && ( typeof selector === "string" ? // If this is a positional selector, check membership in the returned set // so $("p:first").is("p:last") won't return true for a doc with two "p". POS.test( selector ) ? jQuery( selector, this.context ).index( this[0] ) >= 0 : jQuery.filter( selector, this ).length > 0 : this.filter( selector ).length > 0 ); }, closest: function( selectors, context ) { var ret = [], i, l, cur = this[0]; // Array (deprecated as of jQuery 1.7) if ( jQuery.isArray( selectors ) ) { var level = 1; while ( cur && cur.ownerDocument && cur !== context ) { for ( i = 0; i < selectors.length; i++ ) { if ( jQuery( cur ).is( selectors[ i ] ) ) { ret.push({ selector: selectors[ i ], elem: cur, level: level }); } } cur = cur.parentNode; level++; } return ret; } // String var pos = POS.test( selectors ) || typeof selectors !== "string" ? jQuery( selectors, context || this.context ) : 0; for ( i = 0, l = this.length; i < l; i++ ) { cur = this[i]; while ( cur ) { if ( pos ? 
pos.index(cur) > -1 : jQuery.find.matchesSelector(cur, selectors) ) { ret.push( cur ); break; } else { cur = cur.parentNode; if ( !cur || !cur.ownerDocument || cur === context || cur.nodeType === 11 ) { break; } } } } ret = ret.length > 1 ? jQuery.unique( ret ) : ret; return this.pushStack( ret, "closest", selectors ); }, // Determine the position of an element within // the matched set of elements index: function( elem ) { // No argument, return index in parent if ( !elem ) { return ( this[0] && this[0].parentNode ) ? this.prevAll().length : -1; } // index in selector if ( typeof elem === "string" ) { return jQuery.inArray( this[0], jQuery( elem ) ); } // Locate the position of the desired element return jQuery.inArray( // If it receives a jQuery object, the first element is used elem.jquery ? elem[0] : elem, this ); }, add: function( selector, context ) { var set = typeof selector === "string" ? jQuery( selector, context ) : jQuery.makeArray( selector && selector.nodeType ? [ selector ] : selector ), all = jQuery.merge( this.get(), set ); return this.pushStack( isDisconnected( set[0] ) || isDisconnected( all[0] ) ? all : jQuery.unique( all ) ); }, andSelf: function() { return this.add( this.prevObject ); } }); // A painfully simple check to see if an element is disconnected // from a document (should be improved, where feasible). function isDisconnected( node ) { return !node || !node.parentNode || node.parentNode.nodeType === 11; } jQuery.each({ parent: function( elem ) { var parent = elem.parentNode; return parent && parent.nodeType !== 11 ? 
parent : null; }, parents: function( elem ) { return jQuery.dir( elem, "parentNode" ); }, parentsUntil: function( elem, i, until ) { return jQuery.dir( elem, "parentNode", until ); }, next: function( elem ) { return jQuery.nth( elem, 2, "nextSibling" ); }, prev: function( elem ) { return jQuery.nth( elem, 2, "previousSibling" ); }, nextAll: function( elem ) { return jQuery.dir( elem, "nextSibling" ); }, prevAll: function( elem ) { return jQuery.dir( elem, "previousSibling" ); }, nextUntil: function( elem, i, until ) { return jQuery.dir( elem, "nextSibling", until ); }, prevUntil: function( elem, i, until ) { return jQuery.dir( elem, "previousSibling", until ); }, siblings: function( elem ) { return jQuery.sibling( ( elem.parentNode || {} ).firstChild, elem ); }, children: function( elem ) { return jQuery.sibling( elem.firstChild ); }, contents: function( elem ) { return jQuery.nodeName( elem, "iframe" ) ? elem.contentDocument || elem.contentWindow.document : jQuery.makeArray( elem.childNodes ); } }, function( name, fn ) { jQuery.fn[ name ] = function( until, selector ) { var ret = jQuery.map( this, fn, until ); if ( !runtil.test( name ) ) { selector = until; } if ( selector && typeof selector === "string" ) { ret = jQuery.filter( selector, ret ); } ret = this.length > 1 && !guaranteedUnique[ name ] ? jQuery.unique( ret ) : ret; if ( (this.length > 1 || rmultiselector.test( selector )) && rparentsprev.test( name ) ) { ret = ret.reverse(); } return this.pushStack( ret, name, slice.call( arguments ).join(",") ); }; }); jQuery.extend({ filter: function( expr, elems, not ) { if ( not ) { expr = ":not(" + expr + ")"; } return elems.length === 1 ? jQuery.find.matchesSelector(elems[0], expr) ? 
[ elems[0] ] : [] : jQuery.find.matches(expr, elems); }, dir: function( elem, dir, until ) { var matched = [], cur = elem[ dir ]; while ( cur && cur.nodeType !== 9 && (until === undefined || cur.nodeType !== 1 || !jQuery( cur ).is( until )) ) { if ( cur.nodeType === 1 ) { matched.push( cur ); } cur = cur[dir]; } return matched; }, nth: function( cur, result, dir, elem ) { result = result || 1; var num = 0; for ( ; cur; cur = cur[dir] ) { if ( cur.nodeType === 1 && ++num === result ) { break; } } return cur; }, sibling: function( n, elem ) { var r = []; for ( ; n; n = n.nextSibling ) { if ( n.nodeType === 1 && n !== elem ) { r.push( n ); } } return r; } }); // Implement the identical functionality for filter and not function winnow( elements, qualifier, keep ) { // Can't pass null or undefined to indexOf in Firefox 4 // Set to 0 to skip string check qualifier = qualifier || 0; if ( jQuery.isFunction( qualifier ) ) { return jQuery.grep(elements, function( elem, i ) { var retVal = !!qualifier.call( elem, i, elem ); return retVal === keep; }); } else if ( qualifier.nodeType ) { return jQuery.grep(elements, function( elem, i ) { return ( elem === qualifier ) === keep; }); } else if ( typeof qualifier === "string" ) { var filtered = jQuery.grep(elements, function( elem ) { return elem.nodeType === 1; }); if ( isSimple.test( qualifier ) ) { return jQuery.filter(qualifier, filtered, !keep); } else { qualifier = jQuery.filter( qualifier, filtered ); } } return jQuery.grep(elements, function( elem, i ) { return ( jQuery.inArray( elem, qualifier ) >= 0 ) === keep; }); } function createSafeFragment( document ) { var list = nodeNames.split( "|" ), safeFrag = document.createDocumentFragment(); if ( safeFrag.createElement ) { while ( list.length ) { safeFrag.createElement( list.pop() ); } } return safeFrag; } var nodeNames = "abbr|article|aside|audio|bdi|canvas|data|datalist|details|figcaption|figure|footer|" + 
"header|hgroup|mark|meter|nav|output|progress|section|summary|time|video", rinlinejQuery = / jQuery\d+="(?:\d+|null)"/g, rleadingWhitespace = /^\s+/, rxhtmlTag = /<(?!area|br|col|embed|hr|img|input|link|meta|param)(([\w:]+)[^>]*)\/>/ig, rtagName = /<([\w:]+)/, rtbody = /]", "i"), // checked="checked" or checked rchecked = /checked\s*(?:[^=]|=\s*.checked.)/i, rscriptType = /\/(java|ecma)script/i, rcleanScript = /^\s*", "" ], legend: [ 1, "
", "
" ], thead: [ 1, "", "
" ], tr: [ 2, "", "
" ], td: [ 3, "", "
" ], col: [ 2, "", "
" ], area: [ 1, "", "" ], _default: [ 0, "", "" ] }, safeFragment = createSafeFragment( document ); wrapMap.optgroup = wrapMap.option; wrapMap.tbody = wrapMap.tfoot = wrapMap.colgroup = wrapMap.caption = wrapMap.thead; wrapMap.th = wrapMap.td; // IE can't serialize and

Shell

The shell provides a convenient way for you to interact with SQLite, perform administration and supply SQL for execution. It is modelled after the shell that comes with SQLite, which requires separate compilation and installation.

A number of the quirks and bugs in the SQLite shell are also addressed. It provides command line editing and completion. You can easily include it into your own program to provide SQLite interaction and add your own commands.

Commands

In addition to executing SQL, these are the commands available with their short help description. Use .help COMMAND (eg .help autoimport) to get more detailed information.

.autoimport FILENAME ?TABLE?  Imports filename creating a table and
                              automatically working out separators and data
                              types (alternative to .import command)
.backup ?DB? FILE             Backup DB (default "main") to FILE
.bail ON|OFF                  Stop after hitting an error (default OFF)
.colour SCHEME                Selects a colour scheme from default, off
.databases                    Lists names and files of attached databases
.dump ?TABLE? [TABLE...]      Dumps all or specified tables in SQL text format
.echo ON|OFF                  If ON then each SQL statement or command is
                              printed before execution (default OFF)
.encoding ENCODING            Set the encoding used for new files opened via
                              .output and imports
.exceptions ON|OFF            If ON then detailed tracebacks are shown on
                              exceptions (default OFF)
.exit                         Exit this program
.explain ON|OFF               Set output mode suitable for explain (default OFF)
.find what ?TABLE?            Searches all columns of all tables for a value
.header(s) ON|OFF             Display the column names in output (default OFF)
.help ?COMMAND?               Shows list of commands and their usage.  If
                              COMMAND is specified then shows detail about that
                              COMMAND.  ('.help all' will show detailed help
                              about all commands.)
.import FILE TABLE            Imports separated data from FILE into TABLE
.indices TABLE                Lists all indices on table TABLE
.load FILE ?ENTRY?            Loads a SQLite extension library
.mode MODE ?TABLE?            Sets output mode to one of column csv html insert
                              json line list python tabs tcl
.nullvalue STRING             Print STRING in place of null values
.output FILENAME              Send output to FILENAME (or stdout)
.print STRING                 print the literal STRING
.prompt MAIN ?CONTINUE?       Changes the prompts for first line and
                              continuation lines
.quit                         Exit this program
.read FILENAME                Processes SQL and commands in FILENAME (or Python
                              if FILENAME ends with .py)
.restore ?DB? FILE            Restore database from FILE into DB (default
                              "main")
.schema ?TABLE? [TABLE...]    Shows SQL for table
.separator STRING             Change separator for output mode and .import
.show                         Show the current values for various settings.
.tables ?PATTERN?             Lists names of tables matching LIKE pattern
.timeout MS                   Try opening locked tables for MS milliseconds
.timer ON|OFF                 Control printing of time and resource usage after
                              each query
.width NUM NUM ...            Set the column widths for "column" mode

Command Line Usage

You can use the shell directly from the command line. Invoke it like this:

$ python -c "import apsw;apsw.main()"  [options and arguments]

The following command line options are accepted:

Usage: program [OPTIONS] FILENAME [SQL|CMD] [SQL|CMD]...
FILENAME is the name of a SQLite database. A new database is
created if the file does not exist.
OPTIONS include:
   -init filename       read/process named file
   -echo                print commands before execution
   -[no]header          turn headers on or off
   -bail                stop after hitting an error
   -interactive         force interactive I/O
   -batch               force batch I/O
   -column              set output mode to 'column'
   -csv                 set output mode to 'csv'
   -html                set output mode to 'html'
   -line                set output mode to 'line'
   -list                set output mode to 'list'
   -python              set output mode to 'python'
   -separator 'x'       set output field separator (|)
   -nullvalue 'text'    set text string for NULL values
   -version             show SQLite version
   -encoding 'name'     the encoding to use for files
                        opened via .import, .read & .output
   -nocolour            disables colour output to screen

Notes

To interrupt the shell press Control-C. (On Windows if you press Control-Break then the program will be instantly aborted.)

For Windows users you won't have command line editing and completion unless you install a readline module. Fortunately there is one at http://ipython.scipy.org/moin/PyReadline/Intro which works. However, if the Shell offers no completions, readline will start matching filenames even if they make no sense in the context.

For Windows users you won't get colour output unless you install colorama.

Example

All examples of using the SQLite shell should work as is, plus you get extra features and functionality like colour, command line completion and better dumps.

You can also use the shell programmatically (or even interactively and programmatically at the same time). See the example for using the API.

Unicode

SQLite only works with Unicode strings. All data supplied to it should be Unicode and all data retrieved is Unicode. (APSW functions the same way because of this.)

At the technical level there is a difference between bytes and characters. Bytes are how data is stored in files and transmitted over the network. In order to turn bytes into characters and characters into bytes an encoding has to be used. Some example encodings are ASCII, UTF-8, ISO8859-1, SJIS etc. (With the exception of UTF-8/16/32, other encodings can only map a very small subset of Unicode.)
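A concrete sketch of that distinction, using Python directly: the same four characters become different bytes under different encodings.

```python
# The same characters produce different bytes depending on the encoding,
# and decoding with the matching encoding recovers the characters.
s = u"caf\u00e9"                 # four characters, the last is e-acute
print(s.encode("utf-8"))         # b'caf\xc3\xa9' - five bytes
print(s.encode("iso8859-1"))     # b'caf\xe9'     - four bytes
print(s.encode("utf-8").decode("utf-8") == s)  # True - lossless round trip
```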

If the shell reads data that is not valid for the input encoding or cannot convert Unicode to the output encoding then you will get an error.

When the shell starts Python automatically detects the encodings to use for console input and output. (For example on Unix like systems the LC_CTYPE environment variable is sometimes used. On Windows it can find out the code page.) You can override this autodetection by setting the PYTHONIOENCODING environment variable.

There is also a .encoding command. This sets what encoding is used for any subsequent .read, .import and .output commands but does not affect existing open files and console. When other programs offer you a choice for encoding the best value to pick is UTF8 as it allows full representation of Unicode.

In addition to specifying the encoding, you can also specify the error handling used when a character needs to be output but is not present in the encoding. The default is ‘strict’ which results in an error. ‘replace’ will replace the character with ‘?’ or something similar while ‘xmlcharrefreplace’ uses XML entities. To specify the error handling, add a colon and the handler name after the encoding - eg:

.encoding iso-8859-1:replace

The same method is used when setting PYTHONIOENCODING.
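For illustration, here is how those error handlers behave, using Python's built-in codec machinery:

```python
# Encoding text containing characters outside the target encoding, with
# each of the error handlers the .encoding command accepts.
text = u"caf\u00e9 \u2603"  # e-acute and a snowman: neither fits in ASCII

print(text.encode("ascii", "replace"))            # b'caf? ?'
print(text.encode("ascii", "xmlcharrefreplace"))  # b'caf&#233; &#9731;'
try:
    text.encode("ascii", "strict")
except UnicodeEncodeError:
    print("strict raised UnicodeEncodeError")
```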

This Joel on Software article contains an excellent overview of character sets, code pages and Unicode.

Shell class

This is the API should you want to integrate the code into your shell. Not shown here are the functions that implement various commands. They are named after the command. For example .exit is implemented by command_exit. You can add new commands by having your subclass have the relevant functions. The doc string of the function is used by the help command. Output modes work in a similar way. For example there is an output_html method and again doc strings are used by the help function and you add more by just implementing an appropriately named method.
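A standalone sketch of that naming convention - the class and commands below are illustrative, not part of APSW:

```python
# Mimics how the Shell discovers commands: methods named command_XXX
# implement .XXX, and their docstrings feed the help output.
class MiniShell:
    def command_exit(self, args):
        "Exit this program"
    def command_hello(self, args):
        "Greets the arguments"

    def available_commands(self):
        # Every method following the command_ convention becomes a command
        return sorted(name[len("command_"):]
                      for name in dir(self)
                      if name.startswith("command_"))

shell = MiniShell()
print(shell.available_commands())  # ['exit', 'hello']
```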

Note that in addition to extending the shell, you can also use the .read command supplying a filename with a .py extension. You can then monkey patch the shell as needed.

class Shell(stdin=None, stdout=None, stderr=None, encoding='utf8', args=None, db=None)

Implements a SQLite shell

Parameters:
  • stdin – Where to read input from (default sys.stdin)
  • stdout – Where to send output (default sys.stdout)
  • stderr – Where to send errors (default sys.stderr)
  • encoding – Default encoding for files opened/created by the Shell. If you want stdin/out/err to use a particular encoding then you need to provide them already configured that way.
  • args – This should be program arguments only (ie if passing in sys.argv do not include sys.argv[0], which is the program name). You can also pass in None and then call process_args() if you want to catch any errors in handling the arguments yourself.
  • db – An existing Connection you wish to use

The commands and behaviour are modelled after the interactive shell that is part of SQLite.

You can inherit from this class to embed in your own code and user interface. Internally everything is handled as unicode. Conversions only happen at the point of input or output which you can override in your own code.

This implementation fixes a number of bugs/quirks present in the sqlite shell. Its control-C handling is also friendlier. Some examples of issues not present in this implementation:

Errors and diagnostics are only ever sent to error output (self.stderr) and never to the regular output (self.stdout). This means using shell output is always easy and consistent.

Shell commands begin with a dot (eg .help). They are implemented as a method named after the command (eg command_help). The method is passed one parameter which is the list of arguments to the command.

Output modes are implemented by functions named after the mode (eg output_column).

When you request help the help information is automatically generated from the docstrings for the command and output functions.

You should not use a Shell object concurrently from multiple threads. It is one huge set of state information which would become inconsistent if used simultaneously, giving baffling errors. It is safe to call methods one at a time from different threads - ie it doesn't matter which thread calls a method as long as you don't call more than one concurrently.

exception Error

Class raised on errors. The expectation is that the error will be displayed by the shell as text so there are no specific subclasses as the distinctions between different types of errors doesn’t matter.

Shell.cmdloop(intro=None)

Runs the main interactive command loop.

Parameters:intro – Initial text banner to display instead of the default. Make sure you newline terminate it.
Shell.complete(token, state)

Return a possible completion for readline

This function is called with state starting at zero to get the first completion, then one/two/three etc until you return None. The best implementation is to generate the list when state==0, save it, and provide members on each increase.

The default implementation extracts the current full input from readline and then calls complete_command() or complete_sql() as appropriate saving the results for subsequent calls.

Shell.complete_command(line, token, beg, end)

Provide some completions for dot commands

Parameters:
  • line – The current complete input line
  • token – The word readline is looking for matches
  • beg – Integer offset of token in line
  • end – Integer end of token in line
Returns:

A list of completions, or an empty list if none

Shell.complete_sql(line, token, beg, end)

Provide some completions for SQL

Parameters:
  • line – The current complete input line
  • token – The word readline is looking for matches
  • beg – Integer offset of token in line
  • end – Integer end of token in line
Returns:

A list of completions, or an empty list if none

Shell.db

The current Connection

Shell.display_timing(b4, after)

Writes the difference between b4 and after to self.stderr. The data is dictionaries returned from get_resource_usage().

Shell.fixup_backslashes(s)

Implements the various backslash sequences in s such as turning backslash t into a tab.

This function is needed because shlex does not do it for us.
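A minimal, hypothetical version handling just a few sequences (the real method supports more):

```python
# Hypothetical backslash fixup.  Scanning character by character avoids
# the ordering bugs that chained str.replace calls would introduce.
_ESCAPES = {"t": "\t", "n": "\n", "r": "\r", "\\": "\\", '"': '"'}

def fixup_backslashes(s):
    out, i = [], 0
    while i < len(s):
        if s[i] == "\\" and i + 1 < len(s) and s[i + 1] in _ESCAPES:
            out.append(_ESCAPES[s[i + 1]])
            i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

print(fixup_backslashes(r"col1\tcol2").split("\t"))  # ['col1', 'col2']
```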

Shell.get_resource_usage()

Return a dict of various numbers (ints or floats). The .timer command shows the difference between before and after results of what this returns by calling display_timing()

Shell.getcompleteline()

Returns a complete input.

For dot commands it will be one line. For SQL statements it will be as many lines as necessary to form a complete statement (ie semicolon terminated). Returns None on end of file.
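For illustration, the standard library's sqlite3 module exposes the same underlying completeness test (sqlite3_complete):

```python
# A statement is complete once it is semicolon terminated.
import sqlite3

print(sqlite3.complete_statement("select * from x"))   # False - no semicolon
print(sqlite3.complete_statement("select * from x;"))  # True
```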

Shell.getline(prompt='')

Returns a single line of input (may be incomplete SQL) from self.stdin.

If EOF is reached then return None. Do not include trailing newline in return.

Shell.handle_exception()

Handles the current exception, printing a message to stderr as appropriate. It will reraise the exception if necessary (eg if bail is true)

Shell.handle_interrupt()

Deal with keyboard interrupt (typically Control-C). It will interrupt() the database and print "^C" if interactive.

Shell.pop_input()

Restore most recently pushed input parameters (interactive, self.stdin, linenumber etc). Use this if implementing a command like read. Push the current input, read the file and then pop the input to go back to before.

Shell.pop_output()

Restores most recently pushed output. There are many output parameters such as nullvalue, mode (list/tcl/html/insert etc), column widths, header etc. If you temporarily need to change some settings then push_output(), change the settings and then pop the old ones back.

A simple example is implementing a command like .dump. Push the current output, change the mode to insert so we get SQL inserts printed and then pop to go back to what was there before.
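The pattern, sketched generically (the attribute names here are illustrative, not the Shell's actual internals):

```python
# Save current output settings on a stack, change them temporarily for a
# command like .dump, then restore the previous settings afterwards.
class OutputSettings:
    def __init__(self):
        self.mode = "list"
        self.nullvalue = ""
        self._stack = []

    def push_output(self):
        self._stack.append((self.mode, self.nullvalue))

    def pop_output(self):
        self.mode, self.nullvalue = self._stack.pop()

settings = OutputSettings()
settings.push_output()
settings.mode = "insert"   # produce SQL insert statements while dumping
settings.pop_output()
print(settings.mode)       # back to "list"
```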

Shell.process_args(args)

Process command line options specified in args. It is safe to call this multiple times. We try to be compatible with SQLite shell argument parsing.

Parameters:args – A list of string options. Do not include the program as args[0]
Returns:A tuple of (databasefilename, initfiles, sqlncommands). This is provided for informational purposes only - they have already been acted upon. An example use is that the SQLite shell does not enter the main interactive loop if any sql/commands were provided.

The first non-option is the database file name. Each remaining non-option is treated as a complete input (ie it isn’t joined with others looking for a trailing semi-colon).

The SQLite shell uses single dash in front of options. We allow both single and double dashes. When an unrecognized argument is encountered then process_unknown_args() is called.

Shell.process_command(cmd)

Processes a dot command. It is split into parts using the shlex.split function which is roughly the same method used by Unix/POSIX shells.
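For example, shlex's quoting rules mean arguments containing spaces work as they would in a POSIX shell:

```python
# Splitting a dot command line the way the shell does.
import shlex

print(shlex.split('.import "my file.csv" mytable'))
# ['.import', 'my file.csv', 'mytable']
```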

Shell.process_complete_line(command)

Given some text will call the appropriate method to process it (eg process_sql() or process_command())

Shell.process_sql(sql, bindings=None, internal=False, summary=None)

Processes SQL text consisting of one or more statements

Parameters:
  • sql – SQL to execute
  • bindings – bindings for the sql
  • internal – If True then this is an internal execution (eg the .tables or .database command). When executing internal sql timings are not shown nor is the SQL echoed.
  • summary – If not None then should be a tuple of two items. If the sql returns any data then the first item is printed before the first row, and the second item is printed after the last row. An example usage is the .find command which shows table names.
Shell.process_unknown_args(args)

This is called when process_args() encounters an argument it doesn’t understand. Override this method if you want to be able to understand additional command line arguments.

Parameters:args – A list of the remaining arguments. The initial one will have had the leading dashes removed (eg if it was --foo on the command line then args[0] will be “foo”)
Returns:None if you don’t recognize the argument either. Otherwise return the list of remaining arguments after you have processed yours.
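A minimal sketch of handling a custom option. The parsing logic is shown as a standalone class so it is visible on its own; in practice you would put this method on a Shell subclass and let process_args() drive it. The option name myoption and the myoption_enabled attribute are made up for illustration.

```python
class CustomArgsShell:
    # In real code this method would live on a Shell subclass
    def process_unknown_args(self, args):
        # args[0] arrives with the leading dashes already stripped
        if args and args[0] == "myoption":
            self.myoption_enabled = True
            return args[1:]   # hand back the arguments we did not consume
        return None           # not ours - the shell will report an error
```

Returning None tells the shell the argument is still unrecognized; returning the remaining list means you consumed yours and parsing continues from there.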
Shell.push_input()

Saves the current input parameters to a stack. See pop_input().

Shell.push_output()

Saves the current output settings onto a stack. See pop_output() for more details as to why you would use this.

Shell.set_encoding(enc)

Saves enc as the default encoding, after verifying that it is valid. You can also include :error to specify error handling - eg ‘cp437:replace’

Raises an exception on invalid encoding or error

Shell.usage()

Returns the usage message. Make sure it is newline terminated

Shell.write(dest, text)

Writes text to dest. dest will typically be one of self.stdout or self.stderr.


apsw-3.8.11.1-r1/doc/connection.html: Connections to a database — APSW 3.8.11.1-r1 documentation

Connections to a database

A Connection encapsulates access to a database. You then use cursors to issue queries against the database.

You can have multiple Connections open against the same database in the same process, across threads and in other processes.

Connection class

class Connection(filename, flags=SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, vfs=None, statementcachesize=100)

This object wraps a sqlite3 pointer.

Opens the named database. You can use :memory: to get a private temporary in-memory database that is not shared with any other connections.

Parameters:
  • flags – One or more of the open flags orred together
  • vfs – The name of the vfs to use. If None then the default vfs will be used.
  • statementcachesize – Use zero to disable the statement cache, or a number larger than the total distinct SQL statements you execute frequently.

Calls: sqlite3_open_v2

Connection.__enter__() → context

You can use the database as a context manager as defined in PEP 0343. When you use with, a transaction is started. If the block finishes with an exception then the transaction is rolled back, otherwise it is committed. For example:

with connection:
    connection.cursor().execute("....")
    with connection:
        # nested is supported
        call_function(connection)
        connection.cursor().execute("...")
        with connection as db:
            # You can also use 'as'
            call_function2(db)
            db.cursor().execute("...")

Behind the scenes the savepoint functionality introduced in SQLite 3.6.8 is used.

Connection.__exit__() → False

Implements context manager in conjunction with __enter__(). Any exception that happened in the with block is raised after committing or rolling back the savepoint.

Connection.backup(databasename, sourceconnection, sourcedatabasename) → backup

Opens a backup object. All data will be copied from source database to this database.

Parameters:
  • databasename – Name of the database. This will be main for the main connection and the name you specified for attached databases.
  • sourceconnection – The Connection to copy a database from.
  • sourcedatabasename – Name of the database in the source (eg main).
Return type:

backup

See also

Calls: sqlite3_backup_init

Connection.blobopen(database, table, column, rowid, writeable) → blob

Opens a blob for incremental I/O.

Parameters:
  • database

    Name of the database. This will be main for the main connection and the name you specified for attached databases.

  • table – The name of the table
  • column – The name of the column
  • rowid – The id that uniquely identifies the row.
  • writeable – If True then you can read and write the blob. If False then you can only read it.
Return type:

blob

Calls: sqlite3_blob_open

Connection.changes() → int

Returns the number of database rows that were changed (or inserted or deleted) by the most recently completed INSERT, UPDATE, or DELETE statement.

Calls: sqlite3_changes

Connection.close([force=False])

Closes the database. If there are any outstanding cursors, blobs or backups then they are closed too. It is normally not necessary to call this method as the database is automatically closed when there are no more references. It is ok to call the method multiple times.

If your user defined functions or collations have direct or indirect references to the Connection then it won’t be automatically garbage collected because of circular referencing that can’t be automatically broken. Calling close will free all those objects and what they reference.

SQLite is designed to survive power failures at even the most awkward moments. Consequently it doesn’t matter if it is closed when the process is exited, or even if the exit is graceful or abrupt. In the worst case of having a transaction in progress, that transaction will be rolled back by the next program to open the database, reverting the database to a known good state.

If force is True then any exceptions are ignored.

Calls: sqlite3_close

Connection.collationneeded(callable)

callable will be called if a statement requires a collation that hasn’t been registered. Your callable will be passed two parameters. The first is the connection object. The second is the name of the collation. If you have the collation code available then call Connection.createcollation().

This is useful for creating collations on demand. For example you may include the locale in the collation name, but since there are thousands of locales in popular use it would not be useful to preregister them all. Using collationneeded() tells you when you need to register them.

Calls: sqlite3_collation_needed
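A sketch of the on-demand pattern described above. The factory and callback are plain Python; the commented line shows the assumed wiring to a Connection named connection. The case-insensitive comparison stands in for real locale-aware logic.

```python
def make_collation(name):
    # Stand-in for locale-aware comparison keyed off the collation name
    def collate(one, two):
        one, two = one.lower(), two.lower()
        return (one > two) - (one < two)
    return collate

def collation_needed(connection, name):
    # Called by SQLite the first time an unregistered collation is used
    connection.createcollation(name, make_collation(name))

# connection.collationneeded(collation_needed)
```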

Connection.config(op[, *args])
Parameters:

Calls: sqlite3_db_config

Connection.createaggregatefunction(name, factory[, numargs=-1])

Registers an aggregate function. Aggregate functions operate on all the relevant rows such as counting how many there are.

Parameters:
  • name – The string name of the function. It should be less than 255 characters
  • callable – The function that will be called
  • numargs – How many arguments the function takes, with -1 meaning any number

When a query starts, the factory will be called and must return a tuple of 3 items:

a context object
This can be of any type
a step function
This function is called once for each row. The first parameter will be the context object and the remaining parameters will be from the SQL statement. Any value returned will be ignored.
a final function
This function is called at the very end with the context object as a parameter. The value returned is set as the return for the function. The final function is always called even if an exception was raised by the step function. This allows you to ensure any resources are cleaned up.
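A sketch of a factory following that contract: it returns (context, step, final) and computes the longest string seen in a column. The registration line (commented) assumes a Connection named connection.

```python
def longest_factory():
    context = {"longest": ""}          # any object works as the context

    def step(ctx, *values):
        # Called once per row with the SQL arguments
        for v in values:
            if isinstance(v, str) and len(v) > len(ctx["longest"]):
                ctx["longest"] = v

    def final(ctx):
        # Called once at the end; its return value is the function result
        return ctx["longest"]

    return (context, step, final)

# connection.createaggregatefunction("longest", longest_factory)
```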

Note

You can register the same named function but with different callables and numargs. See createscalarfunction() for an example.

Calls: sqlite3_create_function_v2

Connection.createcollation(name, callback)

You can control how SQLite sorts (termed collation) when giving the COLLATE term to a SELECT. For example your collation could take into account locale or do numeric sorting.

The callback will be called with two items. It should return -1 if the first is less than the second, 0 if they are equal, and 1 if first is greater:

def mycollation(one, two):
    if one < two:
        return -1
    if one == two:
        return 0
    if one > two:
        return 1
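A sketch of the numeric sorting mentioned above: compare as numbers when both values parse, otherwise fall back to text. The registration and query lines (commented) are the assumed wiring; the collation name numericsort is made up.

```python
def numeric_collation(one, two):
    try:
        # Tuple assignment: if either conversion raises, neither name is bound
        a, b = float(one), float(two)
    except (TypeError, ValueError):
        a, b = one, two          # plain text comparison as the fallback
    return (a > b) - (a < b)

# connection.createcollation("numericsort", numeric_collation)
# cursor.execute("select v from t order by v collate numericsort")
```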

See also

Calls: sqlite3_create_collation_v2

Connection.createmodule(name, datasource)

Registers a virtual table. See Virtual Tables for details.

See also

Calls: sqlite3_create_module_v2

Connection.createscalarfunction(name, callable[, numargs=-1, deterministic=False])

Registers a scalar function. Scalar functions operate on one set of parameters once.

Parameters:
  • name – The string name of the function. It should be less than 255 characters
  • callable – The function that will be called
  • numargs – How many arguments the function takes, with -1 meaning any number
  • deterministic – When True this means the function always returns the same result for the same input arguments. SQLite’s query planner can perform additional optimisations for deterministic functions. For example a random() function is not deterministic while one that returns the length of a string is.

Note

You can register the same named function but with different callable and numargs. For example:

connection.createscalarfunction("toip", ipv4convert, 4)
connection.createscalarfunction("toip", ipv6convert, 16)
connection.createscalarfunction("toip", strconvert, -1)

The one with the correct numargs will be called and only if that doesn’t exist then the one with negative numargs will be called.

Calls: sqlite3_create_function_v2

Connection.cursor() → Cursor

Creates a new Cursor object on this database.

Return type:Cursor
Connection.db_filename(name) → String

Returns the full filename of the named (attached) database. The main database is named “main”.

Calls: sqlite3_db_filename

Connection.enableloadextension(enable)

Enables/disables extension loading which is disabled by default.

Parameters:enable – If True then extension loading is enabled, else it is disabled.

See also

Calls: sqlite3_enable_load_extension

Connection.filecontrol(dbname, op, pointer) → bool

Calls the xFileControl() method on the Virtual File System (VFS) implementing file access for the database.

Parameters:
  • dbname – The name of the database to affect (eg “main”, “temp”, attached name)
  • op – A numeric code with values less than 100 reserved for SQLite internal use.
  • pointer – A number which is treated as a void pointer at the C level.
Returns:

True or False indicating if the VFS understood the op.

If you want data returned back then the pointer needs to point to something mutable. Here is an example using ctypes of passing a Python dictionary to xFileControl() which can then modify the dictionary to set return values:

obj={"foo": 1, 2: 3}                 # object we want to pass
objwrap=ctypes.py_object(obj)        # objwrap must live before and after the call else
                                     # it gets garbage collected
connection.filecontrol(
         "main",                     # which db
         123,                        # our op code
         ctypes.addressof(objwrap))  # get pointer

The xFileControl() method then looks like this:

def xFileControl(self, op, pointer):
    if op==123:                      # our op code
        obj=ctypes.py_object.from_address(pointer).value
        # play with obj - you can use id() to verify it is the same
        print obj["foo"]
        obj["result"]="it worked"
        return True
    else:
        # pass to parent/superclass
        return super(MyFile, self).xFileControl(op, pointer)

This is how you set the chunk size by which the database grows. Do not combine it into one line as the c_int would be garbage collected before the filecontrol call is made:

chunksize=ctypes.c_int(32768)
connection.filecontrol("main", apsw.SQLITE_FCNTL_CHUNK_SIZE, ctypes.addressof(chunksize))

Calls: sqlite3_file_control

Connection.filename

The filename of the database.

Calls: sqlite3_db_filename

Connection.getautocommit() → bool

Returns if the Connection is in auto commit mode (ie not in a transaction).

Calls: sqlite3_get_autocommit

Connection.getexectrace() → callable or None

Returns the currently installed (via setexectrace()) execution tracer.

See also

Connection.getrowtrace() → callable or None

Returns the currently installed (via setrowtrace()) row tracer.

See also

Connection.interrupt()

Causes any pending operations on the database to abort at the earliest opportunity. You can call this from any thread. For example you may have a long running query when the user presses the stop button in your user interface. InterruptError will be raised in the query that got interrupted.

Calls: sqlite3_interrupt

Connection.last_insert_rowid() → int

Returns the integer key of the most recent insert in the database.

Calls: sqlite3_last_insert_rowid

Connection.limit(id[, newval]) → int

If called with one parameter then the current limit for that id is returned. If called with two then the limit is set to newval.

Parameters:
  • id – One of the runtime limit ids
  • newval – The new limit. This is a 32 bit signed integer even on 64 bit platforms.
Returns:

The limit in place on entry to the call.

See also

Calls: sqlite3_limit

Connection.loadextension(filename[, entrypoint])

Loads filename as an extension

Parameters:
  • filename – The file to load. This must be Unicode or Unicode compatible
  • entrypoint – The initialization method to call. If this parameter is not supplied then the SQLite default of sqlite3_extension_init is used.
Raises ExtensionLoadingError:
 

If the extension could not be loaded. The exception string includes more details.

Calls: sqlite3_load_extension

Connection.open_flags

The integer flags used to open the database.

Connection.open_vfs

The string name of the vfs used to open the database.

Connection.overloadfunction(name, nargs)

Registers a placeholder function so that a virtual table can provide an implementation via VTTable.FindFunction().

Parameters:
  • name – Function name
  • nargs – How many arguments the function takes

Due to SQLite ticket #3507 underlying errors will not be returned.

Calls: sqlite3_overload_function

Connection.readonly(name) → bool

True or False if the named (attached) database was opened readonly or file permissions don’t allow writing. The main database is named “main”.

An exception is raised if the database doesn’t exist.

Calls: sqlite3_db_readonly

Connection.setauthorizer(callable)

While preparing statements, SQLite will call any defined authorizer to see if a particular action is ok to be part of the statement.

Typical usage would be if you are running user supplied SQL and want to prevent harmful operations. You should also set the statementcachesize to zero.

The authorizer callback has 5 parameters:

  • An operation code
  • A string (or None) dependent on the operation (listed as 3rd)
  • A string (or None) dependent on the operation (listed as 4th)
  • A string name of the database (or None)
  • Name of the innermost trigger or view doing the access (or None)

The authorizer callback should return one of SQLITE_OK, SQLITE_DENY or SQLITE_IGNORE. (SQLITE_DENY is returned if there is an error in your Python code).

Calls: sqlite3_set_authorizer
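A sketch of an authorizer that refuses DELETE while allowing everything else. The constants are defined locally only so the snippet stands alone; in real code use apsw.SQLITE_OK, apsw.SQLITE_DENY and the apsw operation codes such as apsw.SQLITE_DELETE.

```python
SQLITE_OK, SQLITE_DENY = 0, 1    # use the apsw module constants in real code
SQLITE_DELETE = 9                # likewise - apsw.SQLITE_DELETE

def authorizer(operation, third, fourth, dbname, trigger_or_view):
    if operation == SQLITE_DELETE:
        return SQLITE_DENY       # preparing the statement will fail
    return SQLITE_OK

# connection.setauthorizer(authorizer)
```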

Connection.setbusyhandler(callable)

Sets the busy handler to callable. callable will be called with one integer argument which is the number of prior calls to the busy callback for the same lock. If the busy callback returns something that evaluates to False, then SQLite returns SQLITE_BUSY to the calling code. If the callback returns something that evaluates to True, then SQLite tries to open the table again and the cycle repeats.

If you previously called setbusytimeout() then calling this overrides that.

Calls: sqlite3_busy_handler
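A sketch of a handler implementing that cycle: back off briefly and give up after roughly a second. The commented line shows the assumed registration on a Connection named connection.

```python
import time

def busy_handler(prior_calls):
    if prior_calls >= 10:
        return False        # stop retrying - the caller gets BusyError
    time.sleep(0.1)         # brief backoff before SQLite retries the lock
    return True

# connection.setbusyhandler(busy_handler)
```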

Connection.setbusytimeout(milliseconds)

If the database is locked such as when another connection is making changes, SQLite will keep retrying. This sets the maximum amount of time SQLite will keep retrying before giving up. If the database is still busy then apsw.BusyError will be returned.

Parameters:milliseconds – Maximum thousandths of a second to wait.

If you previously called setbusyhandler() then calling this overrides that.

Calls: sqlite3_busy_timeout

Connection.setcommithook(callable)

callable will be called just before a commit. It should return zero for the commit to go ahead and non-zero for it to be turned into a rollback. In the case of an exception in your callable, a non-zero (ie rollback) value is returned.

See also

Calls: sqlite3_commit_hook

Connection.setexectrace(callable)

callable is called with the cursor, statement and bindings for each execute() or executemany() on this Connection, unless the Cursor installed its own tracer. Your execution tracer can also abort execution of a statement.

If callable is None then any existing execution tracer is removed.

Connection.setprofile(callable)

Sets a callable which is invoked at the end of execution of each statement and passed the statement string and how long it took to execute. (The execution time is in nanoseconds.) Note that it is called only on completion. If for example you do a SELECT and only read the first result, then you won’t reach the end of the statement.

Calls: sqlite3_profile
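A sketch converting the nanosecond timing into a readable report. The formatting helper is split out so it can be checked in isolation; the registration line (commented) assumes a Connection named connection.

```python
def format_profile(statement, nanoseconds):
    # Execution time arrives in nanoseconds; report milliseconds
    return "%.3f ms  %s" % (nanoseconds / 1e6, statement.strip())

def profiler(statement, nanoseconds):
    print(format_profile(statement, nanoseconds))

# connection.setprofile(profiler)
```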

Connection.setprogresshandler(callable[, nsteps=20])

Sets a callable which is invoked every nsteps SQLite instructions. The callable should return a non-zero value to abort or zero to continue. (If there is an error in your Python callable then non-zero will be returned).

See also

Calls: sqlite3_progress_handler

Connection.setrollbackhook(callable)

Sets a callable which is invoked during a rollback. If callable is None then any existing rollback hook is removed.

The callable is called with no parameters and the return value is ignored.

Calls: sqlite3_rollback_hook

Connection.setrowtrace(callable)

callable is called with the cursor and row being returned for cursors associated with this Connection, unless the Cursor installed its own tracer. You can change the data that is returned or cause the row to be skipped altogether.

If callable is None then any existing row tracer is removed.

Connection.setupdatehook(callable)

Calls callable whenever a row is updated, deleted or inserted. If callable is None then any existing update hook is removed. The update hook cannot make changes to the database while the query is still executing, but can record them for later use or apply them in a different connection.

The update hook is called with 4 parameters:

type (int)
SQLITE_INSERT, SQLITE_DELETE or SQLITE_UPDATE
database name (string)
This is main for the database or the name specified in ATTACH
table name (string)
The table on which the update happened
rowid (64 bit integer)
The affected row

See also

Calls: sqlite3_update_hook
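A sketch of the record-for-later-use approach, since the hook must not modify the database while the query is still executing. The audit_log list is a hypothetical in-memory record; the kind argument would be one of the apsw constants named above.

```python
audit_log = []

def update_hook(kind, dbname, table, rowid):
    # kind is apsw.SQLITE_INSERT, apsw.SQLITE_DELETE or apsw.SQLITE_UPDATE
    audit_log.append((kind, dbname, table, rowid))

# connection.setupdatehook(update_hook)
# ... later, inspect or replay audit_log using a different connection
```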

Connection.setwalhook(callable)

callable will be called just after data is committed in Write Ahead Logging mode. It should return SQLITE_OK or an error code. The callback is called with 3 parameters:

  • The Connection
  • The database name (eg “main” or the name of an attached database)
  • The number of pages in the wal log

You can pass in None in order to clear an existing hook.

Calls: sqlite3_wal_hook

Connection.sqlite3pointer() → int

Returns the underlying sqlite3 * for the connection. This method is useful if there are other C level libraries in the same process and you want them to use the APSW connection handle. The value is returned as a number using PyLong_FromVoidPtr() under the hood. You should also ensure that you increment the reference count on the Connection for as long as the other libraries are using the pointer. It is also a very good idea to call sqlitelibversion() and ensure it is the same as the other libraries.

Connection.status(op, reset=False) -> (int, int)

Returns current and highwater measurements for the database.

Parameters:
  • op – A status parameter
  • reset – If True then the highwater is set to the current value
Returns:

A tuple of current value and highwater value

See also

The status() example which works in exactly the same way.

Calls: sqlite3_db_status

Connection.totalchanges() → int

Returns the total number of database rows that have been modified, inserted, or deleted since the database connection was opened.

Calls: sqlite3_total_changes

Connection.wal_autocheckpoint(n)

Sets how often the Write Ahead Logging checkpointing is run.

Parameters:n – A number representing the checkpointing interval or zero/negative to disable auto checkpointing.

Calls: sqlite3_wal_autocheckpoint

Connection.wal_checkpoint(dbname=None, mode=apsw.SQLITE_CHECKPOINT_PASSIVE) -> (int, int)

Does a WAL checkpoint. Has no effect if the database(s) are not in WAL mode.

Parameters:
  • dbname – The name of the database or all databases if None
  • mode – One of the checkpoint modes.
Returns:

A tuple of the size of the WAL log in frames and the number of frames checkpointed as described in the documentation.

Calls: sqlite3_wal_checkpoint_v2


apsw-3.8.11.1-r1/doc/blob.html: Blob Input/Output — APSW 3.8.11.1-r1 documentation

Blob Input/Output

A blob is a SQLite datatype representing a sequence of bytes. It can be zero or more bytes in size.

SQLite blobs have an absolute maximum size of 2GB and a default maximum size of 1GB.

An alternate approach to using blobs is to store the data in files and store the filename in the database. Doing so loses the ACID properties of SQLite.

zeroblob class

class zeroblob(size)

If you want to insert a blob into a row, you previously needed to supply the entire blob in one go. To read just one byte also required retrieving the blob in its entirety. For example to insert a 100MB file you would have done:

largedata=open("largefile", "rb").read()
cur.execute("insert into foo values(?)", (buffer(largedata),))

SQLite 3.5 allowed for incremental Blob I/O so you can read and write blobs in small amounts. You cannot change the size of a blob so you need to reserve space which you do through zeroblob which creates a blob of the specified size but full of zero bytes. For example you would reserve space for your 100MB blob in one of these two ways:

cur.execute("insert into foo values(zeroblob(100000000))")
cur.execute("insert into foo values(?)",
             (apsw.zeroblob(100000000),))

This class is used for the second way. Once a blob exists in the database, you then use the blob class to read and write its contents.

zeroblob.length() → int

Size of zero blob in bytes.

blob class

class blob

This object is created by Connection.blobopen() and provides access to a blob in the database. It behaves like a Python file. At the C level it wraps a sqlite3_blob.

Note

You cannot change the size of a blob using this object. You should create it with the correct size in advance either by using zeroblob or the zeroblob() function.

See the example.

blob.__enter__() → context

You can use a blob as a context manager as defined in PEP 0343. When you use a with statement, the blob is always closed on exit from the block, even if an exception occurred in the block.

For example:

with connection.blobopen() as blob:
    blob.write("...")
    res=blob.read(1024)
blob.__exit__() → False

Implements context manager in conjunction with __enter__(). Any exception that happened in the with block is raised after closing the blob.

blob.close([force=False])

Closes the blob. Note that even if an error occurs the blob is still closed.

Note

In some cases errors that technically occurred in the read() and write() routines may not be reported until close is called. Similarly errors that occurred in those methods (eg calling write() on a read-only blob) may also be re-reported in close(). (This behaviour is what the underlying SQLite APIs do - it is not APSW doing it.)

It is okay to call close() multiple times.

Parameters:force – Ignores any errors during close.

Calls: sqlite3_blob_close

blob.length() → int

Returns the size of the blob in bytes.

Calls: sqlite3_blob_bytes

blob.read([nbytes]) → bytes

Reads the amount of data requested, or till end of file, whichever is earlier. Attempting to read beyond the end of the blob returns the empty string/bytes, in the same manner as end of file on normal file objects.

Return type:(Python 2) string (Python 3) bytes

Calls: sqlite3_blob_read

blob.readinto(buffer[, offset=0, length=remaining-buffer]) → None

Reads from the blob into a buffer you have supplied. This method is useful if you already have a buffer like object that data is being assembled in, and avoids allocating results in blob.read() and then copying into buffer.

Parameters:
  • buffer – A writable buffer like object. In Python 2.6 onwards there is a bytearray type that is very useful. array.array also works.
  • offset – The position to start writing into the buffer defaulting to the beginning.
  • length – How much of the blob to read. The default is the remaining space left in the buffer. Note that if there is more space available than blob left then you will get a ValueError exception.

Calls: sqlite3_blob_read
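The buffer-reuse pattern here is the same one Python's own binary streams use. This stdlib sketch shows the mechanics with io.BytesIO standing in for an opened blob: one bytearray is filled repeatedly instead of allocating a new bytes object per read.

```python
import io

buf = bytearray(4)                 # reused across reads - no new allocations
stream = io.BytesIO(b"abcdefgh")   # stands in for an opened blob
n = stream.readinto(buf)           # fills buf from the current position
```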

blob.reopen(rowid)

Change this blob object to point to a different row. It can be faster than closing an existing blob and opening a new one.

Calls: sqlite3_blob_reopen

blob.seek(offset[, whence=0]) → None

Changes current position to offset biased by whence.

Parameters:
  • offset – New position to seek to. Can be positive or negative number.
  • whence – Use 0 if offset is relative to the beginning of the blob, 1 if offset is relative to the current position, and 2 if offset is relative to the end of the blob.
Raises ValueError:
 

If the resulting offset is before the beginning (less than zero) or beyond the end of the blob.

blob.tell() → int

Returns the current offset.

blob.write(data) → None

Writes the data to the blob.

Parameters:

data – (Python 2) buffer or string. (Python 3) buffer or bytes.

Raises:
  • TypeError – Wrong data type
  • ValueError – If the data would go beyond the end of the blob. You cannot increase the size of a blob by writing beyond the end. You need to use zeroblob to set the desired size first when inserting the blob.

Calls: sqlite3_blob_write


apsw-3.8.11.1-r1/doc/benchmarking.html: Benchmarking — APSW 3.8.11.1-r1 documentation

Benchmarking

Before you do any benchmarking with APSW or other ways of accessing SQLite, you must understand how and when SQLite does transactions. See transaction control. APSW does not alter SQLite’s behaviour with transactions.

Some access layers try to interpret your SQL and manage transactions behind your back, which may or may not work well with SQLite also doing its own transactions. You should always manage your transactions yourself. For example to insert 1,000 rows wrap it in a single transaction else you will have 1,000 transactions. The best clue that you have one transaction per statement is having a maximum of 60 statements per second. You need two drive rotations to do a transaction - the data has to be committed to the main file and the journal - and 7200 RPM drives do 120 rotations a second. On the other hand if you don’t put in the transaction boundaries yourself and get more than 60 statements a second, then your access mechanism is silently starting transactions for you. This topic also comes up fairly frequently in the SQLite mailing list archives.
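The effect is easy to demonstrate. This sketch uses the stdlib sqlite3 module purely so it runs anywhere; the transaction principle is identical under APSW, which never starts transactions for you. With isolation_level=None the stdlib module is in autocommit mode, so the explicit begin/commit pair is the only transaction boundary.

```python
import sqlite3

# autocommit mode: we manage transaction boundaries ourselves
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("create table foo(x)")

rows = [(i,) for i in range(1000)]

# Without "begin", each insert would be its own implicit transaction -
# on a real disk that is what caps you near 60 statements per second.
con.execute("begin")
con.executemany("insert into foo values(?)", rows)
con.execute("commit")
```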

speedtest

APSW includes a speed testing script as part of the source distribution. You can use the script to compare SQLite performance across different versions of SQLite, different host systems (hard drives and controllers matter) as well as between pysqlite and APSW. The underlying queries are based on SQLite’s speed test.

$ python speedtest.py --help
Usage: speedtest.py [options]

Options:
  -h, --help           show this help message and exit
  --apsw               Include apsw in testing (False)
  --pysqlite           Include pysqlite in testing (False)
  --correctness        Do a correctness test
  --scale=SCALE        How many statements to execute.  Each unit takes about
                       2 seconds per test on memory only databases. [Default
                       10]
  --database=DATABASE  The database file to use [Default :memory:]
  --tests=TESTS        What tests to run [Default
                       bigstmt,statements,statements_nobindings]
  --iterations=N       How many times to run the tests [Default 4]
  --tests-detail       Print details of what the tests do.  (Does not run the
                       tests)
  --dump-sql=FILENAME  Name of file to dump SQL to.  This is useful for
                       feeding into the SQLite command line shell.
  --sc-size=N          Size of the statement cache. APSW will disable cache
                       with value of zero.  Pysqlite ensures a minimum of 5
                       [Default 100]
  --unicode=UNICODE    Percentage of text that is unicode characters [Default
                       0]
  --data-size=SIZE     Maximum size in characters of data items - keep this
                       number small unless you are on 64 bits and have lots of
                       memory with a small scale - you can easily consume
                       multiple gigabytes [Default same as original TCL
                       speedtest]


$ python speedtest.py --tests-detail
bigstmt:

  Supplies the SQL as a single string consisting of multiple
  statements.  apsw handles this normally via cursor.execute while
  pysqlite requires that cursor.executescript is called.  The string
  will be several kilobytes and with a factor of 50 will be in the
  megabyte range.  This is the kind of query you would run if you were
  restoring a database from a dump.  (Note that pysqlite silently
  ignores returned data which also makes it execute faster).

statements:

  Runs the SQL queries but uses bindings (? parameters). eg::

    for i in range(3):
       cursor.execute("insert into table foo values(?)", (i,))

  This test has many hits of the statement cache.

statements_nobindings:

  Runs the SQL queries but doesn't use bindings. eg::

    cursor.execute("insert into table foo values(0)")
    cursor.execute("insert into table foo values(1)")
    cursor.execute("insert into table foo values(2)")

  This test has no statement cache hits and shows the overhead of
       having a statement cache.

  In theory all the tests above should run in almost identical time
  as well as when using the SQLite command line shell.  This tool
  shows you what happens in practise.


apsw-3.8.11.1-r1/MANIFEST.in:

# The C source
include src/apsw.c
include src/apswbuffer.c
include src/apswversion.h
include src/backup.c
include src/blob.c
include src/connection.c
include src/cursor.c
include src/exceptions.c
include src/pyutil.c
include src/statementcache.c
include src/traceback.c
include src/testextension.c
include src/util.c
include src/vfs.c
include src/vtable.c

# See https://github.com/rogerbinns/apsw/issues/89
include MANIFEST.in

# other files
include checksums
include mingwsetup.bat
include setup.py
include tools/speedtest.py
include tools/apswtrace.py

# shell is not needed at runtime - we compile it into the C source
include tools/shell.py
include tests.py

apsw-3.8.11.1-r1/tests.py:

#!/usr/bin/env python

# See the accompanying LICENSE file.

# APSW test suite - runs under both Python 2 and Python 3 hence a lot
# of wierd constructs to be simultaneously compatible with both.
# (2to3 is not used).
import apsw
import sys
import os
import codecs

write=sys.stdout.write

def print_version_info(write=write):
    write(" Python "+sys.executable+" "+str(sys.version_info)+"\n")
    write("Testing with APSW file "+apsw.__file__+"\n")
    write(" APSW version "+apsw.apswversion()+"\n")
    write(" SQLite lib version "+apsw.sqlitelibversion()+"\n")
    write("SQLite headers version "+str(apsw.SQLITE_VERSION_NUMBER)+"\n")
    write(" Using amalgamation "+str(apsw.using_amalgamation)+"\n")
    if [int(x) for x in apsw.sqlitelibversion().split(".")]<[3,7,8]:
        write("You are using an earlier version of SQLite than recommended\n")
    sys.stdout.flush()

# sigh
iswindows=sys.platform in ('win32', 'win64')

py3=sys.version_info>=(3,0)

# prefix for test files (eg if you want it on tmpfs)
TESTFILEPREFIX=os.environ.get("APSWTESTPREFIX", "")

def read_whole_file(name, mode, encoding=None):
    if encoding:
        f=codecs.open(name, mode, encoding)
    else:
        f=open(name, mode)
    try:
        return f.read()
    finally:
        f.close()

# If two is present then one is encoding
def write_whole_file(name, mode, one, two=None):
    if two:
        f=codecs.open(name, mode, one)
        data=two
    else:
        f=open(name, mode)
        data=one
    try:
        f.write(data)
    finally:
        f.close()

# unittest stuff from here on
import unittest
import math
import random
import time
import threading
import glob
import pickle
import shutil
import getpass
if py3:
    import queue as Queue
else:
    import Queue
import traceback
import re
import gc
try:
    import ctypes
    import _ctypes
except:
    ctypes=None
    _ctypes=None

# yay
is64bit=ctypes and ctypes.sizeof(ctypes.c_size_t)>=8

# Unicode string/bytes prefix
if py3:
    UPREFIX=""
    BPREFIX="b"
else:
    UPREFIX="u"
    BPREFIX=""

# Return a unicode string - x should have been raw
def u(x):
    return eval(UPREFIX+"'''"+x+"'''")

# Return a bytes (py3)/buffer (py2) - x should have been raw
def b(x):
    if py3:
        return eval(BPREFIX+"'''"+x+"'''")
    return eval("buffer('''"+x+"''')")

# return bytes (py3)/string (py2) - x should have been raw
# Use this instead of b for file i/o data as py2 uses str
def BYTES(x):
    if py3:
        return b(x)
    return eval("'''"+x+"'''")

def l(x):
    if py3:
        return eval(x)
    return eval(x+"L")

# Various py3 things
if py3:
    long=int

if not py3:
    # emulation of py3 next built-in.  In py2 the iternext method is exposed
    # as object.next() but in py3 it is object.__next__().
    def next(iterator, *args):
        if len(args)>1:
            raise TypeError("bad args")
        try:
            return iterator.next()
        except StopIteration:
            if len(args):
                return args[0]
            raise

# Make next switch between the iterator and fetchone alternately
_realnext=next
_nextcounter=0

def next(cursor, *args):
    global _nextcounter
    _nextcounter+=1
    if _nextcounter%2:
        return _realnext(cursor, *args)
    res=cursor.fetchone()
    if res is None:
        if args:
            return args[0]
        return None
    return res

# py3 has a useless sys.excepthook mainly to avoid allocating any
# memory as the exception could have been running out of memory.  So
# we use our own which is also valueable on py2 as it says it is an
# unraiseable exception (with testcode you sometimes can't tell if it
# is unittest showing you an exception or the unraiseable).  It is
# mainly VFS code that needs to raise these.
def ehook(etype, evalue, etraceback):
    sys.stderr.write("Unraiseable exception "+str(etype)+":"+str(evalue)+"\n")
    traceback.print_tb(etraceback)
sys.excepthook=ehook

# exec is a huge amount of fun having different syntax
if py3:
    def execwrapper(astring, theglobals, thelocals):
        thelocals=thelocals.copy()
        thelocals["astring"]=astring
        exec("exec(astring, theglobals, thelocals)")
else:
    def execwrapper(astring, theglobals, thelocals):
        thelocals=thelocals.copy()
        thelocals["astring"]=astring
        exec ("exec astring in theglobals, thelocals")

# helper functions
def randomintegers(howmany):
    for i in range(howmany):
        yield (random.randint(0,9999999999),)

def randomstring(length):
    l=list("abcdefghijklmnopqrstuvwxyz0123456789")
    while len(l)30: # 3 seconds we have been at this!
            # So give up and give it a stupid name.  The sooner
            # this so called operating system withers into obscurity
            # the better
            n=list("abcdefghijklmnopqrstuvwxyz")
            random.shuffle(n)
            n="".join(n)
            try:
                os.rename(name, "windowssucks-"+n+".deletememanually")
            except:
                pass
            break
        # Make windows happy
        time.sleep(0.1)
        gc.collect()
    if os.path.exists(newname):
        bgdelq.put(newname)
        # Give bg thread a chance to run
        time.sleep(0.1)

# Monkey patching FTW
if not hasattr(unittest.TestCase, "assertTrue"):
    unittest.TestCase.assertTrue=unittest.TestCase.assert_

openflags=apsw.SQLITE_OPEN_READWRITE|apsw.SQLITE_OPEN_CREATE|apsw.SQLITE_OPEN_URI

# main test class/code
class APSW(unittest.TestCase):

    connection_nargs={ # number of args for function.  those not listed take zero
        'createaggregatefunction': 2,
        'createcollation': 2,
        'createscalarfunction': 3,
        'collationneeded': 1,
        'setauthorizer': 1,
        'setbusyhandler': 1,
        'setbusytimeout': 1,
        'setcommithook': 1,
        'setprofile': 1,
        'setrollbackhook': 1,
        'setupdatehook': 1,
        'setprogresshandler': 2,
        'enableloadextension': 1,
        'createmodule': 2,
        'filecontrol': 3,
        'setexectrace': 1,
        'setrowtrace': 1,
        '__enter__': 0,
        '__exit__': 3,
        'backup': 3,
        'wal_autocheckpoint': 1,
        'setwalhook': 1,
        'readonly': 1,
        'db_filename': 1
        }

    cursor_nargs={
        'execute': 1,
        'executemany': 2,
        'setexectrace': 1,
        'setrowtrace': 1,
        }

    blob_nargs={
        'write': 1,
        'read': 1,
        'readinto': 1,
        'reopen': 1,
        'seek': 2
        }

    def deltempfiles(self):
        for name in ("testdb", "testdb2", "testdb3", "testfile", "testfile2", "testdb2x", "test-shell-1", "test-shell-1.py", "test-shell-in", "test-shell-out", "test-shell-err"):
            for i in "-wal", "-journal", "":
                if os.path.exists(TESTFILEPREFIX+name+i):
                    deletefile(TESTFILEPREFIX+name+i)

    saved_connection_hooks=[]

    def setUp(self):
        # clean out database and journals from last runs
        self.saved_connection_hooks.append(apsw.connection_hooks)
        gc.collect()
        self.deltempfiles()
        self.db=apsw.Connection(TESTFILEPREFIX+"testdb", flags=openflags)

    def tearDown(self):
        if self.db is not None:
            self.db.close(True)
        del self.db
        apsw.connection_hooks=self.saved_connection_hooks.pop() # back to original value
        gc.collect()
        self.deltempfiles()

    def assertTableExists(self, tablename):
        self.assertEqual(next(self.db.cursor().execute("select count(*) from ["+tablename+"]"))[0], 0)

    def assertTableNotExists(self, tablename):
        # you get SQLError if the table doesn't exist!
        self.assertRaises(apsw.SQLError, self.db.cursor().execute, "select count(*) from ["+tablename+"]")

    def assertTablesEqual(self, dbl, left, dbr, right):
        # Ensure tables have the same contents.  Rowids can be
        # different and select gives unordered results so this is
        # quite challenging
        l=dbl.cursor()
        r=dbr.cursor()
        # check same number of rows
        lcount=l.execute("select count(*) from ["+left+"]").fetchall()[0][0]
        rcount=r.execute("select count(*) from ["+right+"]").fetchall()[0][0]
        self.assertEqual(lcount, rcount)
        # check same number and names and order for columns
        lnames=[row[1] for row in l.execute("pragma table_info(["+left+"])")]
        rnames=[row[1] for row in r.execute("pragma table_info(["+right+"])")]
        self.assertEqual(lnames, rnames)
        # read in contents, sort and compare
        lcontents=l.execute("select * from ["+left+"]").fetchall()
        rcontents=r.execute("select * from ["+right+"]").fetchall()
        lcontents.sort()
        rcontents.sort()
        self.assertEqual(lcontents, rcontents)

    def assertRaisesUnraisable(self, exc, func, *args, **kwargs):
        orig=sys.excepthook
        try:
            called=[]
            def ehook(t,v,tb):
                called.append( (t,v,tb) )
            sys.excepthook=ehook
            try:
                try:
                    return func(*args, **kwargs)
                except:
                    # This ensures frames have their local variables
                    # cleared before we put the original excepthook
                    # back.  Clearing the variables results in some
                    # more SQLite operations which also can raise
                    # unraisables.  traceback.clear_frames was
                    # introduced in Python 3.4 and unittest was
                    # updated to call it in assertRaises.  See issue
                    # 164
                    if hasattr(traceback, "clear_frames"):
                        traceback.clear_frames(sys.exc_info()[2])
                    raise
            finally:
                if len(called)<1:
                    self.fail("Call %s(*%s, **%s) did not do any unraiseable" % (func, args, kwargs) )
                self.assertEqual(exc, called[0][0]) # check it was the correct type
        finally:
            sys.excepthook=orig

    def testSanity(self):
        "Check all parts compiled and are present"
        # check some error codes etc are present - picked first middle and last from lists in code
        apsw.SQLError
        apsw.MisuseError
        apsw.NotADBError
        apsw.ThreadingViolationError
        apsw.BindingsError
        apsw.ExecTraceAbort
        apsw.SQLITE_FCNTL_SIZE_HINT
        apsw.mapping_file_control["SQLITE_FCNTL_SIZE_HINT"]==apsw.SQLITE_FCNTL_SIZE_HINT
        apsw.URIFilename
        self.assertTrue(len(apsw.sqlite3_sourceid())>10)

    def testConnection(self):
        "Test connection opening"
        # bad keyword arg
        self.assertRaises(TypeError, apsw.Connection, ":memory:", user="nobody")
        # wrong types
        self.assertRaises(TypeError, apsw.Connection, 3)
        # non-unicode
        if not py3:
            self.assertRaises(UnicodeDecodeError, apsw.Connection, "\xef\x22\xd3\x9e")
        # bad file (cwd)
        self.assertRaises(apsw.CantOpenError, apsw.Connection, ".")
        # bad open flags can't be tested as sqlite accepts them all - ticket #3037
        # self.assertRaises(apsw.CantOpenError, apsw.Connection, "", flags=65535)
        # bad vfs
        self.assertRaises(TypeError, apsw.Connection, "foo", vfs=3, flags=-1)
        self.assertRaises(apsw.SQLError, apsw.Connection, "foo", vfs="jhjkds")

    def testConnectionFileControl(self):
        "Verify sqlite3_file_control"
        # Note that testVFS deals with success cases and the actual vfs backend
        self.assertRaises(TypeError, self.db.filecontrol, 1, 2)
        self.assertRaises(TypeError, self.db.filecontrol, "main", 1001, "foo")
        self.assertRaises(OverflowError, self.db.filecontrol, "main", 1001, l("45236748972389749283"))
        self.assertEqual(self.db.filecontrol("main", 1001, 25), False)

    def testConnectionConfig(self):
        "Test Connection.config function"
        self.assertRaises(TypeError, self.db.config)
        self.assertRaises(TypeError, self.db.config, "three")
        x=long(0x7fffffff)
        self.assertRaises(OverflowError, self.db.config, x*x*x*x*x)
        self.assertRaises(ValueError, self.db.config, 82397)
        self.assertRaises(TypeError, self.db.config, apsw.SQLITE_DBCONFIG_ENABLE_FKEY, "banana")
        for i in apsw.SQLITE_DBCONFIG_ENABLE_FKEY, apsw.SQLITE_DBCONFIG_ENABLE_TRIGGER:
            self.assertEqual(1, self.db.config(i, 1))
            self.assertEqual(1, self.db.config(i, -1))
            self.assertEqual(0, self.db.config(i, 0))

    def testMemoryLeaks(self):
        "MemoryLeaks: Run with a memory profiler such as valgrind and debug Python"
        # make and toss away a bunch of db objects, cursors, functions etc - if you use memory profiling then
        # simple memory leaks will show up
        c=self.db.cursor()
        c.execute("create table foo(x)")
        vals=[ [1], [None], [math.pi], ["kjkljkljl"], [u(r"\u1234\u345432432423423kjgjklhdfgkjhsdfjkghdfjskh")],
               [b(r"78696ghgjhgjhkgjkhgjhg\xfe\xdf")] ]
        c.executemany("insert into foo values(?)", vals)
        for i in range(MEMLEAKITERATIONS):
            db=apsw.Connection(TESTFILEPREFIX+"testdb")
            db.createaggregatefunction("aggfunc", lambda x: x)
            db.createscalarfunction("scalarfunc", lambda x: x)
            db.setbusyhandler(lambda x: False)
            db.setbusytimeout(1000)
            db.setcommithook(lambda x=1: 0)
            db.setrollbackhook(lambda x=2: 1)
            db.setupdatehook(lambda x=3: 2)
            db.setwalhook(lambda *args: 0)
            db.collationneeded(lambda x: 4)
            def rt1(c,r):
                db.setrowtrace(rt2)
                return r
            def rt2(c,r):
                c.setrowtrace(rt1)
                return r
            def et1(c,s,b):
                db.setexectrace(et2)
                return True
            def et2(c,s,b):
                c.setexectrace(et1)
                return True
            for i in range(120):
                c2=db.cursor()
                c2.setrowtrace(rt1)
                c2.setexectrace(et1)
                for row in c2.execute("select * from foo"+" "*i): # spaces on end defeat statement cache
                    pass
            del c2
            db.close()

    def testBindings(self):
        "Check bindings work correctly"
        c=self.db.cursor()
        c.execute("create table foo(x,y,z)")
        vals=(
            ("(?,?,?)", (1,2,3)),
            ("(?,?,?)", [1,2,3]),
            ("(?,?,?)", range(1,4)),
            ("(:a,$b,:c)", {'a': 1, 'b': 2, 'c': 3}),
            ("(1,?,3)", (2,)),
            ("(1,$a,$c)", {'a': 2, 'b': 99, 'c': 3}),
            # some unicode fun
            (u(r"($\N{LATIN SMALL LETTER E WITH CIRCUMFLEX},:\N{LATIN SMALL LETTER A WITH TILDE},$\N{LATIN SMALL LETTER O WITH DIAERESIS})"), (1,2,3)),
            (u(r"($\N{LATIN SMALL LETTER E WITH CIRCUMFLEX},:\N{LATIN SMALL LETTER A WITH TILDE},$\N{LATIN SMALL LETTER O WITH DIAERESIS})"),
             {u(r"\N{LATIN SMALL LETTER E WITH CIRCUMFLEX}"): 1,
              u(r"\N{LATIN SMALL LETTER A WITH TILDE}"): 2,
              u(r"\N{LATIN SMALL LETTER O WITH DIAERESIS}"): 3,})
            )
        for str,bindings in vals:
            c.execute("insert into foo values"+str, bindings)
            self.assertEqual(next(c.execute("select * from foo")), (1,2,3))
            c.execute("delete from foo")

        # currently missing dict keys come out as null
        c.execute("insert into foo values(:a,:b,$c)", {'a': 1, 'c':3}) # 'b' deliberately missing
        self.assertEqual((1,None,3), next(c.execute("select * from foo")))
        c.execute("delete from foo")

        # these ones should cause errors
        vals=(
            (apsw.BindingsError, "(?,?,?)", (1,2)), # too few
            (apsw.BindingsError, "(?,?,?)", (1,2,3,4)), # too many
            (apsw.BindingsError, "(?,?,?)", None), # none at all
            (apsw.BindingsError, "(?,?,?)", {'a': 1}), # ? type, dict bindings (note that the reverse will work since all
                                                       # named bindings are also implicitly numbered
            (TypeError, "(?,?,?)", 2), # not a dict or sequence
            (TypeError, "(:a,:b,:c)", {'a': 1, 'b': 2, 'c': self}), # bad type for c
            )
        for exc,str,bindings in vals:
            self.assertRaises(exc, c.execute, "insert into foo values"+str, bindings)

        # with multiple statements
        c.execute("insert into foo values(?,?,?); insert into foo values(?,?,?)", (99,100,101,102,103,104))
        self.assertRaises(apsw.BindingsError, c.execute, "insert into foo values(?,?,?); insert into foo values(?,?,?); insert some more", (100,100,101,1000,103)) # too few
        self.assertRaises(apsw.BindingsError, c.execute, "insert into foo values(?,?,?); insert into foo values(?,?,?)", (101,100,101,1000,103,104,105)) # too many
        # check the relevant statements did or didn't execute as appropriate
        self.assertEqual(next(self.db.cursor().execute("select count(*) from foo where x=99"))[0], 1)
        self.assertEqual(next(self.db.cursor().execute("select count(*) from foo where x=102"))[0], 1)
        self.assertEqual(next(self.db.cursor().execute("select count(*) from foo where x=100"))[0], 1)
        self.assertEqual(next(self.db.cursor().execute("select count(*) from foo where x=1000"))[0], 0)
        self.assertEqual(next(self.db.cursor().execute("select count(*) from foo where x=101"))[0], 1)
        self.assertEqual(next(self.db.cursor().execute("select count(*) from foo where x=105"))[0], 0)

        # check there are some bindings!
        self.assertRaises(apsw.BindingsError, c.execute, "create table bar(x,y,z);insert into bar values(?,?,?)")

        # across executemany
        vals=( (1,2,3), (4,5,6), (7,8,9) )
        c.executemany("insert into foo values(?,?,?);", vals)
        for x,y,z in vals:
            self.assertEqual(next(c.execute("select * from foo where x=?",(x,))), (x,y,z))

        # with an iterator
        def myvals():
            for i in range(10):
                yield {'a': i, 'b': i*10, 'c': i*100}
        c.execute("delete from foo")
        c.executemany("insert into foo values($a,:b,$c)", myvals())
        c.execute("delete from foo")

        # errors for executemany
        self.assertRaises(TypeError, c.executemany, "statement", 12, 34, 56) # incorrect num params
        self.assertRaises(TypeError, c.executemany, "statement", 12) # wrong type
        self.assertRaises(apsw.SQLError, c.executemany, "syntax error", [(1,)]) # error in prepare
        def myiter():
            yield 1/0
        self.assertRaises(ZeroDivisionError, c.executemany, "statement", myiter()) # immediate error in iterator
        def myiter():
            yield self
        self.assertRaises(TypeError, c.executemany, "statement", myiter()) # immediate bad type
        self.assertRaises(TypeError, c.executemany, "select ?", ((self,), (1))) # bad val
        c.executemany("statement", ()) # empty sequence

        # error in iterator after a while
        def myvals():
            for i in range(2):
                yield {'a': i, 'b': i*10, 'c': i*100}
            1/0
        self.assertRaises(ZeroDivisionError, c.executemany, "insert into foo values($a,:b,$c)", myvals())
        self.assertEqual(next(c.execute("select count(*) from foo"))[0], 2)
        c.execute("delete from foo")

        # return bad type from iterator after a while
        def myvals():
            for i in range(2):
                yield {'a': i, 'b': i*10, 'c': i*100}
            yield self
        self.assertRaises(TypeError, c.executemany, "insert into foo values($a,:b,$c)", myvals())
        self.assertEqual(next(c.execute("select count(*) from foo"))[0], 2)
        c.execute("delete from foo")

        # some errors in executemany
        self.assertRaises(apsw.BindingsError, c.executemany, "insert into foo values(?,?,?)", ( (1,2,3), (1,2,3,4)))
        self.assertRaises(apsw.BindingsError, c.executemany, "insert into foo values(?,?,?)", ( (1,2,3), (1,2)))

        # incomplete execution across executemany
        c.executemany("select * from foo; select ?", ( (1,), (2,) )) # we don't read
        self.assertRaises(apsw.IncompleteExecutionError, c.executemany, "begin")

        # set type (pysqlite error with this)
        if sys.version_info>=(2, 4, 0):
            c.execute("create table xxset(x,y,z)")
            c.execute("insert into xxset values(?,?,?)", set((1,2,3)))
            c.executemany("insert into xxset values(?,?,?)", (set((4,5,6)),))
            result=[(1,2,3), (4,5,6)]
            for i,v in enumerate(c.execute("select * from xxset order by x")):
                self.assertEqual(v, result[i])

    def testCursor(self):
        "Check functionality of the cursor"
        c=self.db.cursor()
        # shouldn't be able to manually create
        self.assertRaises(TypeError, type(c))
        # give bad params
        self.assertRaises(TypeError, c.execute)
        self.assertRaises(TypeError, c.execute, "foo", "bar", "bam")
        # empty statements
        c.execute("")
        c.execute(" ;\n\t\r;;")
        # unicode
        self.assertEqual(3, next(c.execute(u("select 3")))[0])
        if not py3:
            self.assertRaises(UnicodeDecodeError, c.execute, "\x99\xaa\xbb\xcc")
        # does it work?
        c.execute("create table foo(x,y,z)")
        # table should be empty
        entry=-1
        for entry,values in enumerate(c.execute("select * from foo")):
            pass
        self.assertEqual(entry,-1, "No rows should have been returned")
        # add ten rows
        for i in range(10):
            c.execute("insert into foo values(1,2,3)")
        for entry,values in enumerate(c.execute("select * from foo")):
            # check we get back out what we put in
            self.assertEqual(values, (1,2,3))
        self.assertEqual(entry, 9, "There should have been ten rows")
        # does getconnection return the right object
        self.assertTrue(c.getconnection() is self.db)
        # check getdescription - note column with space in name and [] syntax to quote it
        cols=( ("x a space", "integer"), ("y", "text"), ("z", "foo"), ("a", "char"),
               (u(r"\N{LATIN SMALL LETTER E WITH CIRCUMFLEX}\N{LATIN SMALL LETTER A WITH TILDE}"),
                u(r"\N{LATIN SMALL LETTER O WITH DIAERESIS}\N{LATIN SMALL LETTER U WITH CIRCUMFLEX}")), )
        c.execute("drop table foo; create table foo (%s)" % (", ".join(["[%s] %s" % (n,t) for n,t in cols]),))
        c.execute("insert into foo([x a space]) values(1)")
        for row in c.execute("select * from foo"):
            self.assertEqual(cols, c.getdescription())
            self.assertEqual(cols, tuple([d[:2] for d in c.description]))
            self.assertEqual((None,None,None,None,None), c.description[0][2:])
            self.assertEqual(list(map(len, c.description)), [7]*len(cols))
        # check description caching isn't broken
        cols2=cols[1:4]
        for row in c.execute("select y,z,a from foo"):
            self.assertEqual(cols2, c.getdescription())
            self.assertEqual(cols2, tuple([d[:2] for d in c.description]))
            self.assertEqual((None,None,None,None,None), c.description[0][2:])
            self.assertEqual(list(map(len, c.description)), [7]*len(cols2))
        # execution is complete ...
        self.assertRaises(apsw.ExecutionCompleteError, c.getdescription)
        self.assertRaises(apsw.ExecutionCompleteError, lambda: c.description)
        self.assertRaises(StopIteration, lambda xx=0: _realnext(c))
        self.assertRaises(StopIteration, lambda xx=0: _realnext(c))
        # fetchone is used throughout, check end behaviour
        self.assertEqual(None, c.fetchone())
        self.assertEqual(None, c.fetchone())
        self.assertEqual(None, c.fetchone())
        # nulls for getdescription
        for row in c.execute("pragma user_version"):
            self.assertEqual(c.getdescription(), ( ('user_version', None), ))
        # incomplete
        c.execute("select * from foo; create table bar(x)") # we don't bother reading leaving
        self.assertRaises(apsw.IncompleteExecutionError, c.execute, "select * from foo") # execution incomplete
        self.assertTableNotExists("bar")
        # autocommit
        self.assertEqual(True, self.db.getautocommit())
        c.execute("begin immediate")
        self.assertEqual(False, self.db.getautocommit())
        # pragma
        c.execute("pragma user_version")
        c.execute("pragma pure=nonsense")
        # error
        self.assertRaises(apsw.SQLError, c.execute, "create table bar(x,y,z); this is a syntax error; create table bam(x,y,z)")
        self.assertTableExists("bar")
        self.assertTableNotExists("bam")
        # fetchall
        self.assertEqual(c.fetchall(), [])
        self.assertEqual(c.execute("select 3; select 4").fetchall(), [(3,), (4,)] )

    def testTypes(self):
        "Check type information is maintained"
        c=self.db.cursor()
        c.execute("create table foo(row,x)")
        vals=test_types_vals
        for i,v in enumerate(vals):
            c.execute("insert into foo values(?,?)", (i, v))

        # add function to test conversion back as well
        def snap(*args):
            return args[0]
        self.db.createscalarfunction("snap", snap)

        # now see what we got out
        count=0
        for row,v,fv in c.execute("select row,x,snap(x) from foo"):
            count+=1
            if type(vals[row]) is float:
                self.assertAlmostEqual(vals[row], v)
                self.assertAlmostEqual(vals[row], fv)
            else:
                self.assertEqual(vals[row], v)
                self.assertEqual(vals[row], fv)
        self.assertEqual(count, len(vals))

        # check some out of bounds conditions
        # integer greater than signed 64 quantity (SQLite only supports up to that)
        self.assertRaises(OverflowError, c.execute, "insert into foo values(9999,?)", (922337203685477580799,))
        self.assertRaises(OverflowError, c.execute, "insert into foo values(9999,?)", (-922337203685477580799,))

        # invalid character data - non-ascii data must be provided in unicode
        if not py3: # py3 - all strings are unicode so not a problem
            self.assertRaises(UnicodeDecodeError, c.execute, "insert into foo values(9999,?)", ("\xfe\xfb\x80\x92",))

        # not valid types for SQLite
        self.assertRaises(TypeError, c.execute, "insert into foo values(9999,?)", (apsw,)) # a module
        self.assertRaises(TypeError, c.execute, "insert into foo values(9999,?)", (type,)) # type
        self.assertRaises(TypeError, c.execute, "insert into foo values(9999,?)", (dir,)) # function

        # check nothing got inserted
        self.assertEqual(0, next(c.execute("select count(*) from foo where row=9999"))[0])

        # playing with default encoding and non-ascii strings - py2 only
        if py3:
            return
        enc=sys.getdefaultencoding()
        reload(sys) # gets setdefaultencoding function back
        try:
            for v in vals:
                if type(v)!=unicode:
                    continue
                def encoding(*args):
                    return v.encode("utf8") # returns as str not unicode
                self.db.createscalarfunction("encoding", encoding)
                sys.setdefaultencoding("utf8")
                for row in c.execute("select encoding(3)"):
                    self.assertEqual(v, row[0])
                c.execute("insert into foo values(1234,?)", (v.encode("utf8"),))
                for row in c.execute("select x from foo where rowid="+str(self.db.last_insert_rowid())):
                    self.assertEqual(v, row[0])
        finally:
            sys.setdefaultencoding(enc)

    def testFormatSQLValue(self):
        "Verify text formatting of values"
        vals=( (3, "3"), (3.1, "3.1"), (-3, "-3"), (-3.1, "-3.1"),
               (9223372036854775807, "9223372036854775807"), (-9223372036854775808, "-9223372036854775808"),
               (None, "NULL"), ("ABC", "'ABC'"),
               (u(r"\N{BLACK STAR} \N{WHITE STAR} \N{LIGHTNING} \N{COMET} "),
                "'"+u(r"\N{BLACK STAR} \N{WHITE STAR} \N{LIGHTNING} \N{COMET} ")+"'"),
               ("", "''"),
               ("'", "''''"),
               ("'a", "'''a'"),
               ("a'", "'a'''"),
               ("''", "''''''"),
               ("'"*20000, "'"+"'"*40000+"'"),
               ("\0", "''||X'00'||''"),
               ("AB\0C", "'AB'||X'00'||'C'"),
               ("A'B'\0C", "'A''B'''||X'00'||'C'"),
               ("\0A'B", "''||X'00'||'A''B'"),
               ("A'B\0", "'A''B'||X'00'||''"),
               (b(r"AB\0C"), "X'41420043'"),
               )
        for vin, vout in vals:
            if not py3:
                if isinstance(vin, str):
                    vin=unicode(vin)
            out=apsw.format_sql_value(vin)
            if not py3:
                self.assertEqual(out, unicode(vout))
            else:
                self.assertEqual(out, vout)
        # Errors
        if not py3:
            self.assertRaises(TypeError, apsw.format_sql_value, "plain string")
        self.assertRaises(TypeError, apsw.format_sql_value, apsw)
        self.assertRaises(TypeError, apsw.format_sql_value)

    def testWAL(self):
        "Test WAL functions"
        # note that it is harmless calling wal functions on a db not in wal mode
        self.assertRaises(TypeError, self.db.wal_autocheckpoint)
        self.assertRaises(TypeError, self.db.wal_autocheckpoint, "a strinbg")
        self.db.wal_autocheckpoint(8912)
        self.assertRaises(TypeError, self.db.wal_checkpoint, -1)
        self.db.wal_checkpoint()
        self.db.wal_checkpoint("main")
        if sys.version_info>(2,4): # 2.3 barfs internally
            v=self.db.wal_checkpoint(mode=apsw.SQLITE_CHECKPOINT_PASSIVE)
            self.assertTrue(isinstance(v, tuple) and len(v)==2 and isinstance(v[0], int) and isinstance(v[1], int))
        self.assertRaises(apsw.MisuseError, self.db.wal_checkpoint, mode=876786)
        self.assertRaises(TypeError, self.db.setwalhook)
        self.assertRaises(TypeError, self.db.setwalhook, 12)
        self.db.setwalhook(None)
        # check we can set wal mode
        self.assertEqual("wal", self.db.cursor().execute("pragma journal_mode=wal").fetchall()[0][0])

        # errors in wal callback
        def zerodiv(*args): 1/0
        self.db.setwalhook(zerodiv)
        self.assertRaises(ZeroDivisionError, self.db.cursor().execute, "create table one(x)")
        # the error happens after the wal commit so the table should exist
        self.assertTableExists("one")

        def badreturn(*args): return "three"
        self.db.setwalhook(badreturn)
        self.assertRaises(TypeError, self.db.cursor().execute, "create table two(x)")
        self.assertTableExists("two")

        expectdbname=""
        def walhook(conn, dbname, pages):
            self.assertTrue(conn is self.db)
            self.assertTrue(pages>0)
            self.assertEqual(dbname, expectdbname)
            return apsw.SQLITE_OK

        expectdbname="main"
        self.db.setwalhook(walhook)
        self.db.cursor().execute("create table three(x)")
        self.db.cursor().execute("attach '%stestdb2?psow=0' as fred" % ("file:"+TESTFILEPREFIX,) )
        self.assertEqual("wal", self.db.cursor().execute("pragma fred.journal_mode=wal").fetchall()[0][0])
        expectdbname="fred"
        self.db.cursor().execute("create table fred.three(x)")

    def testAuthorizer(self):
        "Verify the authorizer works"
        retval=apsw.SQLITE_DENY
        def authorizer(operation, paramone, paramtwo, databasename, triggerorview):
            # we fail creates of tables starting with "private"
            if operation==apsw.SQLITE_CREATE_TABLE and paramone.startswith("private"):
                return retval
            return apsw.SQLITE_OK
        c=self.db.cursor()
        # this should succeed
        c.execute("create table privateone(x)")
        # this should fail
        self.assertRaises(TypeError, self.db.setauthorizer, 12) # must be callable
        self.db.setauthorizer(authorizer)
        for val in apsw.SQLITE_DENY, long(apsw.SQLITE_DENY), 0x800276889000212112:
            retval=val
            if val<100:
                self.assertRaises(apsw.AuthError, c.execute, "create table privatetwo(x)")
            else:
                self.assertRaises(OverflowError, c.execute, "create table privatetwo(x)")
        # this should succeed
        self.db.setauthorizer(None)
        c.execute("create table privatethree(x)")

        self.assertTableExists("privateone")
        self.assertTableNotExists("privatetwo")
        self.assertTableExists("privatethree")

        # error in callback
        def authorizer(operation, *args):
            if operation==apsw.SQLITE_CREATE_TABLE:
                1/0
            return apsw.SQLITE_OK
        self.db.setauthorizer(authorizer)
        self.assertRaises(ZeroDivisionError, c.execute, "create table shouldfail(x)")
        self.assertTableNotExists("shouldfail")

        # bad return type in callback
        def authorizer(operation, *args):
            return "a silly string"
        self.db.setauthorizer(authorizer)
        self.assertRaises(TypeError, c.execute, "create table shouldfail(x); select 3+5")
        self.db.setauthorizer(None) # otherwise next line will fail!
        self.assertTableNotExists("shouldfail")

        # back to normal
        self.db.setauthorizer(None)
        c.execute("create table shouldsucceed(x)")
        self.assertTableExists("shouldsucceed")

    def testExecTracing(self):
        "Verify tracing of executed statements and bindings"
        self.db.setexectrace(None)
        c=self.db.cursor()
        cmds=[] # this is manipulated in tracefunc
        def tracefunc(cursor, cmd, bindings):
            cmds.append( (cmd, bindings) )
            return True
        c.execute("create table one(x,y,z)")
        self.assertEqual(len(cmds),0)
        self.assertRaises(TypeError, c.setexectrace, 12) # must be callable
        self.assertRaises(TypeError, self.db.setexectrace, 12) # must be callable
        c.setexectrace(tracefunc)
        statements=[
            ("insert into one values(?,?,?)", (1,2,3)),
            ("insert into one values(:a,$b,$c)", {'a': 1, 'b': "string", 'c': None}),
            ]
        for cmd,values in statements:
            c.execute(cmd, values)
        self.assertEqual(cmds, statements)
        self.assertTrue(c.getexectrace() is tracefunc)
        c.setexectrace(None)
        self.assertTrue(c.getexectrace() is None)
        c.execute("create table bar(x,y,z)")
        # cmds should be unchanged
        self.assertEqual(cmds, statements)
        # tracefunc can abort execution
        count=next(c.execute("select count(*) from one"))[0]
        def tracefunc(cursor, cmd, bindings):
            return False # abort
        c.setexectrace(tracefunc)
        self.assertRaises(apsw.ExecTraceAbort, c.execute, "insert into one values(1,2,3)")
        # table should not have been modified
        c.setexectrace(None)
        self.assertEqual(count, next(c.execute("select count(*) from one"))[0])
        # error in tracefunc
        def tracefunc(cursor, cmd, bindings):
            1/0
        c.setexectrace(tracefunc)
        self.assertRaises(ZeroDivisionError, c.execute, "insert into one values(1,2,3)")
        c.setexectrace(None)
        self.assertEqual(count, next(c.execute("select count(*) from one"))[0])
        # test across executemany and multiple statements
        counter=[0]
        def tracefunc(cursor, cmd, bindings):
            counter[0]=counter[0]+1
            return True
        c.setexectrace(tracefunc)
        c.execute("create table two(x);insert into two values(1); insert into two values(2); insert into two values(?); insert into two values(?)", (3, 4))
        self.assertEqual(counter[0], 5)
        counter[0]=0
        c.executemany("insert into two values(?); insert into two values(?)", [[n,n+1] for n in range(5)])
        self.assertEqual(counter[0], 10)
        # error in func but only after a while
        c.execute("delete from two")
        counter[0]=0
        def tracefunc(cursor, cmd, bindings):
            counter[0]=counter[0]+1
            if counter[0]>3:
                1/0
            return True
        c.setexectrace(tracefunc)
        self.assertRaises(ZeroDivisionError, c.execute,
                          "insert into two values(1); insert into two values(2); insert into two values(?); insert into two values(?)",
                          (3, 4))
        self.assertEqual(counter[0], 4)
        c.setexectrace(None)
        # check the first statements got executed
        self.assertEqual(3, next(c.execute("select max(x) from two"))[0])
        # executemany
        def tracefunc(cursor, cmd, bindings):
            1/0
        c.setexectrace(tracefunc)
        self.assertRaises(ZeroDivisionError, c.executemany, "select ?", [(1,)])
        c.setexectrace(None)
        # tracefunc with wrong number of arguments
        def tracefunc(a,b,c,d,e,f):
            1/0
        c.setexectrace(tracefunc)
        self.assertRaises(TypeError, c.execute, "select max(x) from two")
        def tracefunc(*args):
            return BadIsTrue()
        c.setexectrace(tracefunc)
        self.assertRaises(ZeroDivisionError, c.execute, "select max(x) from two")
        # connection based tracing
        self.assertEqual( self.db.getexectrace(), None )
        traced=[False, False]
        def contrace(*args):
            traced[0]=True
            return True
        def curtrace(*args):
            traced[1]=True
            return True
        c.setexectrace(curtrace)
        c.execute("select 3")
        self.assertEqual( traced, [False, True] )
        traced=[False, False]
        self.db.setexectrace(contrace)
        c.execute("select 3")
        self.assertEqual( traced, [False, True] )
        traced=[False, False]
        c.setexectrace(None)
        c.execute("select 3")
        self.assertEqual( traced, [False, False] )
        traced=[False, False]
        self.db.cursor().execute("select 3")
        self.assertEqual( traced, [True, False] )
        self.assertEqual( self.db.getexectrace(), contrace )
        self.assertEqual( c.getexectrace(), None )
        self.assertEqual( self.db.cursor().getexectrace(), None )
        c.setexectrace(curtrace)
        self.assertEqual( c.getexectrace(), curtrace )

    def testRowTracing(self):
        "Verify row tracing"
        self.db.setrowtrace(None)
        c=self.db.cursor()
        c.execute("create table foo(x,y,z)")
        vals=(1,2,3)
        c.execute("insert into foo values(?,?,?)", vals)
        def tracefunc(cursor, row):
            return tuple([7 for i in row])
        # should get original row back
        self.assertEqual(next(c.execute("select * from foo")), vals)
        self.assertRaises(TypeError, c.setrowtrace, 12) # must be callable
        c.setrowtrace(tracefunc)
        self.assertTrue(c.getrowtrace() is tracefunc)
        # all values replaced with 7
        self.assertEqual(next(c.execute("select * from foo")), tuple([7]*len(vals)))
        def tracefunc(cursor, row):
            return (7,) # a single 7
        c.setrowtrace(tracefunc)
        self.assertEqual(next(c.execute("select * from foo")), (7,))
        # no alteration again
        c.setrowtrace(None)
        self.assertEqual(next(c.execute("select * from foo")), vals)
        # error in function
        def tracefunc(*result):
            1/0
        c.setrowtrace(tracefunc)
        try:
            for row in c.execute("select * from foo"):
                self.fail("Should have had exception")
                break
        except ZeroDivisionError:
            pass
        c.setrowtrace(None)
        self.assertEqual(next(c.execute("select * from foo")), vals)
        # returning null
        c.execute("create table bar(x)")
        c.executemany("insert into bar values(?)", [[x] for x in range(10)])
        counter=[0]
        def tracefunc(cursor, args):
            counter[0]=counter[0]+1
            if counter[0]%2:
                return None
            return args
        c.setrowtrace(tracefunc)
        countertoo=0
        for row in c.execute("select * from bar"):
            countertoo+=1
        c.setrowtrace(None)
        self.assertEqual(countertoo, 5) # half the rows should be skipped
        # connection based
        self.assertRaises(TypeError, self.db.setrowtrace, 12)
        self.assertEqual( self.db.getrowtrace(), None)
        traced=[False, False]
        def contrace(cursor, row):
            traced[0]=True
            return row
        def curtrace(cursor, row):
            traced[1]=True
            return row
        for row in c.execute("select 3,3"): pass
        self.assertEqual( traced, [False, False])
        traced=[False, False]
        self.db.setrowtrace(contrace)
        for row in self.db.cursor().execute("select 3,3"): pass
        self.assertEqual( traced, [True, False])
        traced=[False, False]
        c.setrowtrace(curtrace)
        for row in c.execute("select 3,3"): pass
        self.assertEqual( traced, [False, True])
        traced=[False, False]
        c.setrowtrace(None)
        for row in c.execute("select 3"): pass
        self.assertEqual( traced, [False, False])
        self.assertEqual( self.db.getrowtrace(), contrace)

    def testScalarFunctions(self):
        "Verify scalar functions"
        c=self.db.cursor()
        def ilove7(*args):
            return 7
        self.assertRaises(TypeError, self.db.createscalarfunction, "twelve", 12) # must be callable
        self.assertRaises(TypeError, self.db.createscalarfunction, "twelve", 12, 27, 28) # too many params
        try:
            self.db.createscalarfunction("twelve", ilove7, 900) # too many args
        except (apsw.SQLError, apsw.MisuseError):
            # https://sqlite.org/cvstrac/tktview?tn=3875
            pass
        # some unicode fun
        self.db.createscalarfunction(u(r"twelve\N{BLACK STAR}"), ilove7)
        try:
            # SQLite happily registers the function, but you can't
            # call it
            self.assertEqual(c.execute("select "+u(r"twelve\N{BLACK STAR}")+"(3)").fetchall(), [[7]])
        except apsw.SQLError:
            pass

        self.db.createscalarfunction("seven", ilove7)
        c.execute("create table foo(x,y,z)")
        for i in range(10):
            c.execute("insert into foo values(?,?,?)", (i,i,i))
        for i in range(10):
            self.assertEqual( (7,), next(c.execute("select seven(x,y,z) from foo where x=?", (i,))))
        # clear func
        self.assertRaises(apsw.BusyError, self.db.createscalarfunction, "seven", None) # active select above so no funcs can be changed
        for row in c.execute("select null"): pass # no active sql now
        self.db.createscalarfunction("seven", None)
        # function names are limited to 255 characters - SQLerror is the rather unintuitive error return
        try:
            self.db.createscalarfunction("a"*300, ilove7)
        except (apsw.SQLError, apsw.MisuseError):
            pass # see sqlite ticket #3875
        # have an error in a function
        def badfunc(*args):
            return 1/0
        self.db.createscalarfunction("badscalarfunc", badfunc)
        self.assertRaises(ZeroDivisionError, c.execute, "select badscalarfunc(*) from foo")
        # return non-allowed types
        for v in ({'a': 'dict'}, ['a', 'list'], self):
            def badtype(*args):
                return v
            self.db.createscalarfunction("badtype", badtype)
            self.assertRaises(TypeError, c.execute, "select badtype(*) from foo")
        # return non-unicode string
        def ilove8bit(*args):
            return "\x99\xaa\xbb\xcc"
        self.db.createscalarfunction("ilove8bit", ilove8bit)
        if not py3:
            self.assertRaises(UnicodeDecodeError, c.execute, "select ilove8bit(*) from foo")
        # coverage
        def bad(*args):
            1/0
        self.db.createscalarfunction("bad", bad)
        self.assertRaises(ZeroDivisionError, c.execute, "select bad(3)+bad(4)")
        # turn a blob into a string to fail python utf8 conversion
        self.assertRaises(UnicodeDecodeError, c.execute, "select bad(cast (x'fffffcfb9208' as TEXT))")
        # register same named function taking different number of arguments
        for i in range(-1, 4):
            self.db.createscalarfunction("multi", lambda *args: len(args), i)
        gc.collect()
        for row in c.execute("select multi(), multi(1), multi(1,2), multi(1,2,3), multi(1,2,3,4), multi(1,2,3,4,5)"):
            self.assertEqual(row, (0, 1, 2, 3, 4, 5))
        # deterministic flag
        # check error handling
        self.assertRaises(TypeError, self.db.createscalarfunction, "twelve", deterministic="324")
        self.assertRaises(TypeError, self.db.createscalarfunction, "twelve", deterministic=324)
        # check it has an effect
        class Counter: # on calling returns how many times this instance has been called
            num_calls=0
            def __call__(self):
                self.num_calls+=1
                return self.num_calls
        self.db.createscalarfunction("deterministic", Counter(), deterministic=True)
        self.db.createscalarfunction("nondeterministic", Counter(), deterministic=False)
        self.db.createscalarfunction("unspecdeterministic", Counter())
        self.assertEqual(c.execute("select deterministic()=deterministic()").fetchall()[0][0], 1)
        self.assertEqual(c.execute("select nondeterministic()=nondeterministic()").fetchall()[0][0], 0)
        self.assertEqual(c.execute("select unspecdeterministic()=unspecdeterministic()").fetchall()[0][0], 0)

    def testAggregateFunctions(self):
        "Verify aggregate functions"
        c=self.db.cursor()
        c.execute("create table foo(x,y,z)")
        # aggregate function
        class longest:
            def __init__(self):
                self.result=""
            def step(self, context, *args):
                for i in args:
                    if len(str(i))>len(self.result):
                        self.result=str(i)
            def final(self, context):
                return self.result
            def factory():
                v=longest()
                return None,v.step,v.final
            factory=staticmethod(factory)
        self.assertRaises(TypeError, self.db.createaggregatefunction, True, True, True, True) # wrong number/type of params
        self.assertRaises(TypeError, self.db.createaggregatefunction, "twelve", 12) # must be callable
        try:
            self.db.createaggregatefunction("twelve", longest.factory, 923) # max args is 127
        except (apsw.SQLError, apsw.MisuseError):
            # used to be SQLerror then changed https://sqlite.org/cvstrac/tktview?tn=3875
            pass
        self.assertRaises(TypeError, self.db.createaggregatefunction, u(r"twelve\N{BLACK STAR}"), 12) # must be ascii
        self.db.createaggregatefunction("twelve", None)
        self.db.createaggregatefunction("longest", longest.factory)
        vals=(
            ("kjfhgk","gkjlfdhgjkhsdfkjg","gklsdfjgkldfjhnbnvc,mnxb,mnxcv,mbncv,mnbm,ncvx,mbncv,mxnbcv,"), # last one is deliberately the longest
            ("gdfklhj",":gjkhgfdsgfd","gjkfhgjkhdfkjh"),
            ("gdfjkhg","gkjlfd",""),
            (1,2,30),
            )
        for v in vals:
            c.execute("insert into foo values(?,?,?)", v)
        v=next(c.execute("select longest(x,y,z) from foo"))[0]
        self.assertEqual(v, vals[0][2])
        # SQLite doesn't allow step functions to return an error, so we have to defer to the final
        def badfactory():
            def badfunc(*args):
                1/0
            def final(*args):
                self.fail("This should not be executed")
                return 1
            return None,badfunc,final
        self.db.createaggregatefunction("badfunc", badfactory)
        self.assertRaises(ZeroDivisionError, c.execute, "select badfunc(x) from foo")
        # error in final
        def badfactory():
            def badfunc(*args):
                pass
            def final(*args):
                1/0
            return None,badfunc,final
        self.db.createaggregatefunction("badfunc", badfactory)
        self.assertRaises(ZeroDivisionError, c.execute, "select badfunc(x) from foo")
        # error in step and final
        def badfactory():
            def badfunc(*args):
                1/0
            def final(*args):
                raise ImportError() # zero div from above is what should be returned
            return None,badfunc,final
        self.db.createaggregatefunction("badfunc", badfactory)
        self.assertRaises(ZeroDivisionError, c.execute, "select badfunc(x) from foo")
        # bad return from factory
        def badfactory():
            def badfunc(*args):
                pass
            def final(*args):
                return 0
            return {}
        self.db.createaggregatefunction("badfunc", badfactory)
        self.assertRaises(TypeError, c.execute, "select badfunc(x) from foo")
        # incorrect number of items returned
        def badfactory():
            def badfunc(*args):
                pass
            def final(*args):
                return 0
            return (None, badfunc, final, badfactory)
        self.db.createaggregatefunction("badfunc", badfactory)
        self.assertRaises(TypeError, c.execute, "select badfunc(x) from foo")
        # step not callable
        def badfactory():
            def badfunc(*args):
                pass
            def final(*args):
                return 0
            return (None, True, final)
        self.db.createaggregatefunction("badfunc", badfactory)
        self.assertRaises(TypeError, c.execute, "select badfunc(x) from foo")
        # final not callable
        def badfactory():
            def badfunc(*args):
                pass
            def final(*args):
                return 0
            return (None, badfunc, True)
        self.db.createaggregatefunction("badfunc", badfactory)
        self.assertRaises(TypeError, c.execute, "select badfunc(x) from foo")
        # error in factory method
        def badfactory():
            1/0
        self.db.createaggregatefunction("badfunc", badfactory)
        self.assertRaises(ZeroDivisionError, c.execute, "select badfunc(x) from foo")

    def testCollation(self):
        "Verify collations"
        # create a whole bunch to check they are freed
        for i in range(1024):
            self.db.createcollation("x"*i, lambda x,y: i)
        for ii in range(1024):
            self.db.createcollation("x"*ii, lambda x,y: ii)
        c=self.db.cursor()
        def strnumcollate(s1, s2):
            "return -1 if s1<s2, +1 if s1>s2 else 0.  Items are string head and numeric tail"
            # split values into two parts - the head and the numeric tail
            values=[s1, s2]
            for vn,v in enumerate(values):
                for i in range(len(v),0,-1):
                    if v[i-1] not in "01234567890":
                        break
                try:
                    v=v[:i],int(v[i:])
                except ValueError:
                    v=v[:i],None
                values[vn]=v
            # compare
            if values[0]<values[1]:
                return -1
            if values[0]>values[1]:
                return long(1) # and a long
            return 0
        self.assertRaises(TypeError, self.db.createcollation, "twelve", strnumcollate, 12) # wrong # params
        self.assertRaises(TypeError, self.db.createcollation, "twelve", 12) # must be callable
        self.db.createcollation("strnum", strnumcollate)
        c.execute("create table foo(x)")
        # adding this unicode in front improves coverage
        uni=u(r"\N{LATIN SMALL LETTER E WITH CIRCUMFLEX}")
        vals=(uni+"file1", uni+"file7", uni+"file9", uni+"file17", uni+"file20")
        valsrev=list(vals)
        valsrev.reverse() # put them into table in reverse order
        valsrev=valsrev[1:]+valsrev[:1] # except one out of order
        c.executemany("insert into foo values(?)", [(x,) for x in valsrev])
        for i,row in enumerate(c.execute("select x from foo order by x collate strnum")):
            self.assertEqual(vals[i], row[0])
        # collation function with an error
        def collerror(*args):
            return 1/0
        self.db.createcollation("collerror", collerror)
        self.assertRaises(ZeroDivisionError, c.execute, "select x from foo order by x collate collerror")
        # collation function that returns bad value
        def collerror(*args):
            return {}
        self.db.createcollation("collbadtype", collerror)
        self.assertRaises(TypeError, c.execute, "select x from foo order by x collate collbadtype")
        # get error when registering
        c.execute("select x from foo order by x collate strnum") # nb we don't read so cursor is still active
        self.assertRaises(apsw.BusyError, self.db.createcollation, "strnum", strnumcollate)
        # unregister
        for row in c:
            pass
        self.db.createcollation("strnum", None)
        # check it really has gone
        try:
            c.execute("select x from foo order by x collate strnum")
        except apsw.SQLError:
            pass
        # check statement still works
        for _ in c.execute("select x from foo"):
            pass
        # collation needed testing
        self.assertRaises(TypeError, self.db.collationneeded, 12)
        def cn1():
            pass
        def cn2(x, y):
            1/0
        def cn3(x, y):
            self.assertTrue(x is self.db)
            self.assertEqual(y, "strnum")
            self.db.createcollation("strnum", strnumcollate)
        self.db.collationneeded(cn1)
        try:
            for _ in c.execute("select x from foo order by x collate strnum"):
                pass
        except TypeError:
            pass
        self.db.collationneeded(cn2)
        try:
            for _ in c.execute("select x from foo order by x collate strnum"):
                pass
        except ZeroDivisionError:
            pass
        self.db.collationneeded(cn3)
        for _ in c.execute("select x from foo order by x collate strnum"):
            pass
        self.db.collationneeded(None)
        self.db.createcollation("strnum", None)
        # check it really has gone
        try:
            c.execute("select x from foo order by x collate strnum")
        except apsw.SQLError:
            pass

    def testProgressHandler(self):
        "Verify progress handler"
        c=self.db.cursor()
        phcalledcount=[0]
        def ph():
            phcalledcount[0]=phcalledcount[0]+1
            return 0
        # make 400 rows of random numbers
        c.execute("begin ; create table foo(x)")
        c.executemany("insert into foo values(?)", randomintegers(400))
        c.execute("commit")
        self.assertRaises(TypeError, self.db.setprogresshandler, 12) # must be callable
        self.assertRaises(TypeError, self.db.setprogresshandler, ph, "foo") # second param is steps
        self.db.setprogresshandler(ph, -17) # SQLite doesn't complain about negative numbers
        self.db.setprogresshandler(ph, 20)
        next(c.execute("select max(x) from foo"))
        self.assertNotEqual(phcalledcount[0], 0)
        saved=phcalledcount[0]
        # put an error in the progress handler
        def ph():
            return 1/0
        self.db.setprogresshandler(ph, 1)
        self.assertRaises(ZeroDivisionError, c.execute, "update foo set x=-10")
        self.db.setprogresshandler(None) # clear ph so next line runs
        # none should have taken
        self.assertEqual(0, next(c.execute("select count(*) from foo where x=-10"))[0])
        # and previous ph should not have been called
        self.assertEqual(saved, phcalledcount[0])
        def ph():
            return BadIsTrue()
        self.db.setprogresshandler(ph, 1)
        self.assertRaises(ZeroDivisionError, c.execute, "update foo set x=-10")

    def testChanges(self):
        "Verify reporting of changes"
        c=self.db.cursor()
        c.execute("create table foo (x);begin")
        for i in range(100):
            c.execute("insert into foo values(?)", (i+1000,))
        c.execute("commit")
        c.execute("update foo set x=0 where x>=1000")
        self.assertEqual(100, self.db.changes())
        c.execute("begin")
        for i in range(100):
            c.execute("insert into foo values(?)", (i+1000,))
        c.execute("commit")
        self.assertEqual(300, self.db.totalchanges())

    def testLastInsertRowId(self):
        "Check last insert row id"
        c=self.db.cursor()
        c.execute("create table foo (x integer primary key)")
        for i in range(10):
            c.execute("insert into foo values(?)", (i,))
            self.assertEqual(i, self.db.last_insert_rowid())
        # get a 64 bit value
        v=2**40
        c.execute("insert into foo values(?)", (v,))
        self.assertEqual(v, self.db.last_insert_rowid())

    def testComplete(self):
        "Completeness of SQL statement checking"
        # the actual underlying routine just checks that there is a semi-colon
        # at the end, not inside any quotes etc
        self.assertEqual(False, apsw.complete("select * from"))
        self.assertEqual(False, apsw.complete("select * from \";\""))
        self.assertEqual(False, apsw.complete("select * from \";"))
        self.assertEqual(True, apsw.complete("select * from foo; select *;"))
        self.assertEqual(False, apsw.complete("select * from foo where x=1"))
        self.assertEqual(True, apsw.complete("select * from foo;"))
        self.assertEqual(True, apsw.complete(u(r"select '\u9494\ua7a7';")))
        if not py3:
            self.assertRaises(UnicodeDecodeError, apsw.complete, "select '\x94\xa7';")
        self.assertRaises(TypeError, apsw.complete, 12) # wrong type
        self.assertRaises(TypeError, apsw.complete) # not enough args
        self.assertRaises(TypeError, apsw.complete, "foo", "bar") # too many args

    def testBusyHandling(self):
        "Verify busy handling"
        c=self.db.cursor()
        c.execute("create table foo(x); begin")
        c.executemany("insert into foo values(?)", randomintegers(400))
        c.execute("commit")
        # verify it is blocked
        db2=apsw.Connection(TESTFILEPREFIX+"testdb")
        c2=db2.cursor()
        c2.execute("begin exclusive")
        try:
            self.assertRaises(apsw.BusyError, c.execute, "begin immediate ; select * from foo")
        finally:
            del c2
            db2.close()
            del db2
        # close and reopen databases - sqlite will return Busy immediately to a connection
        # it previously returned busy to
        del c
        self.db.close()
        del self.db
        self.db=apsw.Connection(TESTFILEPREFIX+"testdb")
        db2=apsw.Connection(TESTFILEPREFIX+"testdb")
        c=self.db.cursor()
        c2=db2.cursor()
        # Put in busy handler
        bhcalled=[0]
        def bh(*args):
            bhcalled[0]=bhcalled[0]+1
            if bhcalled[0]==4:
                return False
            return True
        self.assertRaises(TypeError, db2.setbusyhandler, 12) # must be callable
        self.assertRaises(TypeError, db2.setbusytimeout, "12") # must be int
        db2.setbusytimeout(-77) # SQLite doesn't complain about negative numbers, but if it ever does this will catch it
        self.assertRaises(TypeError, db2.setbusytimeout, 77, 88) # too many args
        self.db.setbusyhandler(bh)
        c2.execute("begin exclusive")
        try:
            for row in c.execute("begin immediate ; select * from foo"):
                self.fail("Transaction wasn't exclusive")
        except apsw.BusyError:
            pass
        self.assertEqual(bhcalled[0], 4)
        # Close and reopen again
        del c
        del c2
        db2.close()
        self.db.close()
        del db2
        del self.db
        self.db=apsw.Connection(TESTFILEPREFIX+"testdb")
        db2=apsw.Connection(TESTFILEPREFIX+"testdb")
        c=self.db.cursor()
        c2=db2.cursor()
        # Put in busy timeout
        TIMEOUT=3 # seconds, must be integer as sqlite can round down to nearest second anyway
        c2.execute("begin exclusive")
        self.assertRaises(TypeError, self.db.setbusyhandler, "foo")
        self.db.setbusytimeout(int(TIMEOUT*1000))
        b4=time.time()
        try:
            c.execute("begin immediate ; select * from foo")
        except apsw.BusyError:
            pass
        after=time.time()
        took=after-b4
        # this sometimes fails in virtualized environments due to time
        # going backwards or not going forwards consistently.
        if took+1<TIMEOUT:
            # ... (part of this block was lost in extraction)
            self.assertTrue(took>=TIMEOUT)
        # check clearing of handler
        c2.execute("rollback")
        self.db.setbusyhandler(None)
        b4=time.time()
        c2.execute("begin exclusive")
        try:
            c.execute("begin immediate ; select * from foo")
        except apsw.BusyError:
            pass
        after=time.time()
        self.assertTrue(after-b4<TIMEOUT)
        # ... (the rest of this test and the start of the profiling test were lost in extraction)
        self.assertTrue(len(profileinfo)>=2) # see SQLite ticket 2157
        self.assertEqual(profileinfo[0][0], profileinfo[-1][0])
        self.assertEqual("select max(x) from foo", profileinfo[0][0])
        self.assertEqual("select max(x) from foo", profileinfo[-1][0])
        # the query using the index should take way less time
        self.assertTrue(profileinfo[0][1]<...) # right-hand side lost in extraction
        # ... (the rest of the profiling test and the start of the R-Tree test were lost in extraction)
        check("x1>2 AND x2<7 AND y1>17.2 AND y2<=8", [])
        check("x1>5 AND x2<=6 AND y1>-11 AND y2<=8", [1])

    def testICUExtension(self):
        "Check ICU extension if present"
        if not self.checkOptionalExtension("icu", "select lower('I', 'tr_tr')"):
            return
        c=self.db.cursor()
        # we compare SQLite standard vs icu
        def check(text, locale, func="lower", equal=False):
            q="select "+func+"(?%s)"
            sqlite=c.execute(q % ("",), (text,)).fetchall()
            icu=c.execute(q % (",'"+locale+"'",), (text,)).fetchall()
            if equal:
                self.assertEqual(sqlite, icu)
            else:
                self.assertNotEqual(sqlite, icu)
        check("I", "tr_tr")
        check("I", "en_us", equal=True)

    def testTracebacks(self):
        "Verify augmented tracebacks"
        return
        def badfunc(*args):
            1/0
        self.db.createscalarfunction("badfunc", badfunc)
        try:
            c=self.db.cursor()
            c.execute("select badfunc()")
            self.fail("Exception should have occurred")
        except ZeroDivisionError:
            tb=sys.exc_info()[2]
            traceback.print_tb(tb)
            del tb
        except:
            self.fail("Wrong exception type")

    def testLoadExtension(self):
        "Check loading of extensions"
        # unicode issues
        if not py3:
            self.assertRaises(UnicodeDecodeError, self.db.loadextension, "\xa7\x94")
        # they need to be enabled first (off by default)
        self.assertRaises(apsw.ExtensionLoadingError, self.db.loadextension, LOADEXTENSIONFILENAME)
        self.db.enableloadextension(False)
        self.assertRaises(ZeroDivisionError, self.db.enableloadextension, BadIsTrue())
        # should still be disabled
        self.assertRaises(apsw.ExtensionLoadingError, self.db.loadextension, LOADEXTENSIONFILENAME)
        self.db.enableloadextension(True)
        # make sure it checks args
        self.assertRaises(TypeError, self.db.loadextension)
        self.assertRaises(TypeError, self.db.loadextension, 12)
        self.assertRaises(TypeError, self.db.loadextension, "foo", 12)
        self.assertRaises(TypeError, self.db.loadextension, "foo", "bar", 12)
        self.db.loadextension(LOADEXTENSIONFILENAME)
        c=self.db.cursor()
        self.assertEqual(1, next(c.execute("select half(2)"))[0])
        # second entry point hasn't been called yet
        self.assertRaises(apsw.SQLError, c.execute, "select doubleup(2)")
        # load using other entry point
        self.assertRaises(apsw.ExtensionLoadingError, self.db.loadextension, LOADEXTENSIONFILENAME, "doesntexist")
        self.db.loadextension(LOADEXTENSIONFILENAME, "alternate_sqlite3_extension_init")
        self.assertEqual(4, next(c.execute("select doubleup(2)"))[0])

    def testMakeSqliteMsgFromException(self):
        "Test C function that converts exception into SQLite error code"
        class Source:
            def Create1(self, *args):
                e=apsw.IOError()
                e.extendedresult=apsw.SQLITE_IOERR_ACCESS
                raise e
            def Create2(self, *args):
                e=apsw.IOError()
                e.extendedresult=long(apsw.SQLITE_IOERR_ACCESS)
                raise e
            def Create3(self, *args):
                e=apsw.IOError()
                e.extendedresult=(long("0x80")<<32)+apsw.SQLITE_IOERR_ACCESS # bigger than 32 bits
                raise e
        if not hasattr(self.db, "createmodule"):
            return
        self.db.createmodule("foo", Source())
        for i in "1", "2", "3":
            Source.Create=getattr(Source, "Create"+i)
            try:
                self.db.cursor().execute("create virtual table vt using foo()")
                1/0
            except:
                klass,value,tb=sys.exc_info()
            # check types and values
            if i=="3":
                self.assertEqual(klass, ValueError)
                continue
            self.assertEqual(klass, apsw.IOError)
            self.assertTrue(isinstance(value, apsw.IOError))
            # python 2.3 totally messes up on long<->int and signed conversions causing the test to fail
            # but the code is fine - so just ignore rest of test for py2.3
            if sys.version_info<(2,4):
                return
            self.assertEqual(value.extendedresult&((long(0xffff)<<16)|long(0xffff)), apsw.SQLITE_IOERR_ACCESS)

    def testVtables(self):
        "Test virtual table functionality"
        data=( # row 0 is headers, column 0 is rowid
            ( "rowid",     "name",        "number", "item",             "description"),
            ( 1,           "Joe Smith",        1.1, u(r"\u00f6\u1234"), "foo"),
            ( 6000000000,  "Road Runner",     -7.3, u(r"\u00f6\u1235"), "foo"),
            ( 77,          "Fred",               0, u(r"\u00f6\u1236"), "foo"),
            )
        dataschema="create table this_should_be_ignored"+str(data[0][1:])
        # a query that will get constraints on every column
        allconstraints="select rowid,* from foo where rowid>-1000 and name>='A' and number<=12.4 and item>'A' and description=='foo' order by item"
        allconstraintsl=[(-1, apsw.SQLITE_INDEX_CONSTRAINT_GT), # rowid >
                         ( 0, apsw.SQLITE_INDEX_CONSTRAINT_GE), # name >=
                         ( 1, apsw.SQLITE_INDEX_CONSTRAINT_LE), # number <=
                         ( 2, apsw.SQLITE_INDEX_CONSTRAINT_GT), # item >
                         ( 3, apsw.SQLITE_INDEX_CONSTRAINT_EQ), # description ==
                         ]
        for i in range(20):
            self.db.createmodule("x"*i, lambda x: i)
        for ii in range(20): # SQLite 3.7.13 change - can't register same names
            self.assertRaises(apsw.MisuseError, self.db.createmodule, "x"*ii, lambda x: ii)
        # If shared cache is enabled then vtable creation is supposed to fail
        # See https://sqlite.org/cvstrac/tktview?tn=3144
        try:
            apsw.enablesharedcache(True)
            db=apsw.Connection(TESTFILEPREFIX+"testdb2")
            db.createmodule("y", lambda x: 2)
        finally:
            apsw.enablesharedcache(False)
        # The testing uses a different module name each time.  SQLite
        # doc doesn't define the semantics if a 2nd module is
        # registered with the same name as an existing one and I was
        # getting coredumps.  It looks like issues inside SQLite.
        cur=self.db.cursor()
        # should fail since module isn't registered
        self.assertRaises(apsw.SQLError, cur.execute, "create virtual table vt using testmod(x,y,z)")
        # wrong args
        self.assertRaises(TypeError, self.db.createmodule, 1, 2, 3)
        # give a bad object
        self.db.createmodule("testmod", 12) # next line fails due to lack of Create method
        self.assertRaises(AttributeError, cur.execute, "create virtual table xyzzy using testmod(x,y,z)")

        class Source:
            def __init__(self, *expectargs):
                self.expectargs=expectargs
            def Create(self, *args): # db, modname, dbname, tablename, args
                if self.expectargs!=args[1:]:
                    raise ValueError("Create arguments are not correct.  Expected "+str(self.expectargs)+" but got "+str(args[1:]))
                1/0
            def CreateErrorCode(self, *args):
                # This makes sure that sqlite error codes happen.  The coverage checker
                # is what verifies the code actually works.
                raise apsw.BusyError("foo")
            def CreateUnicodeException(self, *args):
                raise Exception(u(r"\N{LATIN SMALL LETTER E WITH CIRCUMFLEX}\N{LATIN SMALL LETTER A WITH TILDE}\N{LATIN SMALL LETTER O WITH DIAERESIS}"))
            def CreateBadSchemaType(self, *args):
                return 12, None
            def CreateBadSchema(self, *args):
                return "this isn't remotely valid sql", None
            def CreateWrongNumReturns(self, *args):
                return "way","too","many","items",3
            def CreateBadSequence(self, *args):
                class badseq(object):
                    def __getitem__(self, which):
                        if which!=0:
                            1/0
                        return 12
                    def __len__(self):
                        return 2
                return badseq()

        # check Create does the right thing - we don't include db since it creates a circular reference
        self.db.createmodule("testmod1", Source("testmod1", "main", "xyzzy", "1", '"one"'))
        self.assertRaises(ZeroDivisionError, cur.execute, 'create virtual table xyzzy using testmod1(1,"one")')
        # unicode
        uni=u(r"\N{LATIN SMALL LETTER E WITH CIRCUMFLEX}\N{LATIN SMALL LETTER A WITH TILDE}\N{LATIN SMALL LETTER O WITH DIAERESIS}")
        self.db.createmodule("testmod1dash1", Source("testmod1dash1", "main", uni, "1", '"'+uni+'"'))
        self.assertRaises(ZeroDivisionError, cur.execute, u('create virtual table %s using testmod1dash1(1,"%s")') % (uni, uni))
        Source.Create=Source.CreateErrorCode
        self.assertRaises(apsw.BusyError, cur.execute, 'create virtual table xyzzz using testmod1(2, "two")')
        Source.Create=Source.CreateUnicodeException
        self.assertRaises(Exception, cur.execute, 'create virtual table xyzzz using testmod1(2, "two")')
        Source.Create=Source.CreateBadSchemaType
        self.assertRaises(TypeError, cur.execute, 'create virtual table xyzzz using testmod1(2, "two")')
        Source.Create=Source.CreateBadSchema
        self.assertRaises(apsw.SQLError, cur.execute, 'create virtual table xyzzz2 using testmod1(2, "two")')
        Source.Create=Source.CreateWrongNumReturns
        self.assertRaises(TypeError, cur.execute, 'create virtual table xyzzz2 using testmod1(2, "two")')
        Source.Create=Source.CreateBadSequence
        self.assertRaises(ZeroDivisionError, cur.execute, 'create virtual table xyzzz2 using testmod1(2, "two")')

        # a good version of Source
        class Source:
            def Create(self, *args):
                return dataschema, VTable(list(data))
            Connect=Create

        class VTable:
            # A set of results from bestindex which should all generate TypeError.
            # Coverage checking will ensure all the code is appropriately tickled
            badbestindex=(12,
                          (12,),
                          ((),),
                          (((),),),
                          ((((),),),),
                          (((((),),),),),
                          ((None,None,None,None,"bad"),),
                          ((0,None,(0,),None,None),),
                          ((("bad",True),None,None,None,None),),
                          (((0, True),"bad",None,None,None),),
                          (None,"bad"),
                          [4,(3,True),[2,False],1, [0]],
                          )
            numbadbextindex=len(badbestindex)
            def __init__(self, data):
                self.data=data
                self.bestindex3val=0
            def BestIndex1(self, wrong, number, of, arguments):
                1/0
            def BestIndex2(self, *args):
                1/0
            def BestIndex3(self, constraints, orderbys):
                retval=self.badbestindex[self.bestindex3val]
                self.bestindex3val+=1
                if self.bestindex3val>=self.numbadbextindex:
                    self.bestindex3val=0
                return retval
            def BestIndex4(self, constraints, orderbys):
                # this gives ValueError ("bad" is not a float)
                return (None,12,u(r"\N{LATIN SMALL LETTER E WITH CIRCUMFLEX}"), "anything", "bad")
            def BestIndex5(self, constraints, orderbys):
                # unicode error
                return (None, None, "\xde\xad\xbe\xef")
            def BestIndex6(self, constraints, orderbys):
                return ( (0, 1, (2, BadIsTrue()), 3, 4), )
            def BestIndex7(self, constraints, orderbys):
                return (None, long(77), "foo", BadIsTrue(), 99)
            _bestindexreturn=99
            def BestIndex99(self, constraints, orderbys):
                cl=list(constraints)
                cl.sort()
                assert allconstraintsl == cl
                assert orderbys == ( (2, False), )
                retval=( [long(4),(3,True),[long(2),False],1, (0, False)], 997, u(r"\N{LATIN SMALL LETTER E WITH CIRCUMFLEX}"), False, 99)[:self._bestindexreturn]
                return retval
            def BestIndexGood(self, constraints, orderbys):
                return None
            def BestIndexGood2(self, constraints, orderbys):
                return [] # empty list is same as None
            def Open(self):
                return Cursor(self)
            def Open1(self, wrong, number, of, arguments):
                1/0
            def Open2(self):
                1/0
            def Open3(self):
                return None
            def Open99(self):
                return Cursor(self)
            UpdateInsertRow1=None
            def UpdateInsertRow2(self, too, many, args):
                1/0
            def UpdateInsertRow3(self, rowid, fields):
                1/0
            def UpdateInsertRow4(self, rowid, fields):
                assert rowid is None
                return None
            def UpdateInsertRow5(self, rowid, fields):
                assert rowid is None
                return "this is not a number"
            def UpdateInsertRow6(self, rowid, fields):
                assert rowid is None
                return -922337203685477580799 # too big
            def UpdateInsertRow7(self, rowid, fields):
                assert rowid is None
                return 9223372036854775807 # ok
            def UpdateInsertRow8(self, rowid, fields):
                assert rowid is not None
                assert rowid==-12
                return "this should be ignored since rowid was supplied"
            def UpdateChangeRow1(self, too, many, args, methinks):
                1/0
            def UpdateChangeRow2(self, rowid, newrowid, fields):
                1/0
            def UpdateChangeRow3(self, rowid, newrowid, fields):
                assert newrowid==rowid
            def UpdateChangeRow4(self, rowid, newrowid, fields):
                assert newrowid==rowid+20
            def UpdateDeleteRow1(self, too, many, args):
                1/0
            def UpdateDeleteRow2(self, rowid):
                1/0
            def UpdateDeleteRow3(self, rowid):
                assert rowid==77
            def Disconnect1(self, too, many, args):
                1/0
            def Disconnect2(self):
                1/0
            def Disconnect3(self):
                pass
            def Destroy1(self, too, many, args):
                1/0
            def Destroy2(self):
                1/0
            def Destroy3(self):
                pass
            def Begin1(self, too, many, args):
                1/0
            def Begin2(self):
                1/0
            def Begin3(self):
                pass
            def Sync(self):
                pass
            def Commit(self):
                pass
            def Rollback(self):
                pass
            def Rename1(self, too, many, args):
                1/0
            def Rename2(self, x):
                1/0
            def Rename3(self, x):
                return ["thisshouldbeignored"*25, [1]]
            def FindFunction1(self, too, many, args):
                1/0
            def FindFunction2(self, name, nargs):
                1/0
            def FindFunction3(self, name, nargs):
                return "this isn't a function"
            def FindFunction4(self, name, nargs):
                if nargs==2:
                    return lambda x,y: x+y
                return None

        class Cursor:
            _bestindexreturn=99
            def __init__(self, table):
                self.table=table
            def Filter1(self, toofewargs):
                1/0
            def Filter2(self, *args):
                1/0
            def Filter99(self, idxnum, idxstr, constraintargs):
                self.pos=1 # row 0 is headers
                if self._bestindexreturn==0:
                    assert idxnum==0
                    assert idxstr==None
                    assert constraintargs==()
                    return
                if self._bestindexreturn==1:
                    assert idxnum==0
                    assert idxstr==None
                    assert constraintargs==('foo', 'A', 12.4, 'A', -1000)
                    return
                if self._bestindexreturn==2:
                    assert idxnum==997
                    assert idxstr==None
                    assert constraintargs==('foo', 'A', 12.4, 'A', -1000)
                    return
                # 3 or more
                assert idxnum==997
                assert idxstr==u(r"\N{LATIN SMALL LETTER E WITH CIRCUMFLEX}")
                assert constraintargs==('foo', 'A', 12.4, 'A', -1000)
            def Filter(self, *args):
                self.Filter99(*args)
                1/0
            def FilterGood(self, *args):
                self.pos=1 # row 0 is headers
            def Eof1(self, toomany, args):
                1/0
            def Eof2(self):
                1/0
            def Eof3(self):
                return BadIsTrue()
            def Eof99(self):
                return not ( self.pos<len(self.table.data) )
            def Eof(self):
                return self.pos>=len(self.table.data)
            def Rowid(self):
                return self.table.data[self.pos][0]
            def Column(self, col):
                return self.table.data[self.pos][1+col]
            def Next(self):
                self.pos+=1
            def Close(self):
                pass

        paths=[x.replace("\\","/") for x in sys.path if len(x) and os.path.isdir(x)]
        cols,data=getfiledata(paths)
        self.db.createmodule("filesource", Source())
        cur=self.db.cursor()
        args=",".join(["'%s'" % (x,) for x in paths])
        cur.execute("create virtual table files using filesource("+args+")")
        # Find the largest file (SQL)
        for bigsql in cur.execute("select st_size,name,directory from files order by st_size desc limit 1"):
            pass
        # Find the largest (manually)
        colnum=cols.index("st_size")
        bigmanual=(0,"","")
        for file in data:
            if file[colnum]>bigmanual[0]:
                bigmanual=file[colnum], file[1], file[2]
        self.assertEqual(bigsql, bigmanual)
        # Find the oldest file (SQL)
        for oldestsql in cur.execute("select st_ctime,name,directory from files order by st_ctime limit 1"):
            pass
        # Find the oldest (manually)
        colnum=cols.index("st_ctime")
        oldestmanual=(99999999999999999,"","")
        for file in data:
            if file[colnum]<oldestmanual[0]:
                oldestmanual=file[colnum], file[1], file[2]
        self.assertEqual(oldestsql, oldestmanual)
        # ... (a large chunk was lost in extraction: the rest of this test and
        # the start of the closed-connection checks)
        self.assertTrue(tested>=len(nargs))
        # do the same thing, but for cursor
        nargs=self.cursor_nargs
        tested=0
        for func in [x for x in dir(cur) if not x.startswith("__") and not x in ("close",)]:
            tested+=1
            args=("one", "two", "three")[:nargs.get(func,0)]
            try:
                getattr(cur, func)(*args)
                self.fail("cursor method "+func+" didn't notice that the connection is closed")
            except apsw.CursorClosedError:
                pass
        self.assertTrue(tested>=len(nargs))

    def testClosing(self):
        "Verify behaviour of close() functions"
        cur=self.db.cursor()
        cur.execute("select 3;select 4")
        self.assertRaises(apsw.IncompleteExecutionError, cur.close)
        # now force it
        self.assertRaises(TypeError, cur.close, sys)
        self.assertRaises(TypeError, cur.close, 1, 2, 3)
        cur.close(True)
        l=[self.db.cursor() for i in range(1234)]
        cur=self.db.cursor()
        cur.execute("select 3; select 4; select 5")
        l2=[self.db.cursor() for i in range(1234)]
        self.assertRaises(apsw.IncompleteExecutionError, self.db.close)
        self.assertRaises(TypeError, self.db.close, sys)
        self.assertRaises(TypeError, self.db.close, 1, 2, 3)
        self.db.close(True) # force it
        self.db.close() # should be fine now
        # coverage - close cursor after closing db
        db=apsw.Connection(":memory:")
        cur=db.cursor()
        db.close()
        cur.close()

    def testLargeObjects(self):
        "Verify handling of large strings/blobs (>2GB) [Python 2.5+, 64 bit platform]"
        if not is64bit:
            return
        # For binary/blobs I use an anonymous area slightly larger than 2GB chunk of memory, but don't touch any of it
        import mmap
        f=mmap.mmap(-1, 2*1024*1024*1024+25000)
        c=self.db.cursor()
        c.execute("create table foo(theblob)")
        self.assertRaises(apsw.TooBigError, c.execute, "insert into foo values(?)", (f,))
        c.execute("insert into foo values(?)", ("jkghjk"*1024,))
        b=self.db.blobopen("main", "foo", "theblob", self.db.last_insert_rowid(), True)
        b.read(1)
        self.assertRaises(ValueError, b.write, f)
        def func():
            return f
        self.db.createscalarfunction("toobig", func)
        self.assertRaises(apsw.TooBigError, c.execute, "select toobig()")
        f.close()
        # Other testing by fault injection
        if not hasattr(apsw, "faultdict"):
            return
        ## SetContextResultLargeUnicode
        apsw.faultdict["SetContextResultLargeUnicode"]=True
        try:
            db=apsw.Connection(":memory:")
            db.createscalarfunction("foo", lambda x: u("a unicode string"))
            for row in db.cursor().execute("select foo(3)"):
                pass
            1/0
        except apsw.TooBigError:
            pass
        ## SetContextResultLargeString
        if sys.version_info<(3,0):
            apsw.faultdict["SetContextResultLargeString"]=True
            try:
                db=apsw.Connection(":memory:")
                def func(x):
                    return "an ordinary string"*10000
                db.createscalarfunction("foo", func)
                for row in db.cursor().execute("select foo(3)"):
                    pass
                1/0
            except apsw.TooBigError:
                pass
        ## DoBindingLargeUnicode
        apsw.faultdict["DoBindingLargeUnicode"]=True
        try:
            db=apsw.Connection(":memory:")
            db.cursor().execute("create table foo(x); insert into foo values(?)", (u("aaaa"),))
            1/0
        except apsw.TooBigError:
            pass
        ## DoBindingLargeString
        if sys.version_info<(3,0):
            apsw.faultdict["DoBindingLargeString"]=True
            try:
                db=apsw.Connection(":memory:")
                db.cursor().execute("create table foo(x); insert into foo values(?)", ("aaaa"*10000,))
                1/0
            except apsw.TooBigError:
                pass

    def testErrorCodes(self):
        "Verify setting of result codes on error/exception"
        fname=TESTFILEPREFIX+"gunk-errcode-test"
        write_whole_file(fname, "wb", b("A")*8192)
        db=None
        try:
            # The exception could be thrown on either of these lines
            # depending on several factors
            db=apsw.Connection(fname)
            db.cursor().execute("select * from sqlite_master")
            1/0 # should not be reachable
        except:
            klass,e,tb=sys.exc_info()
            self.assertTrue(isinstance(e, apsw.NotADBError))
            self.assertEqual(e.result, apsw.SQLITE_NOTADB)
            self.assertEqual(e.extendedresult&0xff, apsw.SQLITE_NOTADB)
        if db is not None:
            db.close(True)
        try:
            deletefile(fname)
        except:
            pass

    def testLimits(self):
        "Verify setting and getting limits"
        self.assertRaises(TypeError, self.db.limit, "apollo", 11)
        c=self.db.cursor()
        c.execute("create table foo(x)")
        c.execute("insert into foo values(?)", ("x"*1024,))
        old=self.db.limit(apsw.SQLITE_LIMIT_LENGTH)
        self.db.limit(apsw.SQLITE_LIMIT_LENGTH, 1023)
        self.assertRaises(apsw.TooBigError, c.execute, "insert into foo values(?)", ("y"*1024,))
        self.assertEqual(1023, self.db.limit(apsw.SQLITE_LIMIT_LENGTH, 0))
        # bug in sqlite - see https://sqlite.org/cvstrac/tktview?tn=3085
        if False:
            c.execute("insert into foo values(?)", ("x"*1024,))
            self.assertEqual(apsw.SQLITE_MAX_LENGTH, self.db.limit(apsw.SQLITE_LIMIT_LENGTH))

    def testConnectionHooks(self):
        "Verify connection hooks"
        del apsw.connection_hooks
        try:
            db=apsw.Connection(":memory:")
        except AttributeError:
            pass
        apsw.connection_hooks=sys # bad type
        try:
            db=apsw.Connection(":memory:")
        except TypeError:
            pass
        apsw.connection_hooks=("a", "tuple", "of", "non-callables")
        try:
            db=apsw.Connection(":memory:")
        except TypeError:
            pass
        apsw.connection_hooks=(dir, lambda x: 1/0)
        try:
            db=apsw.Connection(":memory:")
        except ZeroDivisionError:
            pass
        def delit(db):
            del db
        apsw.connection_hooks=[delit for _ in range(9000)]
        db=apsw.Connection(":memory:")
        db.close()
        apsw.connection_hooks=[lambda x: x]
        db=apsw.Connection(":memory:")
        db.close()

    def testCompileOptions(self):
        "Verify getting compile options"
        # We don't know what the right answers are, so just check
        # there are more than zero entries.
        v=apsw.compile_options
        self.assertEqual(type(v), tuple)
        self.assertTrue(len(v)>1)

    def testIssue4(self):
        "Issue 4: Error messages and SQLite ticket 3063"
        connection = apsw.Connection(":memory:")
        cursor = connection.cursor()
        cursor.execute("CREATE TABLE A_TABLE (ID ABC PRIMARY KEY NOT NULL)")
        try:
            cursor.execute("INSERT INTO A_TABLE VALUES (NULL)")
        except:
            klass,e,tb=sys.exc_info()
            assert "A_TABLE.ID" in str(e)
        try:
            cursor.execute("INSERT INTO A_TABLE VALUES (?)", (None,))
        except:
            klass,e,tb=sys.exc_info()
            assert "A_TABLE.ID" in str(e)

    def testIssue15(self):
        "Issue 15: Release GIL during calls to prepare"
        self.db.cursor().execute("create table foo(x)")
        self.db.cursor().execute("begin exclusive")
        db2=apsw.Connection(TESTFILEPREFIX+"testdb")
        db2.setbusytimeout(30000)
        t=ThreadRunner(db2.cursor().execute, "select * from foo")
        t.start()
        time.sleep(1)
        self.db.cursor().execute("commit")
        t.go()

    def testIssue19(self):
        "Issue 19: Incomplete cursor execution"
        c=self.db.cursor()
        c.execute("create table numbers(x)")
        for i in range(10):
            c.execute("insert into numbers values(?)", (i,))
        c.execute("select * from numbers")
        next(c)
        next(c)
        next(c)
        self.db.cursor().execute("delete from numbers where x=5")
        next(c)
        next(c)

    def testIssue24(self):
        "Issue 24: Ints and Longs"
        c=self.db.cursor()
        for row in c.execute("select 3"): pass
        self.assertEqual(int, type(row[0]))
        for row in c.execute("select -2147483647-1"): pass
        self.assertEqual(int, type(row[0]))
        for row in c.execute("select 2147483647"): pass
        self.assertEqual(int, type(row[0]))
        # Depending on the platform, sizeof(long), 64 bitness etc we
        # may remain as python type int or type long.  Check we are
        # getting the right numbers no matter what.  This duplicates
        # testTypes but you can never be too careful.
        for v in "2147483646", "2147483647", "2147483648", "2147483649", \
                "21474836460", "21474836470", "21474836480", "21474836490", \
                "147483646", "147483647", "147483648", "147483649":
            for neg in ("-", ""):
                val=c.execute("select "+neg+v).fetchall()[0][0]
                val=repr(val)
                if val.endswith("L"):
                    val=val[:-1]
                self.assertEqual(val, neg+v)

    def testIssue31(self):
        "Issue 31: GIL & SQLite mutexes with heavy threading, threadsafe errors from SQLite"
        randomnumbers=[random.randint(0,10000) for _ in range(10000)]
        cursor=self.db.cursor()
        cursor.execute("create table foo(x)")
        cursor.execute("begin")
        for num in randomnumbers:
            cursor.execute("insert into foo values(?)", (num,))
        cursor.execute("end")
        self.db.createscalarfunction("timesten", lambda x: x*10)
        def dostuff(n):
            # spend n seconds doing stuff to the database
            c=self.db.cursor()
            b4=time.time()
            while time.time()-b4<n:
                i=random.randint(0,10000)
                # (the i%5==0 and i%5==1 branches were lost in this
                # copy; reconstructed to match the branches below)
                if i%5==0:
                    c.execute("select timesten(x) from foo where x=%d order by x" % (i,))
                elif i%5==1:
                    called=0
                    for row in self.db.cursor().execute("select timesten(x) from foo where x=? order by x", (i,)):
                        called+=1
                        self.assertEqual(row[0], i*10)
                    # same number could be present multiple times
                    self.assertTrue(called>=1)
                elif i%5==2:
                    try:
                        self.db.cursor().execute("deliberate syntax error")
                    except apsw.SQLError:
                        assert("deliberate" in str(sys.exc_info()[1]))
                elif i%5==3:
                    try:
                        self.db.cursor().execute("bogus syntax error")
                    except apsw.SQLError:
                        assert("bogus" in str(sys.exc_info()[1]))
                else:
                    sql="select timesten(x) from foo where x=? order by x"
                    self.db.cursor().execute(sql, (i,))
        threads=[ThreadRunner(dostuff, 15) for _ in range(20)]
        for t in threads:
            t.start()
        for t in threads:
            # if there were any errors then exceptions would be raised here
            t.go()

    def testIssue50(self):
        "Issue 50: Check Blob.read return value on eof"
        # first get what the system returns on eof
        if iswindows:
            f=open("nul", "rb")
        else:
            f=open("/dev/null", "rb")
        try:
            # deliberately hit eof
            f.read()
            # now try to read some more
            feof=f.read(10)
        finally:
            f.close()
        cur=self.db.cursor()
        # make a blob to play with
        rowid=next(cur.execute("create table foo(x blob); insert into foo values(zeroblob(98765)); select rowid from foo"))[0]
        blobro=self.db.blobopen("main", "foo", "x", rowid, False)
        try:
            blobro.read(98765)
            beof=blobro.read(10)
            self.assertEqual(type(beof), type(feof))
            self.assertEqual(beof, feof)
        finally:
            blobro.close()

    def testIssue98(self, runfrom106=None):
        "Issue 98: An error in context manager commit should do a rollback"
        self.db.cursor().execute("create table foo(x); insert into foo values(3); insert into foo values(4)")
        # We need the reader to block a writer, which requires non-WAL mode
        self.db.cursor().execute("pragma journal_mode=delete")
        db2=apsw.Connection(TESTFILEPREFIX+"testdb")
        if runfrom106:
            db2.setexectrace(runfrom106)
        db2.cursor().execute("pragma journal_mode=delete")
        # deliberately don't read from cursor on connection 1 which will prevent a commit
        x=self.db.cursor().execute("select * from foo")
        db2.__enter__()
        db2.cursor().execute("insert into foo values(5)") # transaction is buffered in memory by SQLite
        try:
            db2.__exit__(None, None, None)
        except apsw.BusyError:
            pass
        # Ensure transaction was rolled back
        x.fetchall()
        for row in db2.cursor().execute("select * from foo where x=5"):
            self.fail("Transaction was not rolled back")
        db2.close()
        if runfrom106:
            return
        # Verify that error in tracer results in rollback
        self.db.__enter__()
        def h(*args):
            1/0
        self.db.cursor().execute("insert into foo values(6)")
        self.db.setexectrace(h)
        try:
            self.db.__exit__(None, None, None)
        except ZeroDivisionError:
            self.db.setexectrace(None)
            pass
        for row in self.db.cursor().execute("select * from foo where x=6"):
            self.fail("Transaction was not rolled back")

    def testIssue103(self):
        "Issue 103: Error handling when sqlite3_declare_vtab fails"
        class Source:
            def Create(self, *args):
                return "create table x(delete)", None
        self.db.createmodule("issue103", Source())
        try:
            self.db.cursor().execute("create virtual table foo using issue103()")
            1/0 # should not be reached
        except apsw.SQLError:
            assert "near \"delete\": syntax error" in str(sys.exc_info()[1])

    def testIssue106(self):
        "Issue 106: Profiling and tracing"
        traces=[]
        def tracer(cur, sql, bindings):
            sql=sql.lower().split()[0]
            if sql in ("savepoint", "release", "rollback"):
                traces.append(sql)
            return True
        self.testIssue98(tracer)
        self.assertTrue(len(traces)>=3)
        self.assertTrue("savepoint" in traces)
        self.assertTrue("release" in traces)
        self.assertTrue("rollback" in traces)

    def testIssue142(self):
        "Issue 142: bytes from system during dump"
        orig_strftime=time.strftime
        orig_getuser=getpass.getuser
        fh=[]
        try:
            time.strftime=lambda arg: BYTES(r"gjkTIMEJUNKhgjhg\xfe\xdf")
            getpass.getuser=lambda : BYTES(r"\x81\x82\x83gjkhgUSERJUNKjhg\xfe\xdf")
            import codecs
            fh=[codecs.open(TESTFILEPREFIX+"test-shell-"+t, "w+b", encoding="utf8") for t in ("in", "out", "err")]
            kwargs={"stdin": fh[0], "stdout": fh[1], "stderr": fh[2]}
            rows=(["correct"], ["horse"], ["battery"], ["staple"])
            self.db.cursor().execute("create table foo(x)")
            self.db.cursor().executemany("insert into foo values(?)", rows)
            shell=apsw.Shell(db=self.db, **kwargs)
            shell.command_dump([])
            fh[1].seek(0)
            out=fh[1].read()
            for row in rows:
                self.assertTrue(row[0] in out)
            self.assertTrue("TIMEJUNK" in out)
            self.assertTrue("USERJUNK" in out)
        finally:
            for f in fh:
                f.close()
            time.strftime=orig_strftime
            getpass.getuser=orig_getuser

    def testIssue186(self):
        "Issue 186: description cache between statements"
        cur=self.db.cursor()
        for i, row in enumerate(cur.execute("select 1; select 1,2; select 1,2,3; select 1,2,3,4;")):
            # this catches if the order of getting them makes a difference
            if i%2:
                self.assertEqual(len(cur.description), len(cur.getdescription()))
            else:
                self.assertEqual(len(cur.getdescription()), len(cur.description))
            self.assertEqual(len(cur.description), i+1)
        # check executemany too
        for i, row in enumerate(cur.executemany("select ?; select ?,?; select ?,?,?; select ?,?,?,?;", [
                (1, 1,2, 1,2,3, 1,2,3,4),
                (1, 1,2, 1,2,3, 1,2,3,4),
                ])):
            i%=4
            self.assertEqual(len(cur.getdescription()), i+1)
        # and the tracers
        def tracer(cursor, *args):
            self.assertEqual(len(cursor.getdescription()), expect)
            return True
        expect=1
        cur.setexectrace(tracer)
        cur.setrowtrace(tracer)
        for i, row in enumerate(cur.execute("select 1; select 1,2; select 1,2,3; select 1,2,3,4;")):
            expect+=1
        expect=1
        for i, row in enumerate(cur.executemany("select ?; select ?,?; select ?,?,?; select ?,?,?,?;", [
                (1, 1,2, 1,2,3, 1,2,3,4),
                (1, 1,2, 1,2,3, 1,2,3,4),
                ])):
            expect+=1
            if expect>4: expect=1

    def testTicket2158(self):
        "Check we are not affected by SQLite ticket #2158"
        # https://sqlite.org/cvstrac/tktview?tn=2158
        def dummy(x,y):
            if x<y: return -1
            if x>y: return 1
            return 0
        self.db.createcollation("dummy", dummy)
        cur=self.db.cursor()
        cur.execute("create table foo(x)")
        cur.executemany("insert into foo values(?)", randomintegers(20))
        for row in cur.execute("select * from foo order by x collate dummy"):
            pass
        self.db.createcollation("dummy", None)
        self.assertRaises(apsw.SQLError, cur.execute, "select * from foo order by x collate dummy")

    def testPysqliteRecursiveIssue(self):
        "Check an issue that affected pysqlite"
        # https://code.google.com/p/pysqlite/source/detail?r=260ee266d6686e0f87b0547c36b68a911e6c6cdb
        cur=self.db.cursor()
        cur.execute("create table a(x); create table b(y);")
        def foo():
            yield (1,)
            cur.execute("insert into a values(?)", (1,))
            yield (2,)
        self.assertRaises(apsw.ThreadingViolationError, cur.executemany, "insert into b 
values(?)", foo())

    def testWriteUnraiseable(self):
        "Verify writeunraiseable replacement function"
        def unraise():
            # We cause an unraiseable error to happen by writing to a
            # blob open for reading.  The close method called in the
            # destructor will then also give the error
            db=apsw.Connection(":memory:")
            rowid=next(db.cursor().execute("create table foo(x); insert into foo values(x'aabbccdd'); select rowid from foo"))[0]
            blob=db.blobopen("main", "foo", "x", rowid, False)
            try:
                blob.write(b("badd"))
            except apsw.ReadOnlyError:
                pass
            del db
            del blob
            gc.collect()
        # Normal excepthook
        self.assertRaisesUnraisable(apsw.ReadOnlyError, unraise)
        # excepthook with error to check PyErr_Display is called
        xx=sys.excepthook
        yy=sys.stderr
        sys.stderr=open(TESTFILEPREFIX+"errout.txt", "wt")
        def ehook(blah):
            1/0
        sys.excepthook=ehook
        unraise()
        sys.stderr.close()
        v=open(TESTFILEPREFIX+"errout.txt", "rt").read()
        deletefile(TESTFILEPREFIX+"errout.txt")
        self.assertTrue(len(v))
        sys.excepthook=xx
        sys.stderr=yy

    def testStatementCache(self, scsize=100):
        "Verify statement cache integrity"
        cur=self.db.cursor()
        cur.execute("create table foo(x,y)")
        cur.execute("create index foo_x on foo(x)")
        cur.execute("insert into foo values(1,2)")
        cur.execute("drop index foo_x")
        cur.execute("insert into foo values(1,2)") # cache hit, but needs reprepare
        cur.execute("drop table foo; create table foo(x)")
        try:
            cur.execute("insert into foo values(1,2)") # cache hit, but invalid sql
        except apsw.SQLError:
            pass
        cur.executemany("insert into foo values(?)", [[1],[2]])
        # overflow the statement cache
        l=[self.db.cursor().execute("select x from foo"+" "*i) for i in range(scsize+200)]
        del l
        gc.collect()
        # coverage
        l=[]
        for i in range(scsize+10):
            l.append(self.db.cursor().execute("select x from foo"+" "*i))
        for row in self.db.cursor().execute("select * from foo"):
            pass
        # other wrangling
        l=[self.db.cursor().execute("select x from foo") for i in range(scsize+200)]
        for i in range(scsize+200):
            for row in self.db.cursor().execute("select 
* from foo"+" "*i): pass del l gc.collect() db2=apsw.Connection(TESTFILEPREFIX+"testdb", statementcachesize=scsize) cur2=db2.cursor() cur2.execute("create table bar(x,y)") for _ in cur.execute("select * from foo"): pass db2.close() # Get some coverage - overflow apswbuffer recycle. 100 is # statementcache size, 256 is apswbufferrecycle bin size, and # 17 is to overflow l=[self.db.cursor().execute(u("select 3"+" "*i)) for i in range(100+256+17)] while l: l.pop().fetchall() def testStatementCacheZeroSize(self): "Rerun statement cache tests with a zero sized/disabled cache" self.db=apsw.Connection(TESTFILEPREFIX+"testdb", statementcachesize=-1) self.testStatementCache(-1) def testWikipedia(self): "Use front page of wikipedia to check unicode handling" # the text also includes characters that can't be represented in 16 bits text=u("""WIKIPEDIA\nEnglish\nThe Free Encyclopedia\n2 386 000+ articles\nDeutsch\nDie freie Enzyklop\\u00e4die\n753 000+ Artikel\nFran\\u00e7ais\nL\\u2019encyclop\\u00e9die libre\n662 000+ articles\nPolski\nWolna encyklopedia\n503 000+ hase\\u0142\n\\u65e5\\u672c\\u8a9e\n\\u30d5\\u30ea\\u30fc\\u767e\\u79d1\\u4e8b\\u5178\n492 000+ \\u8a18\\u4e8b\nItaliano\nL\\u2019enciclopedia libera\n456 000+ voci\nNederlands\nDe vrije encyclopedie\n440 000+ artikelen\nPortugu\\u00eas\nA enciclop\\u00e9dia livre\n380 000+ artigos\nEspa\\u00f1ol\nLa enciclopedia libre\n363 000+ art\\u00edculos\n\\u0420\\u0443\\u0441\\u0441\\u043a\\u0438\\u0439\n\\u0421\\u0432\\u043e\\u0431\\u043e\\u0434\\u043d\\u0430\\u044f \\u044d\\u043d\\u0446\\u0438\\u043a\\u043b\\u043e\\u043f\\u0435\\u0434\\u0438\\u044f\n285 000+ \\u0441\\u0442\\u0430\\u0442\\u0435\\u0439\nSearch \\u00b7 Suche \\u00b7 Rechercher \\u00b7 Szukaj \\u00b7 \\u691c\\u7d22 \\u00b7 Ricerca \\u00b7 Zoeken \\u00b7 Busca \\u00b7 Buscar\n\\u041f\\u043e\\u0438\\u0441\\u043a \\u00b7 S\\u00f6k \\u00b7 \\u641c\\u7d22 \\u00b7 S\\u00f8k \\u00b7 Haku \\u00b7 Cerca \\u00b7 Suk \\u00b7 \\u041f\\u043e\\u0448\\u0443\\u043a \\u00b7 
C\\u0103utare \\u00b7 Ara\n 100 000+ \nCatal\\u00e0 \\u00b7 Deutsch \\u00b7 English \\u00b7 Espa\\u00f1ol \\u00b7 Fran\\u00e7ais \\u00b7 Italiano \\u00b7 Nederlands \\u00b7 \\u65e5\\u672c\\u8a9e \\u00b7 Norsk (bokm\\u00e5l) \\u00b7 Polski \\u00b7 Portugu\\u00eas \\u00b7 \\u0420\\u0443\\u0441\\u0441\\u043a\\u0438\\u0439 \\u00b7 Rom\\u00e2n\\u0103 \\u00b7 Suomi \\u00b7 Svenska \\u00b7 T\\u00fcrk\\u00e7e \\u00b7 \\u0423\\u043a\\u0440\\u0430\\u0457\\u043d\\u0441\\u044c\\u043a\\u0430 \\u00b7 Volap\\u00fck \\u00b7 \\u4e2d\\u6587\n 10 000+ \n\\u0627\\u0644\\u0639\\u0631\\u0628\\u064a\\u0629 \\u00b7 Asturianu \\u00b7 Krey\\u00f2l Ayisyen \\u00b7 Az\\u0259rbaycan / \\u0622\\u0630\\u0631\\u0628\\u0627\\u064a\\u062c\\u0627\\u0646 \\u062f\\u064a\\u0644\\u06cc \\u00b7 \\u09ac\\u09be\\u0982\\u09b2\\u09be \\u00b7 \\u0411\\u0435\\u043b\\u0430\\u0440\\u0443\\u0441\\u043a\\u0430\\u044f (\\u0410\\u043a\\u0430\\u0434\\u044d\\u043c\\u0456\\u0447\\u043d\\u0430\\u044f) \\u00b7 \\u09ac\\u09bf\\u09b7\\u09cd\\u09a3\\u09c1\\u09aa\\u09cd\\u09b0\\u09bf\\u09af\\u09bc\\u09be \\u09ae\\u09a3\\u09bf\\u09aa\\u09c1\\u09b0\\u09c0 \\u00b7 Bosanski \\u00b7 Brezhoneg \\u00b7 \\u0411\\u044a\\u043b\\u0433\\u0430\\u0440\\u0441\\u043a\\u0438 \\u00b7 \\u010cesky \\u00b7 Cymraeg \\u00b7 Dansk \\u00b7 Eesti \\u00b7 \\u0395\\u03bb\\u03bb\\u03b7\\u03bd\\u03b9\\u03ba\\u03ac \\u00b7 Esperanto \\u00b7 Euskara \\u00b7 \\u0641\\u0627\\u0631\\u0633\\u06cc \\u00b7 Galego \\u00b7 \\ud55c\\uad6d\\uc5b4 \\u00b7 \\u0939\\u093f\\u0928\\u094d\\u0926\\u0940 \\u00b7 Hrvatski \\u00b7 Ido \\u00b7 Bahasa Indonesia \\u00b7 \\u00cdslenska \\u00b7 \\u05e2\\u05d1\\u05e8\\u05d9\\u05ea \\u00b7 Basa Jawa \\u00b7 \\u10e5\\u10d0\\u10e0\\u10d7\\u10e3\\u10da\\u10d8 \\u00b7 Kurd\\u00ee / \\u0643\\u0648\\u0631\\u062f\\u06cc \\u00b7 Latina \\u00b7 Lumbaart \\u00b7 Latvie\\u0161u \\u00b7 L\\u00ebtzebuergesch \\u00b7 Lietuvi\\u0173 \\u00b7 Magyar \\u00b7 \\u041c\\u0430\\u043a\\u0435\\u0434\\u043e\\u043d\\u0441\\u043a\\u0438 \\u00b7 
\\u092e\\u0930\\u093e\\u0920\\u0940 \\u00b7 Bahasa Melayu \\u00b7 \\u0928\\u0947\\u092a\\u093e\\u0932 \\u092d\\u093e\\u0937\\u093e \\u00b7 Norsk (nynorsk) \\u00b7 Nnapulitano \\u00b7 Occitan \\u00b7 Piemont\\u00e8is \\u00b7 Plattd\\u00fc\\u00fctsch \\u00b7 Shqip \\u00b7 Sicilianu \\u00b7 Simple English \\u00b7 Sinugboanon \\u00b7 Sloven\\u010dina \\u00b7 Sloven\\u0161\\u010dina \\u00b7 \\u0421\\u0440\\u043f\\u0441\\u043a\\u0438 \\u00b7 Srpskohrvatski / \\u0421\\u0440\\u043f\\u0441\\u043a\\u043e\\u0445\\u0440\\u0432\\u0430\\u0442\\u0441\\u043a\\u0438 \\u00b7 Basa Sunda \\u00b7 Tagalog \\u00b7 \\u0ba4\\u0bae\\u0bbf\\u0bb4\\u0bcd \\u00b7 \\u0c24\\u0c46\\u0c32\\u0c41\\u0c17\\u0c41 \\u00b7 \\u0e44\\u0e17\\u0e22 \\u00b7 Ti\\u1ebfng Vi\\u1ec7t \\u00b7 Walon\n 1 000+ \nAfrikaans \\u00b7 Alemannisch \\u00b7 \\u12a0\\u121b\\u122d\\u129b \\u00b7 Aragon\\u00e9s \\u00b7 Arm\\u00e3neashce \\u00b7 Arpitan \\u00b7 B\\u00e2n-l\\u00e2m-g\\u00fa \\u00b7 Basa Banyumasan \\u00b7 \\u0411\\u0435\\u043b\\u0430\\u0440\\u0443\\u0441\\u043a\\u0430\\u044f (\\u0422\\u0430\\u0440\\u0430\\u0448\\u043a\\u0435\\u0432i\\u0446\\u0430) \\u00b7 \\u092d\\u094b\\u091c\\u092a\\u0941\\u0930\\u0940 \\u00b7 Boarisch \\u00b7 Corsu \\u00b7 \\u0427\\u0103\\u0432\\u0430\\u0448 \\u00b7 Deitsch \\u00b7 \\u078b\\u07a8\\u0788\\u07ac\\u0780\\u07a8 \\u00b7 Eald Englisc \\u00b7 F\\u00f8royskt \\u00b7 Frysk \\u00b7 Furlan \\u00b7 Gaeilge \\u00b7 Gaelg \\u00b7 G\\u00e0idhlig \\u00b7 \\u53e4\\u6587 / \\u6587\\u8a00\\u6587 \\u00b7 \\u02bb\\u014clelo Hawai\\u02bbi \\u00b7 \\u0540\\u0561\\u0575\\u0565\\u0580\\u0565\\u0576 \\u00b7 Hornjoserbsce \\u00b7 Ilokano \\u00b7 Interlingua \\u00b7 \\u0418\\u0440\\u043e\\u043d \\u00e6\\u0432\\u0437\\u0430\\u0433 \\u00b7 \\u0c95\\u0ca8\\u0ccd\\u0ca8\\u0ca1 \\u00b7 Kapampangan \\u00b7 Kasz\\u00ebbsczi \\u00b7 Kernewek \\u00b7 \\u1797\\u17b6\\u179f\\u17b6\\u1781\\u17d2\\u1798\\u17c2\\u179a \\u00b7 Ladino / \\u05dc\\u05d0\\u05d3\\u05d9\\u05e0\\u05d5 \\u00b7 Ligure \\u00b7 Limburgs \\u00b7 
Ling\\u00e1la \\u00b7 \\u0d2e\\u0d32\\u0d2f\\u0d3e\\u0d33\\u0d02 \\u00b7 Malti \\u00b7 M\\u0101ori \\u00b7 \\u041c\\u043e\\u043d\\u0433\\u043e\\u043b \\u00b7 N\\u0101huatlaht\\u014dlli \\u00b7 Nedersaksisch \\u00b7 \\u0928\\u0947\\u092a\\u093e\\u0932\\u0940 \\u00b7 Nouormand \\u00b7 Novial \\u00b7 O\\u2018zbek \\u00b7 \\u092a\\u093e\\u0934\\u093f \\u00b7 Pangasin\\u00e1n \\u00b7 \\u067e\\u069a\\u062a\\u0648 \\u00b7 \\u049a\\u0430\\u0437\\u0430\\u049b\\u0448\\u0430 \\u00b7 Ripoarisch \\u00b7 Rumantsch \\u00b7 Runa Simi \\u00b7 \\u0938\\u0902\\u0938\\u094d\\u0915\\u0943\\u0924\\u092e\\u094d \\u00b7 S\\u00e1megiella \\u00b7 Scots \\u00b7 Kiswahili \\u00b7 Tarand\\u00edne \\u00b7 Tatar\\u00e7a \\u00b7 \\u0422\\u043e\\u04b7\\u0438\\u043a\\u04e3 \\u00b7 Lea faka-Tonga \\u00b7 T\\u00fcrkmen \\u00b7 \\u0627\\u0631\\u062f\\u0648 \\u00b7 V\\u00e8neto \\u00b7 V\\u00f5ro \\u00b7 West-Vlams \\u00b7 Winaray \\u00b7 \\u5434\\u8bed \\u00b7 \\u05d9\\u05d9\\u05b4\\u05d3\\u05d9\\u05e9 \\u00b7 \\u7cb5\\u8a9e \\u00b7 Yor\\u00f9b\\u00e1 \\u00b7 Zazaki \\u00b7 \\u017demait\\u0117\\u0161ka\n 100+ \n\\u0710\\u072a\\u0721\\u071d\\u0710 \\u00b7 Ava\\u00f1e\\u2019\\u1ebd \\u00b7 \\u0410\\u0432\\u0430\\u0440 \\u00b7 Aymara \\u00b7 Bamanankan \\u00b7 \\u0411\\u0430\\u0448\\u04a1\\u043e\\u0440\\u0442 \\u00b7 Bikol Central \\u00b7 \\u0f56\\u0f7c\\u0f51\\u0f0b\\u0f61\\u0f72\\u0f42 \\u00b7 Chamoru \\u00b7 Chavacano de Zamboanga \\u00b7 Bislama \\u00b7 Din\\u00e9 Bizaad \\u00b7 Dolnoserbski \\u00b7 Emigli\\u00e0n-Rumagn\\u00f2l \\u00b7 E\\u028begbe \\u00b7 \\u06af\\u06cc\\u0644\\u06a9\\u06cc \\u00b7 \\u0a97\\u0ac1\\u0a9c\\u0ab0\\u0abe\\u0aa4\\u0ac0 \\u00b7 \\U00010332\\U0001033f\\U00010344\\U00010339\\U00010343\\U0001033a \\u00b7 Hak-k\\u00e2-fa / \\u5ba2\\u5bb6\\u8a71 \\u00b7 Igbo \\u00b7 \\u1403\\u14c4\\u1483\\u144e\\u1450\\u1466 / Inuktitut \\u00b7 Interlingue \\u00b7 \\u0915\\u0936\\u094d\\u092e\\u0940\\u0930\\u0940 / \\u0643\\u0634\\u0645\\u064a\\u0631\\u064a \\u00b7 Kongo \\u00b7 
\\u041a\\u044b\\u0440\\u0433\\u044b\\u0437\\u0447\\u0430 \\u00b7 \\u0e9e\\u0eb2\\u0eaa\\u0eb2\\u0ea5\\u0eb2\\u0ea7 \\u00b7 lojban \\u00b7 Malagasy \\u00b7 M\\u0101z\\u0259r\\u016bni / \\u0645\\u0627\\u0632\\u0650\\u0631\\u0648\\u0646\\u06cc \\u00b7 M\\u00ecng-d\\u0115\\u0324ng-ng\\u1e73\\u0304 \\u00b7 \\u041c\\u043e\\u043b\\u0434\\u043e\\u0432\\u0435\\u043d\\u044f\\u0441\\u043a\\u044d \\u00b7 \\u1017\\u1019\\u102c\\u1005\\u102c \\u00b7 Ekakair\\u0169 Naoero \\u00b7 N\\u0113hiyaw\\u0113win / \\u14c0\\u1426\\u1403\\u152d\\u140d\\u140f\\u1423 \\u00b7 Norfuk / Pitkern \\u00b7 \\u041d\\u043e\\u0445\\u0447\\u0438\\u0439\\u043d \\u00b7 \\u0b13\\u0b21\\u0b3c\\u0b3f\\u0b06 \\u00b7 Afaan Oromoo \\u00b7 \\u0985\\u09b8\\u09ae\\u09c0\\u09af\\u09bc\\u09be \\u00b7 \\u0a2a\\u0a70\\u0a1c\\u0a3e\\u0a2c\\u0a40 / \\u067e\\u0646\\u062c\\u0627\\u0628\\u06cc \\u00b7 Papiamentu \\u00b7 Q\\u0131r\\u0131mtatarca \\u00b7 Romani / \\u0930\\u094b\\u092e\\u093e\\u0928\\u0940 \\u00b7 Kinyarwanda \\u00b7 Gagana S\\u0101moa \\u00b7 Sardu \\u00b7 Seeltersk \\u00b7 \\u0dc3\\u0dd2\\u0d82\\u0dc4\\u0dbd \\u00b7 \\u0633\\u0646\\u068c\\u064a \\u00b7 \\u0421\\u043b\\u043e\\u0432\\u0463\\u043d\\u044c\\u0441\\u043a\\u044a \\u00b7 Af Soomaali \\u00b7 SiSwati \\u00b7 Reo Tahiti \\u00b7 Taqbaylit \\u00b7 Tetun \\u00b7 \\u1275\\u130d\\u122d\\u129b \\u00b7 Tok Pisin \\u00b7 \\u13e3\\u13b3\\u13a9 \\u00b7 \\u0423\\u0434\\u043c\\u0443\\u0440\\u0442 \\u00b7 Uyghur / \\u0626\\u06c7\\u064a\\u063a\\u06c7\\u0631\\u0686\\u0647 \\u00b7 Tshiven\\u1e13a \\u00b7 Wollof \\u00b7 isiXhosa \\u00b7 Ze\\u00eauws \\u00b7 isiZulu\nOther languages \\u00b7 Weitere Sprachen \\u00b7 \\u4ed6\\u306e\\u8a00\\u8a9e \\u00b7 Kompletna lista j\\u0119zyk\\u00f3w \\u00b7 \\u5176\\u4ed6\\u8bed\\u8a00 \\u00b7 \\u0414\\u0440\\u0443\\u0433\\u0438\\u0435 \\u044f\\u0437\\u044b\\u043a\\u0438 \\u00b7 Aliaj lingvoj \\u00b7 \\ub2e4\\ub978 \\uc5b8\\uc5b4 \\u00b7 Ng\\u00f4n ng\\u1eef kh\\u00e1c""") self.db.close() for encoding in "UTF-16", "UTF-16le", 
"UTF-16be", "UTF-8":
            if os.path.exists(TESTFILEPREFIX+"testdb"):
                deletefile(TESTFILEPREFIX+"testdb")
            db=apsw.Connection(TESTFILEPREFIX+"testdb")
            c=db.cursor()
            c.execute("pragma encoding=\"%s\"" % (encoding,))
            for row in c.execute("pragma encoding"):
                # we use startswith as UTF-16 will be returned with le/be suffix
                self.assertTrue(row[0].startswith(encoding))
            c.execute("create table foo(x); insert into foo values(?)", (text,))
            for row in c.execute("select * from foo"):
                self.assertEqual(row[0], text)
            db.close()

    # calls that need protection
    calls={
        'sqlite3api': { # items of interest - sqlite3 calls
            'match': re.compile(r"(sqlite3_[A-Za-z0-9_]+)\s*\("),
            # what must also be on same or preceding line
            'needs': re.compile("PYSQLITE(_|_BLOB_|_CON_|_CUR_|_SC_|_VOID_|_BACKUP_)CALL"),

            # except if match.group(1) matches this - these don't
            # acquire db mutex so no need to wrap (determined by
            # examining sqlite3.c).  If they acquire non-database
            # mutexes then that is ok.

            # In the case of sqlite3_result_*|declare_vtab, the mutex
            # is already held by enclosing sqlite3_step and the
            # methods will only be called from that same thread so it
            # isn't a problem.
            'skipcalls': re.compile("^sqlite3_(blob_bytes|column_count|bind_parameter_count|data_count|vfs_.+|changes|total_changes|get_autocommit|last_insert_rowid|complete|interrupt|limit|free|threadsafe|value_.+|libversion|enable_shared_cache|initialize|shutdown|config|memory_.+|soft_heap_limit(64)?|randomness|db_readonly|db_filename|release_memory|status64|result_.+|user_data|mprintf|aggregate_context|declare_vtab|backup_remaining|backup_pagecount|sourceid|uri_.+)$"),
            # also ignore this file
            'skipfiles': re.compile(r"[/\\]apsw.c$"),
            # error message
            'desc': "sqlite3_ calls must wrap with PYSQLITE_CALL",
            },

        'inuse': {
            'match': re.compile(r"(convert_column_to_pyobject|statementcache_prepare|statementcache_finalize|statementcache_next)\s*\("),
            'needs': re.compile("INUSE_CALL"),
            'desc': "call needs INUSE wrapper",
            },
        }

    def sourceCheckMutexCall(self, filename, name, lines):
        # we check that various calls are wrapped with various macros
        for i,line in enumerate(lines):
            if "PYSQLITE_CALL" in line and "Py" in line:
                self.fail("%s: %s() line %d - Py call while GIL released - %s" % (filename, name, i, line.strip()))
            for k,v in self.calls.items():
                if v.get('skipfiles', None) and v['skipfiles'].match(filename):
                    continue
                mo=v['match'].search(line)
                if mo:
                    func=mo.group(1)
                    if v.get('skipcalls', None) and v['skipcalls'].match(func):
                        continue
                    if not v["needs"].search(line) and not v["needs"].search(lines[i-1]):
                        self.fail("%s: %s() line %d call to %s(): %s - %s\n" % (filename, name, i, func, v['desc'], line.strip()))

    def sourceCheckFunction(self, filename, name, lines):
        # not further checked
        if name.split("_")[0] in ("ZeroBlobBind", "APSWVFS", "APSWVFSFile", "APSWBuffer", "FunctionCBInfo", "apswurifilename"):
            return
        checks={
            "APSWCursor": {
                "skip": ("dealloc", "init", "dobinding", "dobindings", "doexectrace", "dorowtrace", "step", "close", "close_internal"),
                "req": {
                    "use": "CHECK_USE",
                    "closed": "CHECK_CURSOR_CLOSED",
                    },
                "order": ("use", "closed")
                },
            "Connection": {
                "skip": 
("internal_cleanup", "dealloc", "init", "close", "interrupt", "close_internal", "remove_dependent", "readonly", "getmainfilename", "db_filename"),
                "req": {
                    "use": "CHECK_USE",
                    "closed": "CHECK_CLOSED",
                    },
                "order": ("use", "closed")
                },
            "APSWBlob": {
                "skip": ("dealloc", "init", "close", "close_internal"),
                "req": {
                    "use": "CHECK_USE",
                    "closed": "CHECK_BLOB_CLOSED"
                    },
                "order": ("use", "closed")
                },
            "APSWBackup": {
                "skip": ("dealloc", "init", "close_internal", "get_remaining", "get_pagecount"),
                "req": {
                    "use": "CHECK_USE",
                    "closed": "CHECK_BACKUP_CLOSED"
                    },
                "order": ("use", "closed")
                },
            "apswvfs": {
                "req": {
                    "preamble": "VFSPREAMBLE",
                    "tb": "AddTraceBackHere",
                    "postamble": "VFSPOSTAMBLE"
                    },
                "order": ("preamble", "tb", "postamble")
                },
            "apswvfspy": {
                "req": {
                    "check": "CHECKVFSPY",
                    "notimpl": "VFSNOTIMPLEMENTED(%(base)s,"
                    },
                "order": ("check", "notimpl"),
                },
            "apswvfspy_unregister": {
                "req": {
                    "check": "CHECKVFSPY",
                    },
                },
            "apswvfsfile": {
                "req": {
                    "preamble": "FILEPREAMBLE",
                    "postamble": "FILEPOSTAMBLE",
                    },
                "order": ("preamble", "postamble")
                },
            "apswvfsfilepy": {
                "skip": ("xClose", ),
                "req": {
                    "check": "CHECKVFSFILEPY",
                    "notimpl": "VFSFILENOTIMPLEMENTED(%(base)s,"
                    },
                "order": ("check", "notimpl"),
                },
            }
        prefix,base=name.split("_", 1)
        if name in checks:
            checker=checks[name]
        elif prefix in checks:
            checker=checks[prefix]
        else:
            self.fail(filename+": "+prefix+" not in checks")
        if base in checker.get("skip", ()):
            return
        format={"base": base, "prefix": prefix}
        found={}
        for k in checker["req"]:
            found[k]=None
        # check the lines
        for i,line in enumerate(lines):
            for k,v in checker["req"].items():
                v=v % format
                if v in line and found[k] is None:
                    found[k]=i
        # check they are present
        for k,v in checker["req"].items():
            if found[k] is None:
                v=v%format
                self.fail(filename+": "+k+" "+v+" missing in "+name)
        # check order
        order=checker.get("order", ())
        for i in range(len(order)-2):
            b4=order[i]
            after=order[i+1]
            if found[b4]>found[after]:
                self.fail(filename+": "+checker["req"][b4]%format+" should be 
before "+checker["req"][after]%format+" in "+name)
        return

    def testSourceChecks(self):
        "Check various source code issues"
        # We expect a coding style where the functions are named
        # Object_method, are at the start of the line and have a first
        # parameter named self.
        if not os.path.exists("src/apsw.c"):
            return
        for filename in glob.glob("src/*.c"):
            # check not using C++ style comments
            code=read_whole_file(filename, "rt").replace("http://", "http:__").replace("https://", "https:__")
            if "//" in code:
                self.fail("// style comment in "+filename)
            # check check funcs
            funcpat1=re.compile(r"^(\w+_\w+)\s*\(\s*\w+\s*\*\s*self")
            funcpat2=re.compile(r"^(\w+)\s*\(")
            name1=None
            name2=None
            lines=[]
            infunc=0
            for line in read_whole_file(filename, "rt").split("\n"):
                if line.startswith("}") and infunc:
                    if infunc==1:
                        self.sourceCheckMutexCall(filename, name1, lines)
                        self.sourceCheckFunction(filename, name1, lines)
                    elif infunc==2:
                        self.sourceCheckMutexCall(filename, name2, lines)
                    else:
                        assert False
                    infunc=0
                    lines=[]
                    name1=None
                    name2=None
                    continue
                if name1 and line.startswith("{"):
                    infunc=1
                    continue
                if name2 and line.startswith("{"):
                    infunc=2
                    continue
                if infunc:
                    lines.append(line)
                    continue
                m=funcpat1.match(line)
                if m:
                    name1=m.group(1)
                    continue
                m=funcpat2.match(line)
                if m:
                    name2=m.group(1)
                    continue

    def testConfig(self):
        "Verify sqlite3_config wrapper"
        # we need to ensure there are no outstanding sqlite objects
        self.db=None
        gc.collect()
        self.assertRaises(apsw.MisuseError, apsw.config, apsw.SQLITE_CONFIG_MEMSTATUS, True)
        apsw.shutdown()
        try:
            self.assertRaises(TypeError, apsw.config)
            self.assertRaises(TypeError, apsw.config, "chicken")
            apsw.config(apsw.SQLITE_CONFIG_SINGLETHREAD)
            apsw.config(long(apsw.SQLITE_CONFIG_SINGLETHREAD))
            self.assertRaises(TypeError, apsw.config, apsw.SQLITE_CONFIG_SINGLETHREAD, 2)
            self.assertRaises(TypeError, apsw.config, apsw.SQLITE_CONFIG_MEMSTATUS)
            apsw.config(apsw.SQLITE_CONFIG_MEMSTATUS, True)
            apsw.config(apsw.SQLITE_CONFIG_MEMSTATUS, False)
            self.assertRaises(TypeError, apsw.config, 89748937)
            x=long(0x7fffffff)
            self.assertRaises(OverflowError, apsw.config, x*x*x*x)
            self.assertTrue(apsw.config(apsw.SQLITE_CONFIG_PCACHE_HDRSZ)>=0)
            apsw.config(apsw.SQLITE_CONFIG_PMASZ, -1)
        finally:
            # put back to normal
            apsw.config(apsw.SQLITE_CONFIG_SERIALIZED)
            apsw.config(apsw.SQLITE_CONFIG_MEMSTATUS, True)
            apsw.initialize()

    def testMemory(self):
        "Verify memory tracking functions"
        self.assertNotEqual(apsw.memoryused(), 0)
        self.assertTrue(apsw.memoryhighwater() >= apsw.memoryused())
        self.assertRaises(TypeError, apsw.memoryhighwater, "eleven")
        apsw.memoryhighwater(True)
        self.assertEqual(apsw.memoryhighwater(), apsw.memoryused())
        self.assertRaises(TypeError, apsw.softheaplimit, 1, 2)
        apsw.softheaplimit(0)
        self.assertRaises(TypeError, apsw.releasememory, 1, 2)
        res=apsw.releasememory(0x7fffffff)
        self.assertTrue(type(res) in (int, long))
        apsw.softheaplimit(l("0x1234567890abc"))
        self.assertEqual(l("0x1234567890abc"), apsw.softheaplimit(l("0x1234567890abe")))

    def testRandomness(self):
        "Verify randomness routine"
        self.assertRaises(TypeError, apsw.randomness, "three")
        self.assertRaises(OverflowError, apsw.randomness, l("0xffffffffee"))
        self.assertRaises(ValueError, apsw.randomness, -2)
        self.assertEqual(0, len(apsw.randomness(0)))
        self.assertEqual(1, len(apsw.randomness(1)))
        self.assertEqual(16383, len(apsw.randomness(16383)))
        self.assertNotEqual(apsw.randomness(77), apsw.randomness(77))

    def testSqlite3Pointer(self):
        "Verify getting underlying sqlite3 pointer"
        self.assertRaises(TypeError, self.db.sqlite3pointer, 7)
        self.assertTrue(type(self.db.sqlite3pointer()) in (int,long))
        self.assertEqual(self.db.sqlite3pointer(), self.db.sqlite3pointer())
        self.assertNotEqual(self.db.sqlite3pointer(), apsw.Connection(":memory:").sqlite3pointer())

    def testPickle(self, module=None):
        "Verify data etc can be pickled"
        if module==None:
            import pickle
            self.testPickle(pickle)
            try:
                import cPickle
                self.testPickle(cPickle)
            except ImportError:
                pass
            return
        import pickle
        PicklingError=pickle.PicklingError
        try:
            import cPickle
            PicklingError=(PicklingError, cPickle.PicklingError)
        except ImportError:
            pass
        # work out what protocol versions we can use
        versions=[]
        for num in range(-1, 20):
            try:
                module.dumps(3, num)
                versions.append(num)
            except ValueError:
                pass
        # some objects to try pickling
        vals=test_types_vals
        cursor=self.db.cursor()
        cursor.execute("create table if not exists t(i,x)")
        def canpickle(val):
            if py3:
                return True
            # Python <= 2.5 wide builds screws up unicode codepoints
            # above 0xffff with pickle
            if sys.version_info<(2,6) and isinstance(val, unicode):
                for a in val:
                    if ord(a)>0xffff:
                        return False
            return not isinstance(val, buffer)
        cursor.execute("BEGIN")
        cursor.executemany("insert into t values(?,?)", [(i,v) for i,v in enumerate(vals) if canpickle(v)])
        cursor.execute("COMMIT")
        for ver in versions:
            for row in cursor.execute("select * from t"):
                self.assertEqual(row, module.loads(module.dumps(row, ver)))
                rownum, val=row
                if type(vals[rownum]) is float:
                    self.assertAlmostEqual(vals[rownum], val)
                else:
                    self.assertEqual(vals[rownum], val)
            # can't pickle cursors
            try:
                module.dumps(cursor, ver)
            except TypeError:
                pass
            except PicklingError:
                pass
            # some versions can pickle the db, but give a zeroed db back
            db=None
            try:
                db=module.loads(module.dumps(self.db, ver))
            except TypeError:
                pass
            if db is not None:
                self.assertRaises(apsw.ConnectionClosedError, db.db_filename, "main")
                self.assertRaises(apsw.ConnectionClosedError, db.cursor)
                self.assertRaises(apsw.ConnectionClosedError, db.getautocommit)

    def testStatus(self):
        "Verify status function"
        self.assertRaises(TypeError, apsw.status, "zebra")
        self.assertRaises(apsw.MisuseError, apsw.status, 2323)
        for i in apsw.mapping_status:
            if type(i)!=type(""):
                continue
            res=apsw.status(getattr(apsw, i))
            self.assertEqual(len(res), 2)
            self.assertEqual(type(res), tuple)
            self.assertTrue(res[0]<=res[1])

    def testDBStatus(self):
        "Verify db status function"
        self.assertRaises(TypeError, self.db.status, 
"zebra") self.assertRaises(apsw.SQLError, self.db.status, 2323) for i in apsw.mapping_db_status: if type(i)!=type(""): continue res=self.db.status(getattr(apsw, i)) self.assertEqual(len(res), 2) self.assertEqual(type(res), tuple) if i!="SQLITE_DBSTATUS_CACHE_USED": self.assertTrue(res[0]<=res[1]) def testZeroBlob(self): "Verify handling of zero blobs" self.assertRaises(TypeError, apsw.zeroblob) self.assertRaises(TypeError, apsw.zeroblob, "foo") self.assertRaises(TypeError, apsw.zeroblob, -7) self.assertRaises(TypeError, apsw.zeroblob, size=27) self.assertRaises(OverflowError, apsw.zeroblob, 4000000000) cur=self.db.cursor() cur.execute("create table foo(x)") cur.execute("insert into foo values(?)", (apsw.zeroblob(27),)) v=next(cur.execute("select * from foo"))[0] self.assertEqual(v, b(r"\x00"*27)) # Make sure inheritance works class multi: def __init__(self, *args): self.foo=3 class derived(apsw.zeroblob): def __init__(self, num): #multi.__init__(self) apsw.zeroblob.__init__(self, num) cur.execute("delete from foo; insert into foo values(?)", (derived(28),)) v=next(cur.execute("select * from foo"))[0] self.assertEqual(v, b(r"\x00"*28)) self.assertEqual(apsw.zeroblob(91210).length(), 91210) def testBlobIO(self): "Verify Blob input/output" cur=self.db.cursor() rowid=next(cur.execute("create table foo(x blob); insert into foo values(zeroblob(98765)); select rowid from foo"))[0] self.assertRaises(TypeError, self.db.blobopen, 1) self.assertRaises(TypeError, self.db.blobopen, u("main"), "foo\xf3") if sys.version_info>=(2,4): # Bug in python 2.3 gives internal error when complex is # passed to PyArg_ParseTuple for Long instead of raising # TypeError. 
Corrected in 2.4 self.assertRaises(TypeError, self.db.blobopen, u("main"), "foo", "x", complex(-1,-1), True) self.assertRaises(TypeError, self.db.blobopen, u("main"), "foo", "x", rowid, True, False) self.assertRaises(apsw.SQLError, self.db.blobopen, "main", "foo", "x", rowid+27, False) self.assertRaises(apsw.SQLError, self.db.blobopen, "foo", "foo" , "x", rowid, False) self.assertRaises(apsw.SQLError, self.db.blobopen, "main", "x" , "x", rowid, False) self.assertRaises(apsw.SQLError, self.db.blobopen, "main", "foo" , "y", rowid, False) blobro=self.db.blobopen("main", "foo", "x", rowid, False) # sidebar: check they can't be manually created self.assertRaises(TypeError, type(blobro)) # check vals self.assertEqual(blobro.length(), 98765) self.assertEqual(blobro.length(), 98765) self.assertEqual(blobro.read(0), BYTES("")) zero=BYTES(r"\x00") step=5 # must be exact multiple of size assert(blobro.length()%step==0) for i in range(0,98765,step): x=blobro.read(step) self.assertEqual(zero*step, x) x=blobro.read(10) self.assertEqual(x, BYTES("")) blobro.seek(0,1) self.assertEqual(blobro.tell(), 98765) blobro.seek(0) self.assertEqual(blobro.tell(), 0) self.assertEqual(len(blobro.read(11119999)), 98765) blobro.seek(2222) self.assertEqual(blobro.tell(), 2222) blobro.seek(0,0) self.assertEqual(blobro.tell(), 0) self.assertEqual(blobro.read(), BYTES(r"\x00"*98765)) blobro.seek(-3,2) self.assertEqual(blobro.read(), BYTES(r"\x00"*3)) # check types self.assertRaises(TypeError, blobro.read, "foo") self.assertRaises(TypeError, blobro.tell, "foo") self.assertRaises(TypeError, blobro.seek) self.assertRaises(TypeError, blobro.seek, "foo", 1) self.assertRaises(TypeError, blobro.seek, 0, 1, 2) self.assertRaises(ValueError, blobro.seek, 0, -3) self.assertRaises(ValueError, blobro.seek, 0, 3) # can't seek before begining or after end of file self.assertRaises(ValueError, blobro.seek, -1, 0) self.assertRaises(ValueError, blobro.seek, 25, 1) self.assertRaises(ValueError, blobro.seek, 25, 2) 
        self.assertRaises(ValueError, blobro.seek, 100000, 0)
        self.assertRaises(ValueError, blobro.seek, -100000, 1)
        self.assertRaises(ValueError, blobro.seek, -100000, 2)
        # close testing
        blobro.seek(0,0)
        self.assertRaises(apsw.ReadOnlyError, blobro.write, b("kermit was here"))
        # you get the error on the close too, and blob is always closed - sqlite ticket #2815
        self.assertRaises(apsw.ReadOnlyError, blobro.close)
        # check can't work on closed blob
        self.assertRaises(ValueError, blobro.read)
        self.assertRaises(ValueError, blobro.readinto, BYTES("ab"))
        self.assertRaises(ValueError, blobro.seek, 0, 0)
        self.assertRaises(ValueError, blobro.tell)
        self.assertRaises(ValueError, blobro.write, "abc")
        # readinto tests
        rowidri=self.db.cursor().execute("insert into foo values(x'112233445566778899aabbccddeeff'); select last_insert_rowid()").fetchall()[0][0]
        blobro=self.db.blobopen("main", "foo", "x", rowidri, False)
        self.assertRaises(TypeError, blobro.readinto)
        self.assertRaises(TypeError, blobro.readinto, 3)
        buffers=[]
        import array
        if sys.version_info<(3,):
            buffers.append(array.array("c", "\0\0\0\0"))
        else:
            buffers.append(array.array("b", b(r"\0\0\0\0")))
        if sys.version_info>=(2,6):
            if sys.version_info<(3,):
                buffers.append(bytearray("\0\0\0\0"))
            else:
                buffers.append(bytearray(b(r"\0\0\0\0")))

        # bytearray returns ints rather than chars so a fixup
        def _fixup(c):
            if type(c)==int:
                if py3:
                    return bytes([c])
                else:
                    return chr(c)
            return c

        for buf in buffers:
            self.assertRaises(TypeError, blobro.readinto)
            self.assertRaises(TypeError, blobro.readinto, buf, buf)
            self.assertRaises(TypeError, blobro.readinto, buf, 1, buf)
            self.assertRaises(TypeError, blobro.readinto, buf, 1, 1, buf)
            blobro.seek(0)
            blobro.readinto(buf, 1, 1)
            self.assertEqual(_fixup(buf[0]), BYTES(r"\x00"))
            self.assertEqual(_fixup(buf[1]), BYTES(r"\x11"))
            self.assertEqual(_fixup(buf[2]), BYTES(r"\x00"))
            self.assertEqual(_fixup(buf[3]), BYTES(r"\x00"))
            self.assertEqual(len(buf), 4)
            blobro.seek(3)
            blobro.readinto(buf)

            def check_unchanged():
                self.assertEqual(_fixup(buf[0]), BYTES(r"\x44"))
                self.assertEqual(_fixup(buf[1]), BYTES(r"\x55"))
                self.assertEqual(_fixup(buf[2]), BYTES(r"\x66"))
                self.assertEqual(_fixup(buf[3]), BYTES(r"\x77"))
                self.assertEqual(len(buf), 4)

            check_unchanged()
            blobro.seek(14)
            # too much requested
            self.assertRaises(ValueError, blobro.readinto, buf, 1)
            check_unchanged()
            # bounds errors
            self.assertRaises(ValueError, blobro.readinto, buf, 1, -1)
            self.assertRaises(ValueError, blobro.readinto, buf, 1, 7)
            self.assertRaises(ValueError, blobro.readinto, buf, -1, 2)
            self.assertRaises(ValueError, blobro.readinto, buf, 10000, 2)
            self.assertRaises(OverflowError, blobro.readinto, buf, 1, l("45236748972389749283"))
            check_unchanged()
        # get a read error
        blobro.seek(0)
        self.db.cursor().execute("update foo set x=x'112233445566' where rowid=?", (rowidri,))
        self.assertRaises(apsw.AbortError, blobro.readinto, buf)
        # should fail with buffer being a string
        self.assertRaises(TypeError, blobro.readinto, "abcd", 1, 1)
        self.assertRaises(TypeError, blobro.readinto, u("abcd"), 1, 1)
        if not py3:
            self.assertRaises(TypeError, blobro.readinto, buffer("abcd"), 1, 1)
        # write tests
        blobrw=self.db.blobopen("main", "foo", "x", rowid, True)
        self.assertEqual(blobrw.length(), 98765)
        blobrw.write(b("abcd"))
        blobrw.seek(0, 0)
        self.assertEqual(blobrw.read(4), BYTES("abcd"))
        blobrw.write(b("efg"))
        blobrw.seek(0, 0)
        self.assertEqual(blobrw.read(7), BYTES("abcdefg"))
        blobrw.seek(50, 0)
        blobrw.write(b("hijkl"))
        blobrw.seek(-98765, 2)
        self.assertEqual(blobrw.read(55), BYTES("abcdefg"+r"\x00"*43+"hijkl"))
        self.assertRaises(TypeError, blobrw.write, 12)
        self.assertRaises(TypeError, blobrw.write)
        self.assertRaises(TypeError, blobrw.write, u("foo"))
        # try to go beyond end
        self.assertRaises(ValueError, blobrw.write, b(" "*100000))
        self.assertRaises(TypeError, blobrw.close, "elephant")
        # coverage
        blobro=self.db.blobopen("main", "foo", "x", rowid, False)
        self.assertRaises(apsw.ReadOnlyError, blobro.write, b("abcd"))
        blobro.close(True)
        self.db.cursor().execute("insert into foo(_rowid_, x) values(99, 1)")
        blobro=self.db.blobopen("main", "foo", "x", rowid, False)
        self.assertRaises(TypeError, blobro.reopen)
        self.assertRaises(TypeError, blobro.reopen, "banana")
        self.assertRaises(OverflowError, blobro.reopen, l("45236748972389749283"))
        first=blobro.read(2)
        # check position is reset
        blobro.reopen(rowid)
        self.assertEqual(blobro.tell(), 0)
        self.assertEqual(first, blobro.read(2))
        # invalid reopen
        self.assertRaises(apsw.SQLError, blobro.reopen, l("0x1ffffffff"))
        blobro.close()

    def testBlobReadError(self):
        "Ensure blob read errors are handled well"
        cur=self.db.cursor()
        cur.execute("create table ioerror (x, blob)")
        cur.execute("insert into ioerror (rowid,x,blob) values (2,3,x'deadbeef')")
        blob=self.db.blobopen("main", "ioerror", "blob", 2, False)
        blob.read(1)
        # Do a write which causes the blob to become invalid
        cur.execute("update ioerror set blob='fsdfdsfasd' where x=3")
        try:
            blob.read(1)
            1/0
        except:
            klass,value=sys.exc_info()[:2]
            self.assertTrue(klass is apsw.AbortError)

    def testURIFilenames(self):
        assertRaises=self.assertRaises
        assertEqual=self.assertEqual

        class TVFS(apsw.VFS):
            def __init__(self):
                apsw.VFS.__init__(self, "uritest", "")

            def xOpen(self, name, flags):
                assert isinstance(name, apsw.URIFilename)
                # The various errors
                assertRaises(TypeError, name.uri_parameter)
                assertRaises(TypeError, name.uri_parameter, 2)
                assertRaises(TypeError, name.uri_int)
                assertRaises(TypeError, name.uri_int, 7)
                assertRaises(TypeError, name.uri_int, 7, 7)
                assertRaises(TypeError, name.uri_int, 7, 7, 7)
                if sys.version_info>(2,4):
                    # 2.3 does systemerror instead of typeerror
                    assertRaises(TypeError, name.uri_int, "seven", "seven")
                assertRaises(TypeError, name.uri_boolean, "seven")
                assertRaises(TypeError, name.uri_boolean, "seven", "seven")
                assertRaises(TypeError, name.uri_boolean, "seven", None)
                # Check values
                assert name.filename().endswith("testdb2")
                assertEqual(name.uri_parameter("notexist"), None)
                assertEqual(name.uri_parameter("foo"), "1&2=3")
                assertEqual(name.uri_int("foo", -7), -7)
                assertEqual(name.uri_int("bar", -7), 43242342)
                # https://sqlite.org/src/info/5f41597f7c
                # assertEqual(name.uri_boolean("foo", False), False)
                assertEqual(name.uri_boolean("bam", False), True)
                assertEqual(name.uri_boolean("baz", True), False)
                1/0

        testvfs=TVFS()
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, ZeroDivisionError,
                          apsw.Connection, "file:testdb2?foo=1%262%3D3&bar=43242342&bam=true&baz=fal%73%65",
                          flags=apsw.SQLITE_OPEN_READWRITE | apsw.SQLITE_OPEN_CREATE | apsw.SQLITE_OPEN_URI,
                          vfs="uritest")

    def testVFSWithWAL(self):
        "Verify VFS using WAL where possible"
        apsw.connection_hooks.append(lambda c: c.cursor().execute("pragma journal_mode=WAL"))
        try:
            self.testVFS()
        finally:
            apsw.connection_hooks.pop()

    def testVFS(self):
        "Verify VFS functionality"
        global testtimeout

        # Check basic functionality and inheritance - make an obfuscated provider
        # obfusvfs code
        def encryptme(data):
            # An "encryption" scheme in honour of MAPI and SQL server passwords
            if not data:
                return data
            if py3:
                return bytes([x^0xa5 for x in data])
            return "".join([chr(ord(x)^0xa5) for x in data])

        class ObfuscatedVFSFile(apsw.VFSFile):
            def __init__(self, inheritfromvfsname, filename, flags):
                apsw.VFSFile.__init__(self, inheritfromvfsname, filename, flags)

            def xRead(self, amount, offset):
                return encryptme(super(ObfuscatedVFSFile, self).xRead(amount, offset))

            def xWrite(self, data, offset):
                super(ObfuscatedVFSFile, self).xWrite(encryptme(data), offset)

        class ObfuscatedVFS(apsw.VFS):
            def __init__(self, vfsname="obfu", basevfs=""):
                self.vfsname=vfsname
                self.basevfs=basevfs
                apsw.VFS.__init__(self, self.vfsname, self.basevfs)

            def xOpen(self, name, flags):
                return ObfuscatedVFSFile(self.basevfs, name, flags)

        vfs=ObfuscatedVFS()
        query="create table foo(x,y); insert into foo values(1,2); insert into foo values(3,4)"
        self.db.cursor().execute(query)
        db2=apsw.Connection(TESTFILEPREFIX+"testdb2", vfs=vfs.vfsname)
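The test's obfuscation works because XOR with a fixed byte is an involution: applying the same transform to already-transformed data restores the original, so one routine can serve as both the read and write hook. A minimal standalone sketch of that property (Python 3, independent of apsw; `xor_obfuscate` is an illustrative name, not part of the test suite):

```python
def xor_obfuscate(data):
    # XOR every byte with 0xa5, in the spirit of the test's encryptme.
    # Applying the function twice restores the original bytes, which is
    # why the same routine can back both xRead and xWrite.
    return bytes(b ^ 0xa5 for b in data)

plain = b"SQLite format 3\x00"
obfuscated = xor_obfuscate(plain)
assert obfuscated != plain
assert xor_obfuscate(obfuscated) == plain  # involution: f(f(x)) == x
```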
        db2.cursor().execute(query)
        db2.close()
        waswal=self.db.cursor().execute("pragma journal_mode").fetchall()[0][0]=="wal"
        if waswal:
            self.db.cursor().execute("pragma journal_mode=delete").fetchall()
        self.db.close() # flush

        # check the two databases are the same (modulo the XOR)
        orig=read_whole_file(TESTFILEPREFIX+"testdb", "rb")
        obfu=read_whole_file(TESTFILEPREFIX+"testdb2", "rb")
        self.assertEqual(len(orig), len(obfu))
        self.assertNotEqual(orig, obfu)
        # wal isn't exactly the same
        if waswal:
            def compare(one, two):
                self.assertEqual(one[:27], two[:27])
                self.assertEqual(one[96:], two[96:])
        else:
            compare=self.assertEqual
        compare(orig, encryptme(obfu))

        # helper routines
        self.assertRaises(TypeError, apsw.exceptionfor, "three")
        self.assertRaises(ValueError, apsw.exceptionfor, 8764324)
        self.assertRaises(OverflowError, apsw.exceptionfor, l("0xffffffffffffffff10"))

        # test raw file object
        f=ObfuscatedVFSFile("", os.path.abspath(TESTFILEPREFIX+"testdb"), [apsw.SQLITE_OPEN_MAIN_DB|apsw.SQLITE_OPEN_READONLY, 0])
        del f # check closes
        f=ObfuscatedVFSFile("", os.path.abspath(TESTFILEPREFIX+"testdb"), [apsw.SQLITE_OPEN_MAIN_DB|apsw.SQLITE_OPEN_READONLY, 0])
        data=f.xRead(len(obfu), 0) # will encrypt it
        compare(obfu, data)
        f.xClose()
        f.xClose()
        f2=apsw.VFSFile("", os.path.abspath(TESTFILEPREFIX+"testdb"), [apsw.SQLITE_OPEN_MAIN_DB|apsw.SQLITE_OPEN_READONLY, 0])
        del f2
        f2=apsw.VFSFile("", os.path.abspath(TESTFILEPREFIX+"testdb2"), [apsw.SQLITE_OPEN_MAIN_DB|apsw.SQLITE_OPEN_READONLY, 0])
        data=f2.xRead(len(obfu), 0)
        self.assertEqual(obfu, data)
        f2.xClose()
        f2.xClose()
        # cleanup so it doesn't interfere with following code using the same file
        del f
        del f2
        db2.close()
        del db2
        vfs.unregister()
        gc.collect()

        ### Detailed vfs testing

        # xRandomness is tested first.  The method is called once after sqlite initializes
        # and only the default vfs is called.  Consequently we have a helper test method
        # but it is only available when using testfixtures and the amalgamation
        self.db=None
        gc.collect()

        defvfs=apsw.vfsnames()[0] # we want to inherit from this one

        def testrand():
            gc.collect()
            apsw.test_reset_rng()
            vfs=RandomVFS()
            db=apsw.Connection(TESTFILEPREFIX+"testdb")
            next(db.cursor().execute("select randomblob(10)"))

        class RandomVFSUpper(apsw.VFS):
            def __init__(self):
                apsw.VFS.__init__(self, "randomupper", defvfs)

            def xRandomness1(self, n):
                return b(r"\xaa\xbb")

        class RandomVFS(apsw.VFS):
            def __init__(self):
                apsw.VFS.__init__(self, "random", "randomupper", makedefault=True)

            def xRandomness1(self, bad, number, of, arguments):
                1/0

            def xRandomness2(self, n):
                1/0

            def xRandomness3(self, n):
                return b("abcd")

            def xRandomness4(self, n):
                return u("abcd")

            def xRandomness5(self, n):
                return b("a")*(2*n)

            def xRandomness6(self, n):
                return None

            def xRandomness7(self, n):
                return 3

            def xRandomness99(self, n):
                return super(RandomVFS, self).xRandomness(n+2049)

        if hasattr(apsw, 'test_reset_rng'):
            vfsupper=RandomVFSUpper()
            vfs=RandomVFS()
            self.assertRaises(TypeError, vfs.xRandomness, "jksdhfsd")
            self.assertRaises(TypeError, vfs.xRandomness, 3, 3)
            self.assertRaises(ValueError, vfs.xRandomness, -88)
            RandomVFS.xRandomness=RandomVFS.xRandomness1
            self.assertRaisesUnraisable(TypeError, testrand)
            RandomVFS.xRandomness=RandomVFS.xRandomness2
            self.assertRaisesUnraisable(ZeroDivisionError, testrand)
            RandomVFS.xRandomness=RandomVFS.xRandomness3
            testrand() # shouldn't have problems
            RandomVFS.xRandomness=RandomVFS.xRandomness4
            self.assertRaisesUnraisable(TypeError, testrand)
            RandomVFS.xRandomness=RandomVFS.xRandomness5
            testrand() # shouldn't have problems
            RandomVFS.xRandomness=RandomVFS.xRandomness6
            testrand() # shouldn't have problems
            RandomVFS.xRandomness=RandomVFS.xRandomness7
            self.assertRaisesUnraisable(TypeError, testrand)
            RandomVFS.xRandomness=RandomVFS.xRandomness99
            testrand() # shouldn't have problems
            vfsupper.xRandomness=vfsupper.xRandomness1
            testrand() # coverage
            vfsupper.unregister()
            vfs.unregister()

        class ErrorVFS(apsw.VFS):
            # A vfs that returns errors for all methods
            def __init__(self):
                apsw.VFS.__init__(self, "errorvfs", "")

            def errorme(self, *args):
                raise apsw.exceptionfor(apsw.SQLITE_IOERR)

        class TestVFS(apsw.VFS):
            def init1(self):
                super(TestVFS, self).__init__("apswtest")

            def init99(self, name="apswtest", base=""):
                super(TestVFS, self).__init__(name, base)

            def xDelete1(self, name, syncdir):
                super(TestVFS, self).xDelete(".", False)

            def xDelete2(self, bad, number, of, args):
                1/0

            def xDelete3(self, name, syncdir):
                1/0

            def xDelete4(self, name, syncdir):
                super(TestVFS, self).xDelete("bad", "arguments")

            def xDelete99(self, name, syncdir):
                assert(type(name)==type(u("")))
                assert(type(syncdir)==type(1))
                return super(TestVFS, self).xDelete(name, syncdir)

            def xAccess1(self, bad, number, of, args):
                1/0

            def xAccess2(self, name, flags):
                1/0

            def xAccess3(self, name, flags):
                return super(TestVFS, self).xAccess("bad", "arguments")

            def xAccess4(self, name, flags):
                return (3,)

            def xAccess99(self, name, flags):
                assert(type(name)==type(u("")))
                assert(type(flags)==type(1))
                return super(TestVFS, self).xAccess(name, flags)

            def xFullPathname1(self, bad, number, of, args):
                1/0

            def xFullPathname2(self, name):
                1/0

            def xFullPathname3(self, name):
                return super(TestVFS, self).xFullPathname("bad", "args")

            def xFullPathname4(self, name):
                # parameter is larger than default buffer sizes used by sqlite
                return super(TestVFS, self).xFullPathname(name*10000)

            def xFullPathname5(self, name):
                # result is larger than default buffer sizes used by sqlite
                return "a"*10000

            def xFullPathname6(self, name):
                return 12 # bad return type

            def xFullPathname99(self, name):
                assert(type(name)==type(u("")))
                return super(TestVFS, self).xFullPathname(name)

            def xOpen1(self, bad, number, of, arguments):
                1/0

            def xOpen2(self, name, flags):
                super(TestVFS, self).xOpen(name, 3)
                1/0

            def xOpen3(self, name, flags):
                v=super(TestVFS, self).xOpen(name, flags)
                flags.append(v)
                return v

            def xOpen4(self, name, flags):
                return None

            def xOpen99(self, name, flags):
                assert(isinstance(name, apsw.URIFilename) or name is None or type(name)==type(u("")))
                assert(type(flags)==type([]))
                assert(len(flags)==2)
                assert(type(flags[0]) in (int,long))
                assert(type(flags[1]) in (int,long))
                return super(TestVFS, self).xOpen(name, flags)

            def xOpen100(self, name, flags):
                return TestFile(name, flags)

            def xDlOpen1(self, bad, number, of, arguments):
                1/0

            def xDlOpen2(self, name):
                1/0

            def xDlOpen3(self, name):
                return -1

            def xDlOpen4(self, name):
                return "fred"

            def xDlOpen5(self, name):
                return super(TestVFS, self).xDlOpen(3)

            # python 3 only test
            def xDlOpen6(self, name):
                return super(TestVFS, self).xDlOpen(b("abcd")) # bad string type

            def xDlOpen7(self, name):
                return l("0xffffffffffffffff10")

            def xDlOpen99(self, name):
                assert(type(name)==type(u("")))
                res=super(TestVFS, self).xDlOpen(name)
                if ctypes:
                    try:
                        cres=ctypes.cdll.LoadLibrary(name)._handle
                    except:
                        cres=0
                    assert(res==cres)
                return res

            def xDlSym1(self, bad, number, of, arguments):
                1/0

            def xDlSym2(self, handle, name):
                1/0

            def xDlSym3(self, handle, name):
                return "fred"

            def xDlSym4(self, handle, name):
                super(TestVFS, self).xDlSym(3,3)

            def xDlSym5(self, handle, name):
                return super(TestVFS, self).xDlSym(handle, b("abcd"))

            def xDlSym6(self, handle, name):
                return l("0xffffffffffffffff10")

            def xDlSym99(self, handle, name):
                assert(type(handle) in (int,long))
                assert(type(name)==type(u("")))
                res=super(TestVFS, self).xDlSym(handle, name)
                # windows has funky issues I don't want to deal with here
                if not iswindows and _ctypes:
                    assert(_ctypes.dlsym(handle, name)==res)
                return res

            def xDlClose1(self, bad, number, of, arguments):
                1/0

            def xDlClose2(self, handle):
                1/0

            def xDlClose3(self, handle):
                return super(TestVFS, self).xDlClose("three")

            def xDlClose99(self, handle):
                assert(type(handle) in (int,long))
                super(TestVFS, self).xDlClose(handle)

            def xDlError1(self, bad, number, of, arguments):
                1/0

            def xDlError2(self):
                1/0

            def xDlError3(self):
                return super(TestVFS, self).xDlError("three")

            def xDlError4(self):
                return 3

            def xDlError5(self):
                return b("abcd")

            def xDlError6(self):
                return None

            def xDlError99(self):
                return super(TestVFS, self).xDlError()

            def xSleep1(self, bad, number, of, arguments):
                1/0

            def xSleep2(self, microseconds):
                1/0

            def xSleep3(self, microseconds):
                return super(TestVFS, self).xSleep("three")

            def xSleep4(self, microseconds):
                return "three"

            def xSleep5(self, microseconds):
                return l("0xffffffff0")

            def xSleep6(self, microseconds):
                return l("0xffffffffeeeeeeee0")

            def xSleep99(self, microseconds):
                assert(type(microseconds) in (int, long))
                return super(TestVFS, self).xSleep(microseconds)

            def xCurrentTime1(self, bad, args):
                1/0

            def xCurrentTime2(self):
                1/0

            def xCurrentTime3(self):
                return super(TestVFS, self).xCurrentTime("three")

            def xCurrentTime4(self):
                return "three"

            def xCurrentTime5(self):
                return math.exp(math.pi)*26000

            def xCurrentTime99(self):
                return super(TestVFS, self).xCurrentTime()

            def xGetLastError1(self, bad, args):
                1/0

            def xGetLastError2(self):
                1/0

            def xGetLastError3(self):
                return super(TestVFS, self).xGetLastError("three")

            def xGetLastError4(self):
                return 3

            def xGetLastError5(self):
                return "a"*1500

            def xGetLastError99(self):
                return super(TestVFS, self).xGetLastError()

            def xNextSystemCall1(self, bad, args):
                1/0

            def xNextSystemCall2(self, name):
                return 3

            def xNextSystemCall3(self, name):
                return "foo\xf3"

            def xNextSystemCall4(self, name):
                1/0

            def xNextSystemCall99(self, name):
                return super(TestVFS, self).xNextSystemCall(name)

            def xGetSystemCall1(self, bad, args):
                1/0

            def xGetSystemCall2(self, name):
                1/0

            def xGetSystemCall3(self, name):
                return "fred"

            def xGetSystemCall4(self, name):
                return 3.7

            def xGetSystemCall99(self, name):
                return super(TestVFS, self).xGetSystemCall(name)

            def xSetSystemCall1(self, bad, args, args3):
                1/0

            def xSetSystemCall2(self, name, ptr):
                1/0

            def xSetSystemCall3(self, name, ptr):
                raise apsw.NotFoundError()

            def xSetSystemCall99(self, name, ptr):
                return super(TestVFS, self).xSetSystemCall(name, ptr)

        class TestFile(apsw.VFSFile):
            def init1(self, name, flags):
                super(TestFile, self).__init__("bogus", "arguments")

            def init2(self, name, flags):
                super(TestFile, self).__init__("bogus", 3, 4)

            def init3(self, name, flags):
                super(TestFile, self).__init__("bogus", "4", 4)

            def init4(self, name, flags):
                super(TestFile, self).__init__("bogus", "4", [4,4,4,4])

            def init5(self, name, flags):
                super(TestFile, self).__init__("", name, [l("0xffffffffeeeeeeee0"), l("0xffffffffeeeeeeee0")])

            def init6(self, name, flags):
                super(TestFile, self).__init__("", name, [l("0xffffffffa"), 0]) # 64 bit int vs long overflow

            def init7(self, name, flags):
                super(TestFile, self).__init__("", name, (6, 7))

            def init8(self, name, flags):
                super(TestFile, self).__init__("bogus", name, flags)

            def init9(self, name, flags):
                super(TestFile, self).__init__("", name, (6, "six"))

            def init10(self, name, flags):
                class badlist(list):
                    # only allows setting an element once
                    def __init__(self, *args):
                        super(badlist, self).__init__(args)
                        self.frozen=False

                    def __setitem__(self, key, value):
                        if self.frozen:
                            raise ValueError("container is frozen")
                        super(badlist, self).__setitem__(key, value)
                        self.frozen=True

                super(TestFile, self).__init__("", name, badlist(flags[0], flags[1]))

            def init99(self, name, flags):
                super(TestFile, self).__init__("", name, flags)

            def xRead1(self, bad, number, of, arguments):
                1/0

            def xRead2(self, amount, offset):
                1/0

            def xRead3(self, amount, offset):
                return 3

            def xRead4(self, amount, offset):
                return u("a")*amount

            def xRead5(self, amount, offset):
                return super(TestFile, self).xRead(amount-1, offset)

            def xRead99(self, amount, offset):
                return super(TestFile, self).xRead(amount, offset)

            def xWrite1(self, bad, number, of, arguments):
                1/0

            def xWrite2(self, buffy, offset):
                1/0

            def xWrite99(self, buffy, offset):
                return super(TestFile, self).xWrite(buffy, offset)

            def xUnlock1(self, bad, number, of, arguments):
                1/0

            def xUnlock2(self, level):
                1/0

            def xUnlock99(self, level):
                return super(TestFile, self).xUnlock(level)

            def xLock1(self, bad, number, of, arguments):
                1/0

            def xLock2(self, level):
                1/0

            def xLock99(self, level):
                return super(TestFile, self).xLock(level)

            def xTruncate1(self, bad, number, of, arguments):
                1/0

            def xTruncate2(self, size):
                1/0

            def xTruncate99(self, size):
                return super(TestFile, self).xTruncate(size)

            def xSync1(self, bad, number, of, arguments):
                1/0

            def xSync2(self, flags):
                1/0

            def xSync99(self, flags):
                return super(TestFile, self).xSync(flags)

            def xSectorSize1(self, bad, number, of, args):
                1/0

            def xSectorSize2(self):
                1/0

            def xSectorSize3(self):
                return "three"

            def xSectorSize4(self):
                return l("0xffffffffeeeeeeee0")

            def xSectorSize99(self):
                return super(TestFile, self).xSectorSize()

            def xDeviceCharacteristics1(self, bad, number, of, args):
                1/0

            def xDeviceCharacteristics2(self):
                1/0

            def xDeviceCharacteristics3(self):
                return "three"

            def xDeviceCharacteristics4(self):
                return l("0xffffffffeeeeeeee0")

            def xDeviceCharacteristics99(self):
                return super(TestFile, self).xDeviceCharacteristics()

            def xFileSize1(self, bad, number, of, args):
                1/0

            def xFileSize2(self):
                1/0

            def xFileSize3(self):
                return "three"

            def xFileSize4(self):
                return l("0xffffffffeeeeeeee0")

            def xFileSize99(self):
                res=super(TestFile, self).xFileSize()
                if res<100000:
                    return int(res)
                return res

            def xCheckReservedLock1(self, bad, number, of, args):
                1/0

            def xCheckReservedLock2(self):
                1/0

            def xCheckReservedLock3(self):
                return "three"

            def xCheckReservedLock4(self):
                return l("0xffffffffeeeeeeee0")

            def xCheckReservedLock99(self):
                return super(TestFile, self).xCheckReservedLock()

            def xFileControl1(self, bad, number, of, args):
                1/0

            def xFileControl2(self, op, ptr):
                1/0

            def xFileControl3(self, op, ptr):
                return "banana"

            def xFileControl99(self, op, ptr):
                if op==1027:
                    assert(ptr==1027)
                elif op==1028:
                    if ctypes:
                        assert(True is ctypes.py_object.from_address(ptr).value)
                else:
                    return super(TestFile, self).xFileControl(op, ptr)
                return True

        # check initialization
        self.assertRaises(TypeError, apsw.VFS, "3", 3)
        self.assertRaises(ValueError, apsw.VFS, "never", "klgfkljdfsljgklfjdsglkdfs")
        self.assertTrue("never" not in apsw.vfsnames())
        TestVFS.__init__=TestVFS.init1
        vfs=TestVFS()
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, apsw.VFSNotImplementedError, testdb)
        del vfs
        gc.collect()
        TestVFS.__init__=TestVFS.init99
        vfs=TestVFS()

        # Should work without any overridden methods
        testdb()

        ## xDelete
        self.assertRaises(TypeError, vfs.xDelete, "bogus", "arguments")
        TestVFS.xDelete=TestVFS.xDelete1
        err=[apsw.IOError, apsw.IOError][iswindows]
        self.assertRaises(err, self.assertRaisesUnraisable, err, testdb)
        TestVFS.xDelete=TestVFS.xDelete2
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestVFS.xDelete=TestVFS.xDelete3
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, ZeroDivisionError, testdb)
        TestVFS.xDelete=TestVFS.xDelete4
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestVFS.xDelete=TestVFS.xDelete99
        testdb()

        ## xAccess
        self.assertRaises(TypeError, vfs.xAccess, "bogus", "arguments")
        TestVFS.xAccess=TestVFS.xAccess1
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestVFS.xAccess=TestVFS.xAccess2
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, ZeroDivisionError, testdb)
        TestVFS.xAccess=TestVFS.xAccess3
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestVFS.xAccess=TestVFS.xAccess4
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestVFS.xAccess=TestVFS.xAccess99
        if iswindows:
            self.assertRaises(apsw.IOError, vfs.xAccess, u("0)

        ## xSetSystemCall
        fallback=apsw.VFS("fallback", base="") # undo any damage we do
        try:
            self.assertRaises(TypeError, vfs.xSetSystemCall)
            self.assertRaises(TypeError, vfs.xSetSystemCall, 3, 4)
            self.assertRaises((TypeError, ValueError), vfs.xSetSystemCall, "a\0b", 4)
            self.assertRaises(TypeError, vfs.xSetSystemCall, "none", 3.7)
            realopen=vfs.xGetSystemCall("open")
            self.assertEqual(False, vfs.xSetSystemCall("doesn't exist", 0))
            self.assertEqual(True, vfs.xSetSystemCall("open", realopen+1))
            self.assertEqual(realopen+1, vfs.xGetSystemCall("open"))
            self.assertEqual(True, vfs.xSetSystemCall("open", realopen))
            TestVFS.xSetSystemCall=TestVFS.xSetSystemCall1
            self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, vfs2.xSetSystemCall, "open", realopen)
            TestVFS.xSetSystemCall=TestVFS.xSetSystemCall2
            self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, ZeroDivisionError, vfs2.xSetSystemCall, "open", realopen)
            TestVFS.xSetSystemCall=TestVFS.xSetSystemCall3
            self.assertEqual(False, vfs2.xSetSystemCall("doesn't exist", 0))
            TestVFS.xSetSystemCall=TestVFS.xSetSystemCall99
            self.assertEqual(True, vfs2.xSetSystemCall("open", realopen))
        finally:
            # undocumented - this resets all calls to their defaults
            fallback.xSetSystemCall(None, 0)
            fallback.unregister()

        ##
        ## VFS file testing
        ##

        ## init
        TestVFS.xOpen=TestVFS.xOpen100
        TestFile.__init__=TestFile.init1
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestFile.__init__=TestFile.init2
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestFile.__init__=TestFile.init3
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestFile.__init__=TestFile.init4
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestFile.__init__=TestFile.init5
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, OverflowError, testdb)
        TestFile.__init__=TestFile.init6
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, OverflowError, testdb)
        TestFile.__init__=TestFile.init7
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestFile.__init__=TestFile.init8
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, ValueError, testdb)
        TestFile.__init__=TestFile.init9
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestFile.__init__=TestFile.init10
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, ValueError, testdb)
        TestFile.__init__=TestFile.init99
        testdb() # should work just fine

        # cause an open failure
        self.assertRaises(apsw.CantOpenError, TestFile, ".",
                          [apsw.SQLITE_OPEN_MAIN_DB|apsw.SQLITE_OPEN_CREATE|apsw.SQLITE_OPEN_READWRITE, 0])

        ## xRead
        t=TestFile(os.path.abspath(TESTFILEPREFIX+"testfile"),
                   [apsw.SQLITE_OPEN_MAIN_DB|apsw.SQLITE_OPEN_CREATE|apsw.SQLITE_OPEN_READWRITE, 0])
        self.assertRaises(TypeError, t.xRead, "three", "four")
        self.assertRaises(OverflowError, t.xRead, l("0xffffffffeeeeeeee0"), 1)
        self.assertRaises(OverflowError, t.xRead, 1, l("0xffffffffeeeeeeee0"))
        TestFile.xRead=TestFile.xRead1
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestFile.xRead=TestFile.xRead2
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, ZeroDivisionError, testdb)
        TestFile.xRead=TestFile.xRead3
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestFile.xRead=TestFile.xRead4
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestFile.xRead=TestFile.xRead5
        self.assertRaises(apsw.IOError, testdb)
        TestFile.xRead=TestFile.xRead99
        testdb()

        ## xWrite
        if sys.version_info>=(2,4):
            # py2.3 has bug
            self.assertRaises(TypeError, t.xWrite, "three", "four")
            self.assertRaises(OverflowError, t.xWrite, "three", l("0xffffffffeeeeeeee0"))
            self.assertRaises(TypeError, t.xWrite, u("foo"), 0)
        TestFile.xWrite=TestFile.xWrite1
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestFile.xWrite=TestFile.xWrite2
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, ZeroDivisionError, testdb)
        TestFile.xWrite=TestFile.xWrite99
        testdb()

        ## xUnlock
        self.assertRaises(TypeError, t.xUnlock, "three")
        self.assertRaises(OverflowError, t.xUnlock, l("0xffffffffeeeeeeee0"))
        # doesn't care about nonsensical levels - assert fails in debug build
        # t.xUnlock(-1)
        # python 3.4 garbage collection changes mean these get called during cleanup which
        # causes confusing messages in wal mode
        if not (apsw.connection_hooks and list(sys.version_info)>=[3,4]):
            TestFile.xUnlock=TestFile.xUnlock1
            self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
            TestFile.xUnlock=TestFile.xUnlock2
            self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, ZeroDivisionError, testdb)
        TestFile.xUnlock=TestFile.xUnlock99
        testdb()

        ## xLock
        self.assertRaises(TypeError, t.xLock, "three")
        self.assertRaises(OverflowError, t.xLock, l("0xffffffffeeeeeeee0"))
        # doesn't care about nonsensical levels - assert fails in debug build
        # t.xLock(0xffffff)
        TestFile.xLock=TestFile.xLock1
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestFile.xLock=TestFile.xLock2
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, ZeroDivisionError, testdb)
        TestFile.xLock=TestFile.xLock99
        testdb()

        ## xTruncate
        if sys.version_info>=(2,4):
            # work around py2.3 bug
            self.assertRaises(TypeError, t.xTruncate, "three")
            self.assertRaises(OverflowError, t.xTruncate, l("0xffffffffeeeeeeee0"))
            if not iswindows:
                # windows is happy to truncate to -77 bytes
                # see https://sqlite.org/cvstrac/tktview?tn=3415
                self.assertRaises(apsw.IOError, t.xTruncate, -77)
        TestFile.xTruncate=TestFile.xTruncate1
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestFile.xTruncate=TestFile.xTruncate2
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, ZeroDivisionError, testdb)
        TestFile.xTruncate=TestFile.xTruncate99
        testdb()

        ## xSync
        if sys.version_info>=(2,4):
            # work around py2.3 bug
            self.assertRaises(TypeError, t.xSync, "three")
            self.assertRaises(OverflowError, t.xSync, l("0xffffffffeeeeeeee0"))
        TestFile.xSync=TestFile.xSync1
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestFile.xSync=TestFile.xSync2
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, ZeroDivisionError, testdb)
        TestFile.xSync=TestFile.xSync99
        testdb()

        ## xSectorSize
        self.assertRaises(TypeError, t.xSectorSize, 3)
        TestFile.xSectorSize=TestFile.xSectorSize1
        self.assertRaisesUnraisable(TypeError, testdb)
        TestFile.xSectorSize=TestFile.xSectorSize2
        self.assertRaisesUnraisable(ZeroDivisionError, testdb)
        TestFile.xSectorSize=TestFile.xSectorSize3
        self.assertRaisesUnraisable(TypeError, testdb)
        TestFile.xSectorSize=TestFile.xSectorSize4
        self.assertRaisesUnraisable(OverflowError, testdb)
        TestFile.xSectorSize=TestFile.xSectorSize99
        testdb()

        ## xDeviceCharacteristics
        self.assertRaises(TypeError, t.xDeviceCharacteristics, 3)
        TestFile.xDeviceCharacteristics=TestFile.xDeviceCharacteristics1
        self.assertRaisesUnraisable(TypeError, testdb)
        TestFile.xDeviceCharacteristics=TestFile.xDeviceCharacteristics2
        self.assertRaisesUnraisable(ZeroDivisionError, testdb)
        TestFile.xDeviceCharacteristics=TestFile.xDeviceCharacteristics3
        self.assertRaisesUnraisable(TypeError, testdb)
        TestFile.xDeviceCharacteristics=TestFile.xDeviceCharacteristics4
        self.assertRaisesUnraisable(OverflowError, testdb)
        TestFile.xDeviceCharacteristics=TestFile.xDeviceCharacteristics99
        testdb()

        ## xFileSize
        self.assertRaises(TypeError, t.xFileSize, 3)
        TestFile.xFileSize=TestFile.xFileSize1
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestFile.xFileSize=TestFile.xFileSize2
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, ZeroDivisionError, testdb)
        TestFile.xFileSize=TestFile.xFileSize3
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb)
        TestFile.xFileSize=TestFile.xFileSize4
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, OverflowError, testdb)
        TestFile.xFileSize=TestFile.xFileSize99
        testdb()

        ## xCheckReservedLock
        self.assertRaises(TypeError, t.xCheckReservedLock, 8)
        if not iswindows:
            # we don't do checkreservedlock
test on windows as the # various files that need to be copied and finagled behind # the scenes are locked TestFile.xCheckReservedLock=TestFile.xCheckReservedLock1 self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb) TestFile.xCheckReservedLock=TestFile.xCheckReservedLock2 self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, ZeroDivisionError, testdb) TestFile.xCheckReservedLock=TestFile.xCheckReservedLock3 self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, testdb) TestFile.xCheckReservedLock=TestFile.xCheckReservedLock4 self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, OverflowError, testdb) TestFile.xCheckReservedLock=TestFile.xCheckReservedLock99 db=testdb() ## xFileControl self.assertRaises(TypeError, t.xFileControl, "three", "four") self.assertRaises(OverflowError, t.xFileControl, 10, l("0xffffffffeeeeeeee0")) self.assertRaises(TypeError, t.xFileControl, 10, "three") self.assertEqual(t.xFileControl(2000, 3000), False) fc1=testdb(TESTFILEPREFIX+"testdb", closedb=False).filecontrol fc2=testdb(TESTFILEPREFIX+"testdb2", closedb=False).filecontrol TestFile.xFileControl=TestFile.xFileControl1 self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, fc1, "main", 1027, 1027) TestFile.xFileControl=TestFile.xFileControl2 self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, ZeroDivisionError, fc2, "main", 1027, 1027) TestFile.xFileControl=TestFile.xFileControl3 self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, fc2, "main", 1027, 1027) TestFile.xFileControl=TestFile.xFileControl99 del fc1 del fc2 # these should work testdb(closedb=False).filecontrol("main", 1027, 1027) if ctypes: objwrap=ctypes.py_object(True) testdb(closedb=False).filecontrol("main", 1028, ctypes.addressof(objwrap)) # for coverage class VFSx(apsw.VFS): def __init__(self): apsw.VFS.__init__(self, "filecontrol", "apswtest") vfs2=VFSx() testdb(vfsname="filecontrol", 
closedb=False).filecontrol("main", 1027, 1027) del vfs2 ## xClose t.xClose() # make sure there is no problem closing twice t.xClose() del t gc.collect() t=apsw.VFSFile("", os.path.abspath(TESTFILEPREFIX+"testfile2"), [apsw.SQLITE_OPEN_MAIN_DB|apsw.SQLITE_OPEN_CREATE|apsw.SQLITE_OPEN_READWRITE,0]) t.xClose() # check all functions detect closed file for n in dir(t): if n not in ('xClose', 'excepthook') and not n.startswith("__"): self.assertRaises(apsw.VFSFileClosedError, getattr(t, n)) def testWith(self): "Context manager functionality" # we need py 2.5 for with stuff if sys.version_info<(2,5): return prefix="\n" if sys.version_info<(2,6): prefix="from __future__ import with_statement\n" def run(s, **kwargs): # ensure indentation matches first line s=s.strip("\n") s=s.rstrip(" ") s=(len(s)-len(s.lstrip(" ")))*" "+prefix+s l=locals().copy() l["self"]=self for k,v in kwargs.items(): l[k]=v # now remove indentation s=s.split("\n") p=len(s[0])-len(s[0].lstrip(" ")) s="\n".join([s[p:] for s in s]) execwrapper(s, globals(), l) # Does it work? 
# the autocommit tests are to make sure we are not in a transaction self.assertEqual(True, self.db.getautocommit()) self.assertTableNotExists("foo1") run("with self.db as db: db.cursor().execute('create table foo1(x)')") self.assertTableExists("foo1") self.assertEqual(True, self.db.getautocommit()) # with an error self.assertEqual(True, self.db.getautocommit()) self.assertTableNotExists("foo2") try: run(""" with self.db as db: db.cursor().execute('create table foo2(x)') 1/0 """) except ZeroDivisionError: pass self.assertTableNotExists("foo2") self.assertEqual(True, self.db.getautocommit()) # nested - simple - success run(""" with self.db as db: self.assertEqual(False, self.db.getautocommit()) db.cursor().execute('create table foo2(x)') with db as db2: self.assertEqual(False, self.db.getautocommit()) db.cursor().execute('create table foo3(x)') with db2 as db3: self.assertEqual(False, self.db.getautocommit()) db.cursor().execute('create table foo4(x)') """) self.assertEqual(True, self.db.getautocommit()) self.assertTableExists("foo2") self.assertTableExists("foo3") self.assertTableExists("foo4") # nested - simple - failure try: run(""" self.db.cursor().execute('begin; create table foo5(x)') with self.db as db: self.assertEqual(False, self.db.getautocommit()) db.cursor().execute('create table foo6(x)') with db as db2: self.assertEqual(False, self.db.getautocommit()) db.cursor().execute('create table foo7(x)') with db2 as db3: self.assertEqual(False, self.db.getautocommit()) db.cursor().execute('create table foo8(x)') 1/0 """) except ZeroDivisionError: pass self.assertEqual(False, self.db.getautocommit()) self.db.cursor().execute("commit") self.assertEqual(True, self.db.getautocommit()) self.assertTableExists("foo5") self.assertTableNotExists("foo6") self.assertTableNotExists("foo7") self.assertTableNotExists("foo8") # improve coverage and various corner cases self.db.__enter__() self.assertRaises(TypeError, self.db.__exit__, 1) for i in range(10): 
self.db.__exit__(None, None, None) # make an exit fail self.db.__enter__() self.db.cursor().execute("commit") # deliberately futz with the outstanding transaction self.assertRaises(apsw.SQLError, self.db.__exit__, None, None, None) self.db.__exit__(None, None, None) # extra exit should be harmless # exectracing traces=[] def et(con, sql, bindings): if con==self.db: traces.append(sql) return True self.db.setexectrace(et) try: run(""" with self.db as db: db.cursor().execute('create table foo2(x)') """) except apsw.SQLError: # table already exists so we should get an error pass # check we saw the right things in the traces self.assertTrue(len(traces)==3) for s in traces: self.assertTrue("SAVEPOINT" in s.upper()) def et(*args): return BadIsTrue() self.db.setexectrace(et) try: run(""" with self.db as db: db.cursor().execute('create table etfoo2(x)') """) except ZeroDivisionError: pass self.assertTableNotExists("etfoo2") def et(*args): return False self.db.setexectrace(et) try: run(""" with self.db as db: db.cursor().execute('create table etfoo2(x)') """) except apsw.ExecTraceAbort: pass self.db.setexectrace(None) self.assertTableNotExists("etfoo2") # test blobs with context manager self.db.cursor().execute("create table blobby(x); insert into blobby values(x'aabbccddee')") rowid=self.db.last_insert_rowid() blob=self.db.blobopen('main', 'blobby', 'x', rowid, 0) run(""" with blob as b: self.assertEqual(id(blob), id(b)) b.read(1) """, blob=blob) # blob gives ValueError if you do operations on closed blob self.assertRaises(ValueError, blob.read) self.db.cursor().execute("insert into blobby values(x'aabbccddee')") rowid=self.db.last_insert_rowid() blob=self.db.blobopen('main', 'blobby', 'x', rowid, 0) try: run(""" with blob as b: self.assertEqual(id(blob), id(b)) 1/0 b.read(1) """, blob=blob) except ZeroDivisionError: # blob gives ValueError if you do operating on closed blob self.assertRaises(ValueError, blob.read) # backup code if not hasattr(self.db, "backup"): return # 
experimental db2=apsw.Connection(":memory:") run(""" with db2.backup("main", self.db, "main") as b: while not b.done: b.step(1) self.assertEqual(b.done, True) self.assertDbIdentical(self.db, db2) """, db2=db2) def fillWithRandomStuff(self, db, seed=1): "Fills a database with random content" db.cursor().execute("create table a(x)") for i in range(1,11): db.cursor().execute("insert into a values(?)", ("aaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"*i*8192,)) def assertDbIdentical(self, db1, db2): "Ensures databases are identical" c1=db1.cursor() c2=db2.cursor() self.assertEqual( list(c1.execute("select * from sqlite_master order by _ROWID_")), list(c2.execute("select * from sqlite_master order by _ROWID_")) ) for table in db1.cursor().execute("select name from sqlite_master where type='table'"): table=table[0] self.assertEqual( list(c1.execute("select * from [%s] order by _ROWID_" % (table,))), list(c2.execute("select * from [%s] order by _ROWID_" % (table,))), ) for table in db2.cursor().execute("select name from sqlite_master where type='table'"): table=table[0] self.assertEqual( list(c1.execute("select * from [%s] order by _ROWID_" % (table,))), list(c2.execute("select * from [%s] order by _ROWID_" % (table,))), ) def testBackup(self): "Verify hot backup functionality" # bad calls self.assertRaises(TypeError, self.db.backup, "main", "main", "main", "main") self.assertRaises(TypeError, self.db.backup, "main", 3, "main") db2=apsw.Connection(":memory:") db2.close() self.assertRaises(ValueError, self.db.backup, "main", db2, "main") # can't copy self self.assertRaises(ValueError, self.db.backup, "main", self.db, "it doesn't care what is here") # try and get inuse error dbt=apsw.Connection(":memory:") vals={"stop": False, "raised": False} def wt(): # worker thread spins grabbing and releasing inuse flag while not vals["stop"]: try: dbt.setbusytimeout(100) except apsw.ThreadingViolationError: # this means main thread grabbed inuse first pass t=ThreadRunner(wt) 
t.start() b4=time.time() # try to get inuse error for 30 seconds try: try: while not vals["stop"] and time.time()-b4<30: self.db.backup("main", dbt, "main").close() except apsw.ThreadingViolationError: vals["stop"]=True vals["raised"]=True finally: vals["stop"]=True # standard usage db2=apsw.Connection(":memory:") self.fillWithRandomStuff(db2) b=self.db.backup("main", db2, "main") self.assertRaises(TypeError, b.step, '3') try: b.step(1) self.assertTrue(b.remaining > 0) self.assertTrue(b.pagecount > 0) while not b.done: b.step(1) finally: b.finish() self.assertDbIdentical(self.db, db2) self.db.cursor().execute("drop table a") # don't clean up b=self.db.backup("main", db2, "main") try: while not b.done: b.step(1) finally: b.finish() self.assertDbIdentical(self.db, db2) del b del db2 fname=self.db.filename self.db=None gc.collect() # check dest db can't be used for anything else db2=apsw.Connection(":memory:") c=db2.cursor() c.execute("create table x(y); insert into x values(3); select * from x") self.db=apsw.Connection(":memory:") self.fillWithRandomStuff(self.db) self.assertRaises(apsw.ThreadingViolationError, db2.backup, "main", self.db, "main") c.close() b=db2.backup("main", self.db, "main") # double check cursor really is dead self.assertRaises(apsw.CursorClosedError, c.execute, "select 3") # with the backup object existing, all operations on db2 should fail self.assertRaises(apsw.ThreadingViolationError, db2.cursor) # finish and then trying to step b.finish() self.assertRaises(apsw.ConnectionClosedError, b.step) # make step and finish fail with locked error self.db=apsw.Connection(fname) def lockerr(): db2=apsw.Connection(self.db.filename) db2.cursor().execute("begin exclusive") db3=apsw.Connection(self.db.filename) b=db3.backup("main", self.db, "main") # if step gets busy then so does finish, but step has to be called at least once self.assertRaises(apsw.BusyError, b.step) return b b=lockerr() b.close(True) del b b=lockerr() self.assertRaises(apsw.BusyError, 
b.close, False) del b b=lockerr() self.assertRaises(apsw.BusyError, b.finish) b.finish() # should be ok the second time del b b=lockerr() self.assertRaises(TypeError, b.close, "3") self.assertRaises(apsw.BusyError, b.close, False) b.close() # should also be ok del b def f(): b=lockerr() del b gc.collect() self.assertRaisesUnraisable(apsw.BusyError, f) # coverage b=lockerr() self.assertRaises(TypeError, b.__exit__, 3) self.assertRaises(apsw.BusyError, b.__exit__, None, None, None) b.__exit__(None, None, None) def testLog(self): "Verifies logging functions" self.assertRaises(TypeError, apsw.log) self.assertRaises(TypeError, apsw.log, 1) self.assertRaises(TypeError, apsw.log, 1, 2) self.assertRaises(TypeError, apsw.log, 1, 2, 3) self.assertRaises(TypeError, apsw.log, 1, None) apsw.log(apsw.SQLITE_MISUSE, "Hello world") # nothing should happen self.assertRaises(TypeError, apsw.config, apsw.SQLITE_CONFIG_LOG, 2) self.assertRaises(TypeError, apsw.config, apsw.SQLITE_CONFIG_LOG) # Can't change once SQLite is initialised self.assertRaises(apsw.MisuseError, apsw.config, apsw.SQLITE_CONFIG_LOG, None) # shutdown self.db=None gc.collect() apsw.shutdown() try: apsw.config(apsw.SQLITE_CONFIG_LOG, None) apsw.log(apsw.SQLITE_MISUSE, "Hello world") called=[0] def handler(code, message, called=called): called[0]+=1 self.assertEqual(code, apsw.SQLITE_MISUSE) self.assertEqual(message, u(r"a \u1234 unicode ' \ufe54 string \u0089")) apsw.config(apsw.SQLITE_CONFIG_LOG, handler) apsw.log(apsw.SQLITE_MISUSE, u(r"a \u1234 unicode ' \ufe54 string \u0089")) self.assertEqual(called[0], 1) def badhandler(code, message, called=called): called[0]+=1 self.assertEqual(code, apsw.SQLITE_NOMEM) self.assertEqual(message, u(r"Xa \u1234 unicode ' \ufe54 string \u0089")) 1/0 apsw.config(apsw.SQLITE_CONFIG_LOG, badhandler) self.assertRaisesUnraisable(ZeroDivisionError, apsw.log, apsw.SQLITE_NOMEM, u(r"Xa \u1234 unicode ' \ufe54 string \u0089")) self.assertEqual(called[0], 2) finally: gc.collect() 
            apsw.shutdown()
            apsw.config(apsw.SQLITE_CONFIG_LOG, None)

    def testReadonly(self):
        "Check Connection.readonly()"
        self.assertEqual(self.db.readonly("main"), False)
        c=apsw.Connection(TESTFILEPREFIX+"testdb", flags=apsw.SQLITE_OPEN_READONLY)
        self.assertEqual(c.readonly("main"), True)
        self.assertRaises(apsw.SQLError, self.db.readonly, "sdfsd")

        class foo:
            def __str__(self):
                1/0
        self.assertRaises(TypeError, self.db.readonly, foo())

    def testFilename(self):
        "Check connections and filenames"
        self.assertTrue(self.db.filename.endswith("testdb"))
        self.assertTrue(os.sep in self.db.filename)
        self.assertEqual(self.db.filename, self.db.db_filename("main"))
        self.db.cursor().execute("attach '%s' as foo" % (TESTFILEPREFIX+"testdb2",))
        self.assertEqual(self.db.filename+"2", self.db.db_filename("foo"))

    def testShell(self, shellclass=None):
        "Check Shell functionality"
        # The windows stdio library is hopelessly broken when used
        # with codecs. Sadly Python before version 3 tried to use it
        # and you get a dismal mess - complaints about BOMs lacking on
        # zero length files, arbitrary truncation, inability to read
        # and write from the file and far too much other nonsense. I
        # wasted enough time trying to work around it but give up. We
        # just don't test the shell in Windows before Python 3. Feel
        # free to waste your own time trying to fix this.
if iswindows and not py3: return if shellclass is None: shellclass=apsw.Shell # Python 3.3.0 crashes in csv module - fixed in 3.3.1 if sys.version_info>=(3,3,0) and sys.version_info<(3,3,1): return # I originally tried to use stringio for this but it barfs # badly over non-ascii stuff and there was no way to make all # the python versions simultaneously happy import codecs fh=[codecs.open(TESTFILEPREFIX+"test-shell-"+t, "w+b", encoding="utf8") for t in ("in", "out", "err")] kwargs={"stdin": fh[0], "stdout": fh[1], "stderr": fh[2]} def reset(): for i in fh: i.truncate(0) i.seek(0) def isempty(x): self.assertEqual(get(x), "") def isnotempty(x): self.assertNotEqual(len(get(x)), 0) def cmd(c): assert fh[0].tell()==0 fh[0].truncate(0) fh[0].seek(0) fh[0].write(c) fh[0].seek(0) def get(x): x.seek(0) return x.read() # Make one shellclass(stdin=fh[0], stdout=fh[1], stderr=fh[2]) # Lets give it some harmless sql arguments and do a sanity check s=shellclass(args=[TESTFILEPREFIX+"testdb", "create table x(x)", "insert into x values(1)"], **kwargs) self.assertTrue(s.db.filename.endswith("testdb")) # do a dump and check our table is there with its values s.command_dump([]) self.assertTrue("x(x)" in get(fh[1])) self.assertTrue("(1);" in get(fh[1])) # empty args self.assertEqual( (None, [], []), s.process_args(None)) # input description reset() write_whole_file(TESTFILEPREFIX+"test-shell-1", "wt", "syntax error") try: shellclass(args=[TESTFILEPREFIX+"testdb", ".read %stest-shell-1" % (TESTFILEPREFIX,) ], **kwargs) except shellclass.Error: self.assertTrue("test-shell-1" in get(fh[2])) isempty(fh[1]) # Check single and double dash behave the same reset() try: shellclass(args=["-init"], **kwargs) except shellclass.Error: isempty(fh[1]) self.assertTrue("specify a filename" in get(fh[2])) reset() s=shellclass(**kwargs) try: s.process_args(["--init"]) except shellclass.Error: self.assertTrue("specify a filename" in str(sys.exc_info()[1])) # various command line options # an invalid one 
reset() try: shellclass(args=["---tripledash"], **kwargs) except shellclass.Error: isempty(fh[1]) self.assertTrue("-tripledash" in get(fh[2])) self.assertTrue("--tripledash" not in get(fh[2])) ### ### --init ### reset() write_whole_file(TESTFILEPREFIX+"test-shell-1", "wt", "syntax error") try: shellclass(args=["-init", TESTFILEPREFIX+"test-shell-1"], **kwargs) except shellclass.Error: # we want to make sure it read the file isempty(fh[1]) self.assertTrue("syntax error" in get(fh[2])) reset() write_whole_file(TESTFILEPREFIX+"test-shell-1", "wt", "select 3;") shellclass(args=["-init", TESTFILEPREFIX+"test-shell-1"], **kwargs) # we want to make sure it read the file isempty(fh[2]) self.assertTrue("3" in get(fh[1])) ### ### --header ### reset() s=shellclass(**kwargs) s.process_args(["--header"]) self.assertEqual(s.header, True) s.process_args(["--noheader"]) self.assertEqual(s.header, False) s.process_args(["--noheader", "-header", "-noheader", "--header"]) self.assertEqual(s.header, True) # did they actually turn on? isempty(fh[1]) isempty(fh[2]) s.process_args([TESTFILEPREFIX+"testdb", ".mode column", "select 3"]) isempty(fh[2]) self.assertTrue("3" in get(fh[1])) self.assertTrue("----" in get(fh[1])) ### ### --echo, --bail, --interactive ### reset() for v in ("echo", "bail", "interactive"): s=shellclass(**kwargs) b4=getattr(s,v) s.process_args(["--"+v]) # setting should have changed self.assertNotEqual(b4, getattr(s,v)) isempty(fh[1]) isempty(fh[2]) ### ### --batch ### reset() s=shellclass(**kwargs) s.interactive=True s.process_args(["-batch"]) self.assertEqual(s.interactive, False) isempty(fh[1]) isempty(fh[2]) ### ### --separator, --nullvalue, --encoding ### for v,val in ("separator", "\n"), ("nullvalue", "abcdef"), ("encoding", "iso8859-1"): reset() s=shellclass(args=["--"+v, val], **kwargs) # We need the eval because shell processes backslashes in # string. 
After deliberating that is the right thing to # do if v=="encoding": self.assertEqual((val,None), getattr(s,v)) else: self.assertEqual(val, getattr(s,v)) isempty(fh[1]) isempty(fh[2]) self.assertRaises(shellclass.Error, shellclass, args=["-"+v, val, "--"+v], **kwargs) isempty(fh[1]) self.assertTrue(v in get(fh[2])) ### ### --version ### reset() self.assertRaises(SystemExit, shellclass, args=["--version"], **kwargs) # it writes to stdout isempty(fh[2]) self.assertTrue(apsw.sqlitelibversion() in get(fh[1])) ### ### --help ### reset() self.assertRaises(SystemExit, shellclass, args=["--help"], **kwargs) # it writes to stderr isempty(fh[1]) self.assertTrue("-version" in get(fh[2])) ### ### Items that correspond to output mode ### reset() shellclass(args=["--python", "--column", "--python", ":memory:", "create table x(x)", "insert into x values(x'aa')", "select * from x;"], **kwargs) isempty(fh[2]) self.assertTrue('b"' in get(fh[1]) or "buffer(" in get(fh[1])) ### ### Is process_unknown_args called as documented? ### reset() class s2(shellclass): def process_unknown_args(self, args): 1/0 self.assertRaises(ZeroDivisionError, s2, args=["--unknown"], **kwargs) isempty(fh[1]) self.assertTrue("division" in get(fh[2])) # py2 says "integer division", py3 says "int division" class s3(shellclass): def process_unknown_args(_, args): self.assertEqual(args[0:2], ["myoption", "myvalue"]) return args[2:] reset() self.assertRaises(s3.Error, s3, args=["--python", "--myoption", "myvalue", "--init"], **kwargs) isempty(fh[1]) self.assertTrue("-init" in get(fh[2])) ### ### Some test data ### reset() s=shellclass(**kwargs) s.cmdloop() def testnasty(): reset() # py 3 barfs with any codepoints above 0xffff whining # about surrogates not being allowed. If only it # implemented unicode properly. 
cmd(u("create table if not exists nastydata(x,y); insert into nastydata values(null,'xxx\\u1234\\uabcdyyy\r\n\t\"this \\is nasty\u0001stuff!');")) s.cmdloop() isempty(fh[1]) isempty(fh[2]) reset() cmd(".bail on\n.header OFF\nselect * from nastydata;") s.cmdloop() isempty(fh[2]) isnotempty(fh[1]) ### ### Output formats - column ### reset() x='a'*20 cmd(".mode column\n.header ON\nselect '"+x+"';") s.cmdloop() isempty(fh[2]) # colwidth should be 2 more sep='-'*(len(x)+2) # apostrophes quoting string in column header out=get(fh[1]).replace("\n", "") self.assertEqual(len(out.split(sep)), 2) self.assertEqual(len(out.split(sep)[0]), len(x)+2) # plus two apostrophes self.assertEqual(len(out.split(sep)[1]), len(x)+2) # same self.assertTrue(" " in out.split(sep)[1]) # space padding # make sure truncation happens reset() cmd(".width 5\nselect '"+x+"';\n") s.cmdloop() isempty(fh[2]) self.assertTrue("a"*6 not in get(fh[1])) # right justification reset() cmd(".header off\n.width -3 -3\nselect 3,3;\n.width 3 3\nselect 3,3;") s.cmdloop() isempty(fh[2]) v=get(fh[1]) self.assertTrue(v.startswith(" 3 3")) v=v.split("\n") self.assertNotEqual(v[0], v[1]) self.assertEqual(len(v[0]), len(v[1])) # explain mode doesn't truncate reset() cmd(".header on\ncreate table %s(x);create index %s_ on %s(x);\n.explain\nexplain select * from %s where x=7;\n" % (x,x,x,x)) s.cmdloop() isempty(fh[2]) self.assertTrue(x in get(fh[1])) # check null and blobs reset() nv="ThIsNuLlVaLuE" cmd(".nullvalue %s\nselect null, x'aaee';\n" % (nv,)) s.cmdloop() isempty(fh[2]) self.assertTrue(nv in get(fh[1])) # do not output blob as is self.assertTrue(u("\xaa") not in get(fh[1])) # undo explain reset() cmd(".explain OFF\n") s.cmdloop() testnasty() ### ### Output formats - csv ### reset() # mode change should reset separator cmd(".separator F\n.mode csv\nselect 3,3;\n") s.cmdloop() isempty(fh[2]) self.assertTrue("3,3" in get(fh[1])) # tab sep reset() cmd(".separator '\\t'\nselect 3,3;\n") s.cmdloop() isempty(fh[2]) 
self.assertTrue("3\t3" in get(fh[1])) # back to comma reset() cmd(".mode csv\nselect 3,3;\n") s.cmdloop() isempty(fh[2]) self.assertTrue("3,3" in get(fh[1])) # quoting reset() cmd(".header ON\nselect 3 as [\"one\"], 4 as [\t];\n") s.cmdloop() isempty(fh[2]) self.assertTrue('"""one""",\t' in get(fh[1])) # custom sep reset() cmd(".separator |\nselect 3 as [\"one\"], 4 as [\t];\n") s.cmdloop() isempty(fh[2]) self.assertTrue("3|4\n" in get(fh[1])) self.assertTrue('"one"|\t\n' in get(fh[1])) # testnasty() - csv module is pretty much broken ### ### Output formats - html ### reset() cmd(".mode html\n.header OFF\nselect 3,4;\n") s.cmdloop() isempty(fh[2]) # should be no header self.assertTrue("" not in get(fh[1]).lower()) # does it actually work? self.assertTrue("3" in get(fh[1]).lower()) # check quoting works reset() cmd(".header ON\nselect 3 as [<>&];\n") s.cmdloop() isempty(fh[2]) self.assertTrue("<>&" in get(fh[1]).lower()) # do we output rows? self.assertTrue("" in get(fh[1]).lower()) self.assertTrue("" in get(fh[1]).lower()) testnasty() ### ### Output formats - insert ### reset() all="3,3.1,'3.11',null,x'0311'" cmd(".mode insert\n.header OFF\nselect "+all+";\n") s.cmdloop() isempty(fh[2]) self.assertTrue(all in get(fh[1]).lower()) # empty values reset() all="0,0.0,'',null,x''" cmd("select "+all+";\n") s.cmdloop() isempty(fh[2]) self.assertTrue(all in get(fh[1]).lower()) # header, separator and nullvalue should make no difference save=get(fh[1]) reset() cmd(".header ON\n.separator %\n.nullvalue +\nselect "+all+";\n") s.cmdloop() isempty(fh[2]) self.assertEqual(save, get(fh[1])) # check the table name self.assertTrue(get(fh[1]).lower().startswith('insert into "table" values')) reset() cmd(".mode insert funkychicken\nselect "+all+";\n") s.cmdloop() isempty(fh[2]) self.assertTrue(get(fh[1]).lower().startswith("insert into funkychicken values")) testnasty() ### ### Output formats - json ### reset() all="3,2.2,'string',null,x'0311'" cmd(".mode json\n.header ON\n select 
"+all+";") s.cmdloop() isempty(fh[2]) v=get(fh[1]).strip() v=v[:-1] # remove trailing comma havejson=False try: import json havejson=True except ImportError: try: import simplejson as json havejson=True except ImportError: pass if havejson: out=json.loads(v) self.assertEqual(out, { "3": 3, "2.2": 2.2, "'string'": "string", "null": None, "x'0311'": "AxE=" }) # a regular table reset() cmd("create table jsontest([int], [float], [string], [null], [blob]);insert into jsontest values("+all+");select * from jsontest;") s.cmdloop() isempty(fh[2]) v=get(fh[1]).strip()[:-1] if havejson: out=json.loads(v) self.assertEqual(out, { "int": 3, "float": 2.2, "string": "string", "null": None, "blob": "AxE=" }) testnasty() ### ### Output formats - line ### reset() cmd(".header OFF\n.nullvalue *\n.mode line\nselect 3 as a, null as b, 0.0 as c, 'a' as d, x'aa' as e;\n") s.cmdloop() isempty(fh[2]) out=get(fh[1]).replace(" ","") self.assertTrue("a=3\n" in out) self.assertTrue("b=*\n" in out) self.assertTrue("c=0.0\n" in out) self.assertTrue("d=a\n" in out) self.assertTrue("e=\n" in out) self.assertEqual(7, len(out.split("\n"))) # one for each col plus two trailing newlines # header should make no difference reset() cmd(".header ON\n.nullvalue *\n.mode line\nselect 3 as a, null as b, 0.0 as c, 'a' as d, x'aa' as e;\n") s.cmdloop() isempty(fh[2]) self.assertEqual(out, get(fh[1]).replace(" ","")) # wide column name reset() ln="kjsfhgjksfdjkgfhkjsdlafgjkhsdkjahfkjdsajfhsdja"*12 cmd("select 3 as %s, 3 as %s1;" % (ln,ln)) s.cmdloop() isempty(fh[2]) self.assertEqual(get(fh[1]), " %s = 3\n%s1 = 3\n\n" % (ln,ln)) testnasty() ### ### Output formats - list ### reset() cmd(".header off\n.mode list\n.nullvalue (\n.separator &\nselect 3 as a, null as b, 0.0 as c, 'a' as d, x'aa' as e;\n") s.cmdloop() isempty(fh[2]) self.assertEqual(get(fh[1]), '3&(&0.0&a&\n') reset() # header on cmd(".header on\n.mode list\n.nullvalue (\n.separator &\nselect 3 as a, null as b, 0.0 as c, 'a' as d, x'aa' as e;\n") 
        s.cmdloop()
        isempty(fh[2])
        self.assertTrue(get(fh[1]).startswith("a&b&c&d&e\n"))
        testnasty()

        ###
        ### Output formats - python
        ###
        reset()
        cmd(".header off\n.mode python\nselect 3 as a, null as b, 0.0 as c, 'a' as d, x'aa44bb' as e;\n")
        s.cmdloop()
        isempty(fh[2])
        v=eval(get(fh[1]))
        self.assertEqual(len(v), 1) # 1 tuple
        self.assertEqual(v, ( (3, None, 0.0, 'a', b(r"\xaa\x44\xbb")), ))
        reset()
        cmd(".header on\n.mode python\nselect 3 as a, null as b, 0.0 as c, 'a' as d, x'aa44bb' as e;\n")
        s.cmdloop()
        isempty(fh[2])
        v=eval("("+get(fh[1])+")") # need parentheses otherwise indent rules apply
        self.assertEqual(len(v), 2) # headers and row
        self.assertEqual(v, ( ("a", "b", "c", "d", "e"), (3, None, 0.0, 'a', b(r"\xaa\x44\xbb")), ))
        testnasty()

        ###
        ### Output formats - TCL
        ###
        reset()
        cmd(".header off\n.mode tcl\n.separator -\n.nullvalue ?\nselect 3 as a, null as b, 0.0 as c, 'a' as d, x'aa44bb' as e;\n")
        s.cmdloop()
        isempty(fh[2])
        self.assertEqual(get(fh[1]), '"3"-"?"-"0.0"-"a"-"\\xAAD\\xBB"\n')
        reset()
        cmd(".header on\nselect 3 as a, null as b, 0.0 as c, 'a' as d, x'aa44bb' as e;\n")
        s.cmdloop()
        isempty(fh[2])
        self.assertTrue('"a"-"b"-"c"-"d"-"e"' in get(fh[1]))
        testnasty()

        # What happens if db cannot be opened?
        s.process_args(args=["/"])
        reset()
        cmd("select * from sqlite_master;\n.bail on\nselect 3;\n")
        self.assertRaises(apsw.CantOpenError, s.cmdloop)
        isempty(fh[1])
        self.assertTrue("unable to open database file" in get(fh[2]))

        # echo testing - multiple statements
        s.process_args([":memory:"]) # back to memory db
        reset()
        cmd(".bail off\n.echo on\nselect 3;\n")
        s.cmdloop()
        self.assertTrue("select 3;\n" in get(fh[2]))
        # multiline
        reset()
        cmd("select 3;select 4;\n")
        s.cmdloop()
        self.assertTrue("select 3;\n" in get(fh[2]))
        self.assertTrue("select 4;\n" in get(fh[2]))
        # multiline with error
        reset()
        cmd("select 3;select error;select 4;\n")
        s.cmdloop()
        self.assertTrue("select 3;\n" in get(fh[2]))
        # apsw can't tell where erroneous command ends so all processing on the line stops
        self.assertTrue("select error;select 4;\n" in get(fh[2]))

        # is timing info output correctly?
        reset()
        timersupported=False
        try:
            cmd(".bail on\n.echo off\n.timer on\n.timer off\n")
            s.cmdloop()
            timersupported=True
        except s.Error:
            pass

        if timersupported:
            reset()
            # create something that should take some time to execute
            s.db.cursor().execute("create table xyz(x); begin;")
            s.db.cursor().executemany("insert into xyz values(?)", randomintegers(4000))
            s.db.cursor().execute("end")
            reset()
            # this takes .6 seconds on my machine so we should
            # definitely have non-zero timing information
            cmd(".timer ON\nselect max(x),min(x),max(x+x),min(x-x) from xyz union select x+max(x),x-min(x),3,4 from xyz union select x,x,x,x from xyz union select x,x,x,x from xyz;select 3;\n")
            s.cmdloop()
            isnotempty(fh[1])
            isnotempty(fh[2])
            reset()
            cmd(".bail off\n.timer off")
            s.cmdloop()

        # command handling
        reset()
        cmd(".nonexist 'unclosed")
        s.cmdloop()
        isempty(fh[1])
        self.assertTrue("no closing quotation" in get(fh[2]).lower())
        reset()
        cmd(".notexist ")
        s.cmdloop()
        isempty(fh[1])
        self.assertTrue('Unknown command "notexist"' in get(fh[2]))

        ###
        ### Commands - backup and restore
        ###
        reset()
        cmd(".backup with too many parameters")
        s.cmdloop()
        isempty(fh[1])
        isnotempty(fh[2])
        reset()
        cmd(".backup ") # too few
        s.cmdloop()
        isempty(fh[1])
        isnotempty(fh[2])
        reset()
        cmd(".restore with too many parameters")
        s.cmdloop()
        isempty(fh[1])
        isnotempty(fh[2])
        reset()
        cmd(".restore ") # too few
        s.cmdloop()
        isempty(fh[1])
        isnotempty(fh[2])
        # bogus filenames
        for i in ('/', '"main" /'):
            for c in (".backup ", ".restore "):
                reset()
                cmd(c+i)
                s.cmdloop()
                isempty(fh[1])
                isnotempty(fh[2])

        def randomtable(cur, dbname=None):
            name=list("abcdefghijklmnopqrstuvwxtz")
            random.shuffle(name)
            name="".join(name)
            fullname=name
            if dbname:
                fullname=dbname+"."+fullname
            cur.execute("begin;create table %s(x)" % (fullname,))
            cur.executemany("insert into %s values(?)" % (fullname,), randomintegers(400))
            cur.execute("end")
            return name

        # Straight forward backup. The gc.collect() is needed because
        # non-gc cursors hanging around will prevent the backup from
        # happening.
        n=randomtable(s.db.cursor())
        contents=s.db.cursor().execute("select * from "+n).fetchall()
        reset()
        cmd(".backup %stestdb2" % (TESTFILEPREFIX,))
        gc.collect()
        s.cmdloop()
        isempty(fh[1])
        isempty(fh[2])
        reset()
        cmd("drop table "+n+";")
        s.cmdloop()
        isempty(fh[1])
        isempty(fh[2])
        self.assertTrue(os.path.isfile("%stestdb2" % (TESTFILEPREFIX,)))
        reset()
        cmd(".restore %stestdb2" % (TESTFILEPREFIX,))
        gc.collect()
        s.cmdloop()
        isempty(fh[1])
        isempty(fh[2])
        newcontents=s.db.cursor().execute("select * from "+n).fetchall()
        # no guarantee of result order
        contents.sort()
        newcontents.sort()
        self.assertEqual(contents, newcontents)

        # do they pay attention to the dbname
        s.db.cursor().execute("attach ':memory:' as memdb")
        n=randomtable(s.db.cursor(), "memdb")
        contents=s.db.cursor().execute("select * from memdb."+n).fetchall()
        reset()
        gc.collect()
        cmd(".backup memdb %stestdb2" % (TESTFILEPREFIX,))
        s.cmdloop()
        isempty(fh[1])
        isempty(fh[2])
        s.db.cursor().execute("detach memdb; attach ':memory:' as memdb2")
        reset()
        gc.collect()
        cmd(".restore memdb2 %stestdb2" % (TESTFILEPREFIX,))
        s.cmdloop()
        isempty(fh[1])
isempty(fh[2]) newcontents=s.db.cursor().execute("select * from memdb2."+n).fetchall() # no guarantee of result order contents.sort() newcontents.sort() self.assertEqual(contents, newcontents) ### ### Commands - bail ### reset() cmd(".bail") s.cmdloop() isempty(fh[1]) isnotempty(fh[2]) reset() cmd(".bail on\n.mode list\nselect 3;\nselect error;\nselect 4;\n") self.assertRaises(apsw.Error, s.cmdloop) self.assertTrue("3" in get(fh[1])) self.assertTrue("4" not in get(fh[1])) reset() cmd(".bail oFf\n.mode list\nselect 3;\nselect error;\nselect 4;\n") s.cmdloop() self.assertTrue("3" in get(fh[1])) self.assertTrue("4" in get(fh[1])) ### ### Commands - databases ### reset() cmd(".databases foo") s.cmdloop() isempty(fh[1]) isnotempty(fh[2]) # clean things up s=shellclass(**kwargs) reset() cmd(".header oFF\n.databases") s.cmdloop() isempty(fh[2]) for i in "main", "name", "file": self.assertTrue(i in get(fh[1])) reset() cmd("attach '%stestdb' as quack;\n.databases" % (TESTFILEPREFIX,)) s.cmdloop() isempty(fh[2]) for i in "main", "name", "file", "testdb", "quack": self.assertTrue(i in get(fh[1])) reset() cmd("detach quack;") s.cmdloop() isempty(fh[2]) for i in "testdb", "quack": self.assertTrue(i not in get(fh[1])) ### ### Commands - dump ### reset() cmd("create table foo(x); create table bar(x);\n.dump foox") s.cmdloop() isempty(fh[1]) isempty(fh[2]) reset() cmd(".dump foo") s.cmdloop() isempty(fh[2]) for i in "foo", "create table", "begin", "commit": self.assertTrue(i in get(fh[1]).lower()) self.assertTrue("bar" not in get(fh[1]).lower()) # can we do virtual tables? 
        reset()
        if self.checkOptionalExtension("fts3", "create virtual table foo using fts3()"):
            reset()
            cmd("CREATE virtual TaBlE fts3 using fts3(colA FRED , colB JOHN DOE);\n"
                "insert into fts3 values('one', 'two');insert into fts3 values('onee', 'two');\n"
                "insert into fts3 values('one', 'two two two');")
            s.cmdloop()
            isempty(fh[1])
            isempty(fh[2])
            reset()
            cmd(".dump")
            s.cmdloop()
            isempty(fh[2])
            v=get(fh[1])
            for i in "pragma writable_schema", "create virtual table fts3", "cola fred", "colb john doe":
                self.assertTrue(i in v.lower())

        # analyze
        reset()
        cmd("drop table bar;create table bar(x unique,y);create index barf on bar(x,y);create index barff on bar(y);insert into bar values(3,4);\nanalyze;\n.dump bar")
        s.cmdloop()
        isempty(fh[2])
        v=get(fh[1])
        for i in "analyze bar", "create index barf":
            self.assertTrue(i in v.lower())
        self.assertTrue("autoindex" not in v.lower()) # created by sqlite to do unique constraint
        self.assertTrue("sqlite_sequence" not in v.lower()) # not autoincrements
        # repeat but all tables
        reset()
        cmd(".dump")
        s.cmdloop()
        isempty(fh[2])
        v=get(fh[1])
        for i in "analyze bar", "create index barf":
            self.assertTrue(i in v.lower())
        self.assertTrue("autoindex" not in v.lower()) # created by sqlite to do unique constraint

        # foreign keys
        reset()
        cmd("create table xxx(z references bar(x));\n.dump")
        s.cmdloop()
        isempty(fh[2])
        v=get(fh[1])
        for i in "foreign_keys", "references":
            self.assertTrue(i in v.lower())

        # views
        reset()
        cmd("create view noddy as select * from foo;\n.dump noddy")
        s.cmdloop()
        isempty(fh[2])
        v=get(fh[1])
        for i in "drop view", "create view noddy":
            self.assertTrue(i in v.lower())

        # issue82 - view ordering
        reset()
        cmd("create table issue82(x);create view issue82_2 as select * from issue82; create view issue82_1 as select count(*) from issue82_2;\n.dump issue82%")
        s.cmdloop()
        isempty(fh[2])
        v=get(fh[1])
        s.db.cursor().execute("drop table issue82 ; drop view issue82_1 ; drop view issue82_2")
        reset()
        cmd(v)
        s.cmdloop()
        isempty(fh[1])
        isempty(fh[2])

        # autoincrement
        reset()
        cmd("create table abc(x INTEGER PRIMARY KEY AUTOINCREMENT); insert into abc values(null);insert into abc values(null);\n.dump")
        s.cmdloop()
        isempty(fh[2])
        v=get(fh[1])
        for i in "sqlite_sequence", "'abc', 2":
            self.assertTrue(i in v.lower())

        # user version
        self.assertTrue("user_version" not in v)
        reset()
        cmd("pragma user_version=27;\n.dump")
        s.cmdloop()
        isempty(fh[2])
        v=get(fh[1])
        self.assertTrue("pragma user_version=27;" in v)
        s.db.cursor().execute("pragma user_version=0")

        # some nasty stuff
        reset()
        cmd(u("create table nastydata(x,y); insert into nastydata values(null,'xxx\\u1234\\uabcd\\U00012345yyy\r\n\t\"this \\is nasty\u0001stuff!');"
              'create table "table"([except] int); create table [](""); create table [using]("&");'))
        s.cmdloop()
        isempty(fh[1])
        isempty(fh[2])
        reset()
        cmd(".dump")
        s.cmdloop()
        isempty(fh[2])
        v=get(fh[1])
        self.assertTrue("nasty" in v)
        self.assertTrue("stuff" in v)

        # sanity check the dumps
        reset()
        cmd(v) # should run just fine
        s.cmdloop()
        isempty(fh[1])
        isempty(fh[2])
        # drop all the tables we made to do another dump and compare with before
        for t in "abc", "bar", "foo", "fts3", "xxx", "noddy", "sqlite_sequence", "sqlite_stat1", \
                "issue82", "issue82_1", "issue82_2":
            reset()
            cmd("drop table %s;drop view %s;" % (t,t))
            s.cmdloop() # there will be errors which we ignore
        reset()
        cmd(v)
        s.cmdloop()
        isempty(fh[1])
        isempty(fh[2])
        # another dump
        reset()
        cmd(".dump")
        s.cmdloop()
        isempty(fh[2])
        v2=get(fh[1])
        v=re.sub("-- Date:.*", "", v)
        v2=re.sub("-- Date:.*", "", v2)
        self.assertEqual(v, v2)

        # clean database
        reset()
        s=shellclass(args=[':memory:'], **kwargs)
        cmd(v)
        s.cmdloop()
        isempty(fh[1])
        isempty(fh[2])
        reset()
        cmd(v2+"\n.dump")
        s.cmdloop()
        isempty(fh[2])
        v3=get(fh[1])
        v3=re.sub("-- Date:.*", "", v3)
        self.assertEqual(v, v3)

        # trailing comments
        reset()
        cmd("""create table xxblah(b -- ff
) -- xx
; create index xxfoo on xxblah(b -- ff
) -- xx
; create view xxbar as select * from xxblah -- ff
; insert into xxblah values(3);
.dump
""")
        s.cmdloop()
        isempty(fh[2])
        dump=get(fh[1])
        reset()
        cmd("drop table xxblah; drop view xxbar;")
        s.cmdloop()
        isempty(fh[2])
        isempty(fh[1])
        reset()
        cmd(dump)
        s.cmdloop()
        isempty(fh[2])
        isempty(fh[1])
        self.assertEqual(s.db.cursor().execute("select * from xxbar").fetchall(), [(3,)])
        # check index
        reset()
        cmd("drop index xxfoo;")
        s.cmdloop()
        isempty(fh[1])
        isempty(fh[2])

        ###
        ### Command - echo
        ###
        reset()
        cmd(".echo")
        s.cmdloop()
        isempty(fh[1])
        isnotempty(fh[2])
        reset()
        cmd(".echo bananas")
        s.cmdloop()
        isempty(fh[1])
        isnotempty(fh[2])
        reset()
        cmd(".echo on on")
        s.cmdloop()
        isempty(fh[1])
        isnotempty(fh[2])
        reset()
        cmd(".echo off\nselect 3;")
        s.cmdloop()
        self.assertTrue("3" in get(fh[1]))
        self.assertTrue("select 3" not in get(fh[2]))
        reset()
        cmd(".echo on\nselect 3;")
        s.cmdloop()
        self.assertTrue("3" in get(fh[1]))
        self.assertTrue("select 3" in get(fh[2]))
        # more complex testing is done earlier including multiple statements and errors

        ###
        ### Command - encoding
        ###
        for i in ".encoding one two", ".encoding", ".encoding utf8 another":
            reset()
            cmd(i)
            s.cmdloop()
            isempty(fh[1])
            isnotempty(fh[2])
        reset()
        cmd(".encoding this-does-not-exist")
        s.cmdloop()
        isempty(fh[1])
        self.assertTrue("no known encoding" in get(fh[2]).lower())

        # use iso8859-1 to make sure data is read correctly - it
        # differs from utf8
        us=u(r"unitestdata \xaa\x89 34")
        write_whole_file(TESTFILEPREFIX+"test-shell-1", "w", "iso8859-1", "insert into enctest values('%s');\n" % (us,))
        gc.collect()
        reset()
        cmd(".encoding iso8859-1\ncreate table enctest(x);\n.echo on\n.read %stest-shell-1\n.echo off" % (TESTFILEPREFIX,))
        s.cmdloop()
        self.assertEqual(s.db.cursor().execute("select * from enctest").fetchall()[0][0], us)
        self.assertTrue(us in get(fh[2]))
        reset()
        write_whole_file(TESTFILEPREFIX+"test-shell-1", "w", "iso8859-1", us+"\n")
        cmd("drop table enctest;create table enctest(x);\n.import %stest-shell-1 enctest" % (TESTFILEPREFIX,))
        s.cmdloop()
        isempty(fh[2])
        isempty(fh[1])
        self.assertEqual(s.db.cursor().execute("select * from enctest").fetchall()[0][0], us)
        reset()
        cmd(".output %stest-shell-1\n.mode list\nselect * from enctest;" % (TESTFILEPREFIX,))
        s.cmdloop()
        self.assertEqual(read_whole_file(TESTFILEPREFIX+"test-shell-1", "rb").strip(), # skip eol
                         us.encode("iso8859-1"))
        reset()
        cmd(".output stdout\nselect '%s';\n" % (us,))
        s.cmdloop()
        isempty(fh[2])
        self.assertTrue(us in get(fh[1]))

        ### encoding specifying error handling - see issue 108
        reset()
        cmd(".encoding utf8:replace")
        s.cmdloop()
        isempty(fh[1])
        isempty(fh[2])
        # non-existent error
        reset()
        cmd(".encoding cp437:blahblah")
        s.cmdloop()
        isempty(fh[1])
        isnotempty(fh[2])
        self.assertTrue("blahblah" in get(fh[2]))
        # check replace works
        reset()
        us=u(r"\N{BLACK STAR}8\N{WHITE STAR}")
        write_whole_file(TESTFILEPREFIX+"test-shell-1", "w", "utf8", "insert into enctest values('%s');" % (us,))
        cmd(".encoding utf8\n.read %stest-shell-1\n.encoding cp437:replace\n.output %stest-shell-1\nselect * from enctest;\n.encoding utf8\n.output stdout" % (TESTFILEPREFIX, TESTFILEPREFIX))
        s.cmdloop()
        isempty(fh[2])
        isempty(fh[1])
        self.assertTrue("?8?" in read_whole_file(TESTFILEPREFIX+"test-shell-1", "r", "cp437"))

        ###
        ### Command - exceptions
        ###
        reset()
        cmd("syntax error;")
        s.cmdloop()
        isempty(fh[1])
        isnotempty(fh[2])
        self.assertTrue(len(get(fh[2]).split("\n"))<5)
        reset()
        cmd(".exceptions on\nsyntax error;")
        s.cmdloop()
        isempty(fh[1])
        isnotempty(fh[2])
        self.assertTrue(len(get(fh[2]).split("\n"))>10)
        self.assertTrue("sql = " in get(fh[2]))
        # deliberately leave exceptions on

        ###
        ### Command - exit & quit
        ###
        for i in ".exit", ".quit":
            reset()
            cmd(i)
            self.assertRaises(SystemExit, s.cmdloop)
            isempty(fh[1])
            isempty(fh[2])
            reset()
            cmd(i+" jjgflk")
            s.cmdloop()
            isempty(fh[1])
            isnotempty(fh[2])

        ###
        ### Command explain and header are tested above
        ###
        # pass

        ###
        ### Command find
        ###
        reset()
        cmd(".find one two three")
        s.cmdloop()
        isempty(fh[1])
        isnotempty(fh[2])
        reset()
        cmd("create table findtest([x\" x],y); insert into findtest values(3, 'xx3'); insert into findtest values(34, 'abcd');")
        s.cmdloop()
        isempty(fh[1])
        isempty(fh[2])
        reset()
        cmd(".find 3")
        s.cmdloop()
        isempty(fh[2])
        for text,present in ( ("findtest", True), ("xx3", True), ("34", False) ):
            if present:
                self.assertTrue(text in get(fh[1]))
            else:
                self.assertTrue(text not in get(fh[1]))
        reset()
        cmd(".find does-not-exist")
        s.cmdloop()
        isempty(fh[1])
        isempty(fh[2])
        reset()
        cmd(".find ab_d")
        s.cmdloop()
        isempty(fh[2])
        for text,present in ( ("findtest", True), ("xx3", False), ("34", True) ):
            if present:
                self.assertTrue(text in get(fh[1]))
            else:
                self.assertTrue(text not in get(fh[1]))
        reset()
        cmd(".find 3 table-not-exist")
        s.cmdloop()
        isempty(fh[1])
        isempty(fh[2])

        ###
        ### Command help
        ###
        reset()
        cmd(".help\n.help all\n.help import backup")
        s.cmdloop()
        isempty(fh[1])
        for i in ".import", "Reads data from the file":
            self.assertTrue(i in get(fh[2]))
        reset()
        cmd(".help backup notexist import")
        s.cmdloop()
        isempty(fh[1])
        for i in "Copies the contents", "No such command":
            self.assertTrue(i in get(fh[2]))
        # screw up terminal width
        origtw=s._terminal_width
        def tw(*args):
            return 7
        s._terminal_width=tw
        reset()
        cmd(".bail on\n.help all\n.bail off")
        s.cmdloop()
        isempty(fh[1])
        isnotempty(fh[2])

        ###
        ### Command - import
        ###
        # check it fundamentally works
        reset()
        cmd(".encoding utf16\ncreate table imptest(x real, y char);\n"
            "insert into imptest values(3.1, 'xabc');\n"
            "insert into imptest values(3.2, 'xabfff\"ffffc');\n"
            ".output %stest-shell-1\n.mode csv\nselect * from imptest;\n"
            ".output stdout" % (TESTFILEPREFIX,))
        s.cmdloop()
        isempty(fh[1])
        isempty(fh[2])
        # make sure encoding took
        if sys.version_info>=(3,0):
            self.assertTrue(b("xab") not in read_whole_file(TESTFILEPREFIX+"test-shell-1", "rb"))
        else:
            self.assertTrue("xab" not in read_whole_file(TESTFILEPREFIX+"test-shell-1", "rb"))
        data=s.db.cursor().execute("select * from imptest; delete from imptest").fetchall()
        self.assertEqual(2, len(data))
        reset()
        cmd(".import %stest-shell-1 imptest" % (TESTFILEPREFIX,))
        s.cmdloop()
        isempty(fh[1])
        isempty(fh[2])
        newdata=s.db.cursor().execute("select * from imptest; drop table imptest").fetchall()
        data.sort()
        newdata.sort()
        self.assertEqual(data, newdata)
        # error handling
        for i in ".import", ".import one", ".import one two three", ".import nosuchfile nosuchtable", ".import nosuchfile sqlite_master":
            reset()
            cmd(i)
            s.cmdloop()
            isempty(fh[1])
            isnotempty(fh[2])
        # wrong number of columns
        reset()
        cmd("create table imptest(x,y);\n.mode tabs\n.output %stest-shell-1\nselect 3,4;select 5,6;select 7,8,9;" % (TESTFILEPREFIX,))
        s.cmdloop()
        isempty(fh[1])
        isempty(fh[2])
        reset()
        cmd(".output stdout\n.import %stest-shell-1 imptest" % (TESTFILEPREFIX,))
        s.cmdloop()
        isempty(fh[1])
        isnotempty(fh[2])
        reset()
        # check it was done in a transaction and aborted
        self.assertEqual(0, s.db.cursor().execute("select count(*) from imptest").fetchall()[0][0])

        ###
        ### Command - autoimport
        ###
        # errors
        for i in ".autoimport", ".autoimport 1 2 3", ".autoimport nosuchfile", ".autoimport %stest-shell-1 sqlite_master" % (TESTFILEPREFIX,):
            reset()
            cmd(i)
            s.cmdloop()
            isempty(fh[1])
isnotempty(fh[2]) # check correct detection with each type of separator and that types are not mangled c=s.db.cursor() for row in ( ('a,b', '21/1/20', '00'), (' ', '1/1/20', 10), ('a"b', '1/1/01', '00'), ('+40', '01123', '2010 100 15'), ('2010//10//13', '2010/10/13 12', 2), ('2010/13/13 12:13', '13/13/2010 12:93', '13/2010/13'), ("+3", " 3", 3), ("03.03", "03.03.20", "03"), ( (None, 2, 5.5), (None, 4, 99), ), ): c.execute("""drop table if exists aitest ; create table aitest("x y", ["], "3d")""") if isinstance(row[0], tuple): f=c.executemany else: f=c.execute f("insert into aitest values(?,?,?)", row) fname=TESTFILEPREFIX+"test-shell-1" for sep in "\t", "|", ",", "X": reset() cmd(".mode csv\n.headers on\n.output %stest-shell-1\n.separator \"%s\"\nselect * from aitest;\n.output stdout\n.separator X\ndrop table if exists \"test-shell-1\";\n.autoimport %stest-shell-1" % (TESTFILEPREFIX, sep, TESTFILEPREFIX)) s.cmdloop() isnotempty(fh[1]) isempty(fh[2]) self.assertTablesEqual(s.db, "aitest", s.db, "test-shell-1") # Change encoding back to sensible reset() cmd(".encoding utf8") s.cmdloop() # Check date detection for expect, fmt, sequences in ( ("1999-10-13", "%d-%d:%d", ( (1999, 10, 13), (13, 10, 1999), (10, 13, 1999), ) ), ("1999-10-13T12:14:17", "%d/%d/%d/%d/%d/%d", ( (1999, 10, 13, 12, 14, 17), (13, 10, 1999, 12, 14, 17), (10, 13, 1999, 12, 14, 17), ) ), ("1999-10-13T12:14:00", "%dX%dX%dX%dX%d", ( (1999, 10, 13, 12, 14), (13, 10, 1999, 12, 14), (10, 13, 1999, 12, 14), ) ) ): for seq in sequences: write_whole_file(TESTFILEPREFIX+"test-shell-1", "wt", ("a,b\nrow,"+(fmt%seq)+"\n")) reset() cmd("drop table [test-shell-1];\n.autoimport %stest-shell-1" % (TESTFILEPREFIX,)) s.cmdloop() isempty(fh[2]) imp=c.execute("select b from [test-shell-1] where a='row'").fetchall()[0][0] self.assertEqual(imp, expect) # Check diagnostics when unable to import for err, content in ( ("current encoding", b(r"\x81\x82\x83\tfoo\n\x84\x97\xff\tbar")), ("known type", "abcdef\nhiojklmnop\n"), 
("more than one", 'ab,c\tdef\nqr,dd\t\n'), ("ambiguous data format", "a,b\n1/1/2001,3\n2001/4/4,4\n"), ): if py3: if isinstance(content, bytes): continue write_whole_file(TESTFILEPREFIX+"test-shell-1", "wt", content) reset() cmd("drop table [test-shell-1];\n.autoimport %stest-shell-1" % (TESTFILEPREFIX,)) s.cmdloop() errmsg=get(fh[2]) self.assertTrue(err in errmsg) ### ### Command - indices ### for i in ".indices", ".indices one two": reset() cmd(i) s.cmdloop() isempty(fh[1]) isnotempty(fh[2]) reset() cmd("create table indices(x unique, y unique); create index shouldseethis on indices(x,y);") s.cmdloop() isempty(fh[1]) isempty(fh[2]) reset() cmd(".indices indices") s.cmdloop() isempty(fh[2]) for i in "shouldseethis", "autoindex": self.assertTrue(i in get(fh[1])) ### ### Command - load ### if hasattr(APSW, "testLoadExtension"): lf=LOADEXTENSIONFILENAME for i in ".load", ".load one two three": reset() cmd(i) s.cmdloop() isempty(fh[1]) isnotempty(fh[2]) reset() cmd(".load nosuchfile") s.cmdloop() isempty(fh[1]) self.assertTrue("nosuchfile" in get(fh[2]) or "ExtensionLoadingError" in get(fh[2])) reset() cmd(".mode list\n.load "+lf+" alternate_sqlite3_extension_init\nselect doubleup(2);") s.cmdloop() isempty(fh[2]) self.assertTrue("4" in get(fh[1])) reset() cmd(".mode list\n.load "+lf+"\nselect half(2);") s.cmdloop() isempty(fh[2]) self.assertTrue("1" in get(fh[1])) ### ### Command - mode ### # already thoroughly tested in code above for i in ".mode", ".mode foo more", ".mode invalid": reset() cmd(i) s.cmdloop() isempty(fh[1]) isnotempty(fh[2]) ### ### command nullvalue & separator ### # already tested in code above for i in ".nullvalue", ".nullvalue jkhkl lkjkj", ".separator", ".separator one two": reset() cmd(i) b4=s.nullvalue, s.separator s.cmdloop() isempty(fh[1]) isnotempty(fh[2]) self.assertEqual(b4, (s.nullvalue, s.separator)) ### ### command output ### for i in ".output", ".output too many args", ".output "+os.sep: reset() cmd(i) b4=s.stdout s.cmdloop() 
            isempty(fh[1])
            isnotempty(fh[2])
            self.assertEqual(b4, s.stdout)

        ###
        ### Command prompt
        ###
        # not much to test until pty testing is working
        for i in ".prompt", ".prompt too many args":
            reset()
            cmd(i)
            b4=s.prompt,s.moreprompt
            s.cmdloop()
            isempty(fh[1])
            isnotempty(fh[2])
            self.assertEqual(b4, (s.prompt, s.moreprompt))

        ###
        ### Command read
        ###
        # pretty much thoroughly tested above
        write_whole_file(TESTFILEPREFIX+"test-shell-1.py", "wt", """
assert apsw
assert shell
shell.write(shell.stdout, "hello world\\n")
""")
        for i in ".read", ".read one two", ".read "+os.sep:
            reset()
            cmd(i)
            s.cmdloop()
            isempty(fh[1])
            isnotempty(fh[2])
        reset()
        cmd(".read %stest-shell-1.py" % (TESTFILEPREFIX,))
        s.cmdloop()
        isempty(fh[2])
        self.assertTrue("hello world" in get(fh[1]))

        # restore tested with backup

        ###
        ### Command - schema
        ###
        # make sure it works
        reset()
        cmd(".schema")
        s.cmdloop()
        isempty(fh[2])
        isnotempty(fh[1])
        reset()
        cmd("create table schematest(x);create index unrelatedname on schematest(x);\n.schema schematest foo notexist foo")
        s.cmdloop()
        isempty(fh[2])
        for i in "schematest", "unrelatedname":
            self.assertTrue(i in get(fh[1]))

        # separator done earlier

        ###
        ### Command - show
        ###
        # set all settings to known values
        resetcmd=".echo off\n.explain off\n.headers off\n.mode list\n.nullvalue ''\n.output stdout\n.separator |\n.width 1 2 3\n.exceptions off"
        reset()
        cmd(resetcmd)
        s.cmdloop()
        isempty(fh[2])
        isempty(fh[1])
        reset()
        cmd(".show")
        s.cmdloop()
        isempty(fh[1])
        isnotempty(fh[2])
        baseline=get(fh[2])
        for i in ".echo on", ".explain", ".headers on", ".mode column", ".nullvalue T", ".separator %", ".width 8 9 1", ".exceptions on":
            reset()
            cmd(resetcmd)
            s.cmdloop()
            isempty(fh[1])
            if not get(fh[2]).startswith(".echo off"):
                isempty(fh[2])
            reset()
            cmd(i+"\n.show")
            s.cmdloop()
            isempty(fh[1])
            # check size has not changed much
            self.assertTrue(abs(len(get(fh[2]))-len(baseline))<14)
        # output
        reset()
        cmd(".output %stest-shell-1\n.show" % (TESTFILEPREFIX,))
        s.cmdloop()
        isempty(fh[1])
        self.assertTrue("output: "+TESTFILEPREFIX+"test-shell-1" in get(fh[2]))
        reset()
        cmd(".output stdout\n.show")
        s.cmdloop()
        isempty(fh[1])
        self.assertTrue("output: stdout" in get(fh[2]))
        self.assertTrue(not os.path.exists("stdout"))
        # errors
        reset()
        cmd(".show one two")
        s.cmdloop()
        isempty(fh[1])
        self.assertTrue("at most one parameter" in get(fh[2]))
        reset()
        cmd(".show notexist")
        s.cmdloop()
        isempty(fh[1])
        self.assertTrue("notexist: " not in get(fh[2]))

        ###
        ### Command tables
        ###
        reset()
        cmd(".tables")
        s.cmdloop()
        isempty(fh[2])
        isnotempty(fh[1])
        reset()
        cmd("create table tabletest(x);create index tabletest1 on tabletest(x);create index noway on tabletest(x);\n.tables tabletest\n.tables")
        s.cmdloop()
        isempty(fh[2])
        self.assertTrue("tabletest" in get(fh[1]))
        self.assertTrue("tabletest1" not in get(fh[1]))
        self.assertTrue("noway" not in get(fh[1]))

        ###
        ### Command timeout
        ###
        for i in (".timeout", ".timeout ksdjfh", ".timeout 6576 78987"):
            reset()
            cmd(i)
            s.cmdloop()
            isempty(fh[1])
            isnotempty(fh[2])
        for i in (".timeout 1000", ".timeout 0", ".timeout -33"):
            reset()
            cmd(i)
            s.cmdloop()
            isempty(fh[1])
            isempty(fh[2])

        # timer is tested earlier

        ###
        ### Command width
        ###
        # does it work?
        reset()
        cmd(".width 10 10 10 0")
        s.cmdloop()
        isempty(fh[1])
        isempty(fh[2])
        def getw():
            reset()
            cmd(".show width")
            s.cmdloop()
            isempty(fh[1])
            return [int(x) for x in get(fh[2]).split()[1:]]
        self.assertEqual([10,10,10,0], getw())
        # some errors
        for i in ".width", ".width foo", ".width 1 2 3 seven 3":
            reset()
            cmd(i)
            s.cmdloop()
            isempty(fh[1])
            isnotempty(fh[2])
            self.assertEqual([10,10,10,0], getw())
        for i,r in ("9 0 9", [9,0,9]), ("10 -3 10 -3", [10,-3,10,-3]), ("0", [0]):
            reset()
            cmd(".width "+i)
            s.cmdloop()
            isempty(fh[1])
            isempty(fh[2])
            self.assertEqual(r, getw())

        ###
        ### Unicode output with all output modes
        ###
        colname=u(r"\N{BLACK STAR}8\N{WHITE STAR}")
        val=u('xxx\\u1234\\uabcdyyy this\" is nasty\u0001stuff!')
        noheadermodes=('insert',)
        # possible ways val can be represented (eg csv doubles up double quotes)
        outputs=(val, val.replace('"', '""'), val.replace('"', '"'), val.replace('"', '\\"'))
        for mode in [x[len("output_"):] for x in dir(shellclass) if x.startswith("output_")]:
            reset()
            cmd(".separator |\n.width 999\n.encoding utf8\n.header on\n.mode %s\nselect '%s' as '%s';" % (mode, val, colname))
            s.cmdloop()
            isempty(fh[2])
            # modes too complicated to construct the correct string
            if mode in ('python', 'tcl'):
                continue
            # all others
            if mode not in noheadermodes:
                self.assertTrue(colname in get(fh[1]))
            cnt=0
            for o in outputs:
                cnt+=o in get(fh[1])
            self.assertTrue(cnt)

        # clean up files
        for f in fh:
            f.close()

    # This one uses the coverage module
    def _testShellWithCoverage(self):
        "Check Shell functionality (with coverage)"
        # We currently allow coverage module to not exist which helps
        # with debugging
        try:
            import coverage
        except ImportError:
            coverage=None
        import imp
        # I had problems with the compiled bytecode being around
        for suff in "c","o":
            try:
                os.remove("tools/shell.py"+suff)
            except:
                pass
        if coverage:
            coverage.start()
        covshell=imp.load_source("shell_coverage", "tools/shell.py")
        try:
            self._originaltestShell(shellclass=covshell.Shell)
        finally:
            if coverage:
                coverage.stop()
                coverage.annotate(morfs=[covshell])
                os.rename("tools/shell.py,cover", "shell.py.gcov")

    # Note that faults fire only once, so there is no need to reset
    # them.  The testing for objects bigger than 2GB is done in
    # testLargeObjects
    def testzzFaultInjection(self):
        "Deliberately inject faults to exercise all code paths"
        if not hasattr(apsw, "faultdict"):
            return

        def dummy(*args):
            1/0

        def dummy2(*args):
            return 7

        # The 1/0 in these tests is to cause a ZeroDivisionError so
        # that an exception is always thrown.  If we catch that then
        # it means earlier expected exceptions were not thrown.

        ## UnknownSQLiteErrorCode
        apsw.faultdict["UnknownSQLiteErrorCode"]=True
        try:
            self.db.cursor().execute("select '")
            1/0
        except:
            klass,value=sys.exc_info()[:2]
            self.assertTrue(klass is apsw.Error)
            self.assertTrue("254" in str(value))

        ## AsWriteBufferFails
        if not py3:
            apsw.faultdict["AsWriteBufferFails"]=True
            try:
                for row in self.db.cursor().execute("select x'1234ccddeeff'"):
                    pass
                1/0
            except MemoryError:
                pass

        ## ConnectionCloseFail
        if "APSW_NO_MEMLEAK" not in os.environ:
            apsw.faultdict["ConnectionCloseFail"]=True
            try:
                db=apsw.Connection(":memory:")
                db.cursor().execute("select 3")
                db.close(True)
                1/0
            except apsw.IOError:
                pass

        ## ConnectionCloseFail in destructor
        if "APSW_NO_MEMLEAK" not in os.environ:
            # test
            apsw.faultdict["ConnectionCloseFail"]=True
            def f():
                db=apsw.Connection(":memory:")
                db.cursor().execute("select 3")
                del db
                gc.collect()
            self.assertRaisesUnraisable(apsw.ConnectionNotClosedError, f)

        ## BlobAllocFails
        apsw.faultdict["BlobAllocFails"]=True
        try:
            db=apsw.Connection(":memory:")
            db.cursor().execute("create table foo(ablob); insert into foo (ROWID, ablob) values (1,x'aabbccddeeff')")
            blob=db.blobopen("main", "foo", "ablob", 1, False)
            1/0
        except MemoryError:
            pass

        ## CursorAllocFails
        apsw.faultdict["CursorAllocFails"]=True
        try:
            db=apsw.Connection(":memory:")
            db.cursor().execute("select 3")
            1/0
        except MemoryError:
            pass

        ## DBConfigFails
        apsw.faultdict["DBConfigFails"]=True
        try:
            db=apsw.Connection(":memory:")
            db.config(apsw.SQLITE_DBCONFIG_ENABLE_TRIGGER, -1)
            1/0
        except apsw.NoMemError:
            pass

        ## RollbackHookExistingError
        apsw.faultdict["RollbackHookExistingError"]=True
        try:
            db=apsw.Connection(":memory:")
            db.setrollbackhook(dummy)
            db.cursor().execute("create table foo(a); begin ; insert into foo values(3); rollback")
            1/0
        except MemoryError:
            pass

        ## CommitHookExistingError
        apsw.faultdict["CommitHookExistingError"]=True
        try:
            db=apsw.Connection(":memory:")
            db.setcommithook(dummy)
            db.cursor().execute("begin; create table foo(a); insert into foo values(3); commit")
            1/0
        except MemoryError:
            pass

        ## AuthorizerExistingError
        apsw.faultdict["AuthorizerExistingError"]=True
        try:
            db=apsw.Connection(":memory:")
            db.setauthorizer(dummy)
            db.cursor().execute("create table foo(a)")
            1/0
        except MemoryError:
            pass

        ## SetAuthorizerNullFail
        apsw.faultdict["SetAuthorizerNullFail"]=True
        try:
            db=apsw.Connection(":memory:")
            db.setauthorizer(None)
            1/0
        except apsw.IOError:
            klass,value=sys.exc_info()[:2]
            self.assertTrue(klass is apsw.IOError)

        ## SetAuthorizerFail
        apsw.faultdict["SetAuthorizerFail"]=True
        try:
            db=apsw.Connection(":memory:")
            db.setauthorizer(dummy)
            1/0
        except:
            pass

        ## CollationNeededNullFail
        apsw.faultdict["CollationNeededNullFail"]=True
        try:
            db=apsw.Connection(":memory:")
            db.collationneeded(None)
            1/0
        except apsw.IOError:
            klass,value=sys.exc_info()[:2]
            self.assertTrue(klass is apsw.IOError)

        ## CollationNeededFail
        apsw.faultdict["CollationNeededFail"]=True
        try:
            db=apsw.Connection(":memory:")
            db.collationneeded(dummy)
            1/0
        except:
            klass,value=sys.exc_info()[:2]
            self.assertTrue(klass is apsw.IOError)

        ## EnableLoadExtensionFail
        apsw.faultdict["EnableLoadExtensionFail"]=True
        try:
            db=apsw.Connection(":memory:")
            db.enableloadextension(True)
            1/0
        except:
            pass

        ## SetBusyHandlerNullFail
        apsw.faultdict["SetBusyHandlerNullFail"]=True
        try:
            db=apsw.Connection(":memory:")
            db.setbusyhandler(None)
            1/0
        except apsw.IOError:
            pass

        ## SetBusyHandlerFail
        apsw.faultdict["SetBusyHandlerFail"]=True
        try:
            db=apsw.Connection(":memory:")
            db.setbusyhandler(dummy)
            1/0
        except apsw.IOError:
            pass

        ## UnknownValueType
        apsw.faultdict["UnknownValueType"]=True
        try:
            db=apsw.Connection(":memory:")
            db.createscalarfunction("dummy", dummy)
            db.cursor().execute("select dummy(4)")
            1/0
        except:
            klass,value=sys.exc_info()[:2]
            self.assertTrue(klass is apsw.Error)
            self.assertTrue("123456" in str(value))

        ## UnknownColumnType
        apsw.faultdict["UnknownColumnType"]=True
        try:
            db=apsw.Connection(":memory:")
            for row in db.cursor().execute("select 3"):
                pass
            1/0
        except:
            klass,value=sys.exc_info()[:2]
            self.assertTrue(klass is apsw.Error)
            self.assertTrue("12348" in str(value))

        ## SetContextResultUnicodeConversionFails
        apsw.faultdict["SetContextResultUnicodeConversionFails"]=True
        try:
            db=apsw.Connection(":memory:")
            db.createscalarfunction("foo", lambda x: u("another unicode string"))
            for row in db.cursor().execute("select foo(3)"):
                pass
            1/0
        except MemoryError:
            pass

        ## SetContextResultStringUnicodeConversionFails
        if sys.version_info<(3,0):
            apsw.faultdict["SetContextResultStringUnicodeConversionFails"]=True
            try:
                db=apsw.Connection(":memory:")
                db.createscalarfunction("foo", lambda x: "another string"*10000)
                for row in db.cursor().execute("select foo(3)"):
                    pass
                1/0
            except MemoryError:
                pass

        ## SetContextResultAsReadBufferFail
        apsw.faultdict["SetContextResultAsReadBufferFail"]=True
        try:
            db=apsw.Connection(":memory:")
            db.createscalarfunction("foo", lambda x: b("another string"))
            for row in db.cursor().execute("select foo(3)"):
                pass
            1/0
        except MemoryError:
            pass

        ## GFAPyTuple_NewFail
        apsw.faultdict["GFAPyTuple_NewFail"]=True
        try:
            db=apsw.Connection(":memory:")
            db.createscalarfunction("foo", dummy)
            for row in db.cursor().execute("select foo(3)"):
                pass
            1/0
        except MemoryError:
            pass

        ## Same again
        apsw.faultdict["GFAPyTuple_NewFail"]=True
        try:
            db=apsw.Connection(":memory:")
            def foo():
                return None, dummy2, dummy2
            db.createaggregatefunction("foo", foo)
            for row in db.cursor().execute("create table bar(x);insert into bar values(3); select foo(x) from bar"):
                pass
            1/0
        except MemoryError:
            pass

        ## CBDispatchExistingError
        apsw.faultdict["CBDispatchExistingError"]=True
        try:
            db=apsw.Connection(":memory:")
            db.createscalarfunction("foo", dummy)
            db.cursor().execute("select foo(3)")
            1/0
        except MemoryError:
            pass

        ## CBDispatchFinalError
        apsw.faultdict["CBDispatchFinalError"]=True
        try:
            def f():
                db=apsw.Connection(":memory:")
                def foo():
                    return None, dummy, dummy2
                db.createaggregatefunction("foo", foo)
                for row in db.cursor().execute("create table bar(x);insert into bar values(3); select foo(x) from bar"):
                    pass
                1/0
            self.assertRaisesUnraisable(Exception, f)
        except ZeroDivisionError:
            pass

        ## Virtual table code
        class Source:
            def Create(self, *args):
                return "create table foo(x,y)", Table()
            Connect=Create

        class Table:
            def __init__(self):
                self.data=[ #("rowid", "x", "y"),
                    [0, 1, 2],
                    [3, 4, 5]]

            def Open(self):
                return Cursor(self)

            def BestIndex(self, *args):
                return None

            def UpdateChangeRow(self, rowid, newrowid, fields):
                for i, row in enumerate(self.data):
                    if row[0]==rowid:
                        self.data[i]=[newrowid]+list(fields)

            def FindFunction(self, *args):
                return lambda *args: 1

        class Cursor:
            def __init__(self, table):
                self.table=table
                self.row=0

            def Eof(self):
                return self.row>=len(self.table.data)

            def Rowid(self):
                return self.table.data[self.row][0]

            def Column(self, col):
                return self.table.data[self.row][1+col]

            def Filter(self, *args):
                self.row=0

            def Next(self):
                self.row+=1

            def Close(self):
                pass

        ## VtabCreateBadString
        apsw.faultdict["VtabCreateBadString"]=True
        try:
            db=apsw.Connection(":memory:")
            db.createmodule("nonsense", None)
            db.cursor().execute("create virtual table foo using nonsense(3,4)")
            1/0
        except MemoryError:
            pass

        ## VtabUpdateChangeRowFail
        apsw.faultdict["VtabUpdateChangeRowFail"]=True
        try:
            db=apsw.Connection(":memory:")
            db.createmodule("foo", Source())
            db.cursor().execute("create virtual table foo using foo();update foo set x=3 where y=2")
            1/0
        except MemoryError:
            pass

        ## VtabUpdateBadField
        apsw.faultdict["VtabUpdateBadField"]=True
        try:
            db=apsw.Connection(":memory:")
            db.createmodule("foo", Source())
            db.cursor().execute("create virtual table foo using foo();update foo set x=3 where y=2")
            1/0
        except MemoryError:
            pass

        ## VtabRenameBadName
        apsw.faultdict["VtabRenameBadName"]=True
        try:
            db=apsw.Connection(":memory:")
            db.createmodule("foo", Source())
            db.cursor().execute("create virtual table foo using foo(); alter table foo rename to bar")
            1/0
        except MemoryError:
            pass

        ## CreateModuleFail
        apsw.faultdict["CreateModuleFail"]=True
        try:
            db=apsw.Connection(":memory:")
            db.createmodule("foo", Source())
            1/0
        except apsw.IOError:
            pass

        ## FindFunctionAllocFailed
        apsw.faultdict["FindFunctionAllocFailed"]=True
        try:
            db=apsw.Connection(":memory:")
            db.overloadfunction("xyz", 2)
            db.createmodule("foo", Source())
            db.cursor().execute("create virtual table foo using foo()")
            db.cursor().execute("select xyz(x,y) from foo")
            1/0
        except MemoryError:
            pass

        ## BlobDeallocException
        def f():
            db=apsw.Connection(":memory:")
            db.cursor().execute("create table foo(b);insert into foo(rowid,b) values(2,x'aabbccddee')")
            blob=db.blobopen("main", "foo", "b", 2, False) # open read-only
            # deliberately cause problem
            try:
                blob.write(b('a'))
            except apsw.ReadOnlyError:
                pass
            # garbage collect
            del blob
            gc.collect()
        self.assertRaisesUnraisable(apsw.ReadOnlyError, f)

        ## BlobWriteAsReadBufFails
        apsw.faultdict["BlobWriteAsReadBufFails"]=True
        try:
            db=apsw.Connection(":memory:")
            db.cursor().execute("create table foo(b);insert into foo(rowid,b) values(2,x'aabbccddee')")
            blob=db.blobopen("main", "foo", "b", 2, True)
            blob.write(b("aaaaaa"))
            1/0
        except MemoryError:
            pass

        ## GetDescriptionFail
        apsw.faultdict["GetDescriptionFail"]=True
        try:
            db=apsw.Connection(":memory:")
            c=db.cursor()
            c.execute("create table foo(b);insert into foo(rowid,b) values(2,x'aabbccddee');select * from foo")
            c.getdescription()
            1/0
        except MemoryError:
            pass

        ## DoBindingUnicodeConversionFails
        apsw.faultdict["DoBindingUnicodeConversionFails"]=True
        try:
            db=apsw.Connection(":memory:")
            db.cursor().execute("select ?", (u("abc"),))
            1/0
        except MemoryError:
            pass

        ## DoBindingStringConversionFails
        if sys.version_info<(3,0):
            apsw.faultdict["DoBindingStringConversionFails"]=True
            try:
                db=apsw.Connection(":memory:")
                db.cursor().execute("select ?", ("abc"*10000,))
                1/0
            except MemoryError:
                pass

        ## DoBindingAsReadBufferFails
        apsw.faultdict["DoBindingAsReadBufferFails"]=True
        try:
            db=apsw.Connection(":memory:")
            db.cursor().execute("select ?", (b("abcd"),))
            1/0
        except MemoryError:
            pass

        ## DoExecTraceBadSlice
        apsw.faultdict["DoExecTraceBadSlice"]=True
        try:
            db=apsw.Connection(":memory:")
            c=db.cursor()
            c.setexectrace(dummy)
            c.execute("select ?; select ?; select ?", (1,2,3))
            1/0
        except MemoryError:
            pass

        ## EnableSharedCacheFail
        apsw.faultdict["EnableSharedCacheFail"]=True
        try:
            apsw.enablesharedcache(True)
            1/0
        except apsw.NoMemError:
            pass

        ## InitializeFail
        apsw.faultdict["InitializeFail"]=True
        try:
            apsw.initialize()
            1/0
        except apsw.NoMemError:
            pass

        ## ShutdownFail
        apsw.faultdict["ShutdownFail"]=True
        try:
            apsw.shutdown()
            1/0
        except apsw.NoMemError:
            pass

        ### vfs routines
        class FaultVFS(apsw.VFS):
            def __init__(self, name="faultvfs", inherit="", makedefault=False):
                super(FaultVFS, self).__init__(name, inherit, makedefault=makedefault)

            def xGetLastErrorLong(self):
                return "a"*1024

            def xOpen(self, name, flags):
                return FaultVFSFile(name, flags)

        class FaultVFSFile(apsw.VFSFile):
            def __init__(self, name, flags):
                super(FaultVFSFile,self).__init__("", name, flags)

        vfs=FaultVFS()

        ## xFullPathnameConversion
        apsw.faultdict["xFullPathnameConversion"]=True
        self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, MemoryError,
                          apsw.Connection, TESTFILEPREFIX+"testdb", vfs="faultvfs")

        ## xDlError
        db=apsw.Connection(":memory:", vfs="faultvfs")
        if hasattr(db, 'enableloadextension'):
            db.enableloadextension(True)
            ## xDlErrorAllocFail
            apsw.faultdict["xDlErrorAllocFail"]=True
            self.assertRaises(apsw.ExtensionLoadingError, self.assertRaisesUnraisable, MemoryError,
                              db.loadextension, "non-existent-file-name")

            ## xDlErrorUnicodeFail
            apsw.faultdict["xDlErrorUnicodeFail"]=True
            self.assertRaises(apsw.ExtensionLoadingError, self.assertRaisesUnraisable, MemoryError,
                              db.loadextension, "non-existent-file-name")
        del db
        gc.collect()

        ## xRandomnessAllocFail
        if hasattr(apsw, 'test_reset_rng'):
            # we need to be default vfs
            vfs2=FaultVFS("faultvfs2", apsw.vfsnames()[0], makedefault=True)
            apsw.test_reset_rng()
            apsw.faultdict["xRandomnessAllocFail"]=True
            # doesn't matter which vfs opens the file
            self.assertRaisesUnraisable(MemoryError,
                                        apsw.Connection(":memory:").cursor().execute,
                                        "select randomblob(10)")
            del vfs2
            gc.collect()

        ## xCurrentTimeFail
        apsw.faultdict["xCurrentTimeFail"]=True
        self.assertRaisesUnraisable(apsw.SQLError,
                                    apsw.Connection(":memory:", vfs="faultvfs").cursor().execute,
                                    "select date('now')")

        ## xGetLastErrorAllocFail
        if hasattr(apsw, 'test_call_xGetLastError'):
            apsw.faultdict["xGetLastErrorAllocFail"]=True
            vfs2=FaultVFS("faultvfs2", "faultvfs")
            vfs.xGetLastError=vfs.xGetLastErrorLong
            self.assertRaisesUnraisable(MemoryError, apsw.test_call_xGetLastError, "faultvfs2", 128)
            vfs.xGetLastError=super(FaultVFS, vfs).xGetLastError

        ## APSWVFSDeallocFail
        apsw.faultdict["APSWVFSDeallocFail"]=True
        def foo():
            vfs2=FaultVFS("faultvfs2", "faultvfs")
            del vfs2
            gc.collect()
        self.assertRaisesUnraisable(apsw.IOError, foo)

        ## APSWVFSBadVersion
        apsw.faultdict["APSWVFSBadVersion"]=True
        self.assertRaises(ValueError, apsw.VFS, "foo", "")
        self.assertTrue("foo" not in apsw.vfsnames())

        ## APSWVFSRegistrationFails
        apsw.faultdict["APSWVFSRegistrationFails"]=True
        self.assertRaises(apsw.NoMemError, apsw.VFS, "foo", "")
        self.assertTrue("foo" not in apsw.vfsnames())

        ## xReadReadBufferFail
        try:
            # This will fail if we are using auto-WAL so we don't run
            # the rest of the test in WAL mode.
            apsw.Connection(TESTFILEPREFIX+"testdb", vfs="faultvfs").cursor().execute("create table dummy1(x,y)")
            openok=True
        except apsw.CantOpenError:
            if len(apsw.connection_hooks)==0:
                raise
            openok=False

        # The following tests cause failures when making the
        # connection because a connection hook turns on wal mode which
        # causes database reads which then cause failures
        if openok:
            apsw.faultdict["xReadReadBufferFail"]=True
            def foo():
                apsw.Connection(TESTFILEPREFIX+"testdb", vfs="faultvfs").cursor().execute("select * from dummy1")
            self.assertRaises(apsw.SQLError, self.assertRaisesUnraisable, TypeError, foo)

            ## xUnlockFails
            apsw.faultdict["xUnlockFails"]=True
            # Used to wrap in self.assertRaises(apsw.IOError, ...) but SQLite no longer passes on the error.
            # See https://sqlite.org/cvstrac/tktview?tn=3946
            self.assertRaisesUnraisable(apsw.IOError,
                                        apsw.Connection(TESTFILEPREFIX+"testdb", vfs="faultvfs").cursor().execute,
                                        "select * from dummy1")

            ## xSyncFails
            apsw.faultdict["xSyncFails"]=True
            self.assertRaises(apsw.IOError, self.assertRaisesUnraisable, apsw.IOError,
                              apsw.Connection(TESTFILEPREFIX+"testdb", vfs="faultvfs").cursor().execute,
                              "insert into dummy1 values(3,4)")

            ## xFileSizeFails
            apsw.faultdict["xFileSizeFails"]=True
            self.assertRaises(apsw.IOError, self.assertRaisesUnraisable, apsw.IOError,
                              apsw.Connection(TESTFILEPREFIX+"testdb", vfs="faultvfs").cursor().execute,
                              "select * from dummy1")

        ## xCheckReservedLockFails
        apsw.faultdict["xCheckReservedLockFails"]=True
        self.assertRaises(apsw.IOError, self.assertRaisesUnraisable, apsw.IOError,
                          testdb, vfsname="faultvfs")

        ## xCheckReservedLockIsTrue
        apsw.faultdict["xCheckReservedLockIsTrue"]=True
        testdb(vfsname="faultvfs")

        ## xCloseFails
        t=apsw.VFSFile("", os.path.abspath(TESTFILEPREFIX+"testfile"),
                       [apsw.SQLITE_OPEN_MAIN_DB|apsw.SQLITE_OPEN_CREATE|apsw.SQLITE_OPEN_READWRITE,0])
        apsw.faultdict["xCloseFails"]=True
        self.assertRaises(apsw.IOError, t.xClose)
        del t

        # now catch it in the destructor
        def foo():
            t=apsw.VFSFile("",
                           os.path.abspath(TESTFILEPREFIX+"testfile"),
                           [apsw.SQLITE_OPEN_MAIN_DB|apsw.SQLITE_OPEN_CREATE|apsw.SQLITE_OPEN_READWRITE,0])
            apsw.faultdict["xCloseFails"]=True
            del t
            gc.collect()
        self.assertRaisesUnraisable(apsw.IOError, foo)

        ## vfspyopen_fullpathnamemallocfailed
        del FaultVFS.xOpen # remove overriding fault xOpen method so we get default implementation
        apsw.faultdict["vfspyopen_fullpathnamemallocfailed"]=True
        self.assertRaises(MemoryError, vfs.xOpen, "doesn't matter",
                          apsw.SQLITE_OPEN_CREATE|apsw.SQLITE_OPEN_READWRITE)
        # and again in file open
        apsw.faultdict["vfspyopen_fullpathnamemallocfailed"]=True
        self.assertRaises(MemoryError, apsw.VFSFile, "", "/doesn't matter",
                          [apsw.SQLITE_OPEN_CREATE|apsw.SQLITE_OPEN_READWRITE,0])

        ## vfsnamesfails
        apsw.faultdict["vfsnamesfails"]=True
        self.assertRaises(MemoryError, apsw.vfsnames)

        ## StatementCacheAllocFails
        apsw.faultdict["StatementCacheAllocFails"]=True
        try:
            apsw.Connection(":memory:")
            1/0
        except MemoryError:
            pass

        ## TransferBindingsFail
        apsw.faultdict["TransferBindingsFail"]=True
        try:
            db=apsw.Connection(":memory:")
            db.cursor().execute("create table foo(x,y); insert into foo values(3,4)")
            db.cursor().execute("create index fooxy on foo(x,y)")
            for row in db.cursor().execute("select * from foo"):
                pass
            db.cursor().execute("drop index fooxy")
            for row in db.cursor().execute("select * from foo"):
                pass
            1/0
        except apsw.NoMemError:
            pass

        ## OverloadFails
        apsw.faultdict["OverloadFails"]=True
        try:
            db=apsw.Connection(":memory:")
            db.overloadfunction("foo", 1)
            1/0
        except apsw.NoMemError:
            pass

        ## ConnectionEnterExecFailed
        apsw.faultdict["ConnectionEnterExecFailed"]=True
        try:
            db=apsw.Connection(":memory:")
            db.__enter__()
            1/0
        except apsw.NoMemError:
            pass

        ## BackupInitFails
        apsw.faultdict["BackupInitFails"]=True
        try:
            db=apsw.Connection(":memory:")
            db.backup("main", apsw.Connection(":memory:"), "main")
            1/0
        except apsw.NoMemError:
            pass

        ## BackupNewFails
        apsw.faultdict["BackupNewFails"]=True
        try:
            db=apsw.Connection(":memory:")
            db.backup("main",
                      apsw.Connection(":memory:"), "main")
            1/0
        except MemoryError:
            pass

        ## FormatSQLValueResizeFails
        apsw.faultdict["FormatSQLValueResizeFails"]=True
        try:
            apsw.format_sql_value(u("fsdkljfl'fdsfds"))
            1/0
        except SystemError:
            pass

        ## FormatSQLValueAsReadBufferFails
        apsw.faultdict["FormatSQLValueAsReadBufferFails"]=True
        try:
            apsw.format_sql_value(b("abcd"))
            1/0
        except MemoryError:
            pass

        ## FormatSQLValuePyUnicodeFromUnicodeFails
        apsw.faultdict["FormatSQLValuePyUnicodeFromUnicodeFails"]=True
        try:
            apsw.format_sql_value(b("abcd"))
            1/0
        except MemoryError:
            pass

        ## WalAutocheckpointFails
        apsw.faultdict["WalAutocheckpointFails"]=True
        try:
            apsw.Connection(":memory:").wal_autocheckpoint(77)
            1/0
        except apsw.IOError:
            pass

        ## WalCheckpointFails
        apsw.faultdict["WalCheckpointFails"]=True
        try:
            apsw.Connection(":memory:").wal_checkpoint()
            1/0
        except apsw.IOError:
            pass

    # This test is run last by deliberate name choice.  If it did
    # uncover any bugs there isn't much that can be done to turn the
    # checker off.
    def testzzForkChecker(self):
        "Test detection of using objects across fork"
        # need to free up everything that already exists
        self.db.close()
        gc.collect()
        # install it
        apsw.fork_checker()

        # return some objects
        def getstuff():
            db=apsw.Connection(":memory:")
            cur=db.cursor()
            for row in cur.execute("create table foo(x);insert into foo values(1);insert into foo values(x'aabbcc'); select last_insert_rowid()"):
                blobid=row[0]
            blob=db.blobopen("main", "foo", "x", blobid, 0)
            db2=apsw.Connection(":memory:")
            if hasattr(db2, "backup"):
                backup=db2.backup("main", db, "main")
            else:
                backup=None
            return (db,cur,blob,backup)

        # test the objects
        def teststuff(db, cur, blob, backup):
            if db:
                db.cursor().execute("select 3")
            if cur:
                cur.execute("select 3")
            if blob:
                blob.read(1)
            if backup:
                backup.step()

        # Sanity check
        teststuff(*getstuff())
        # get some to use in parent
        parent=getstuff()
        # to be used (and fail with error) in child
        child=getstuff()

        def childtest(*args):
            # we can't use unittest methods here since we are
            # in a different process
            val=args[0]
            args=args[1:]

            # this should work
            teststuff(*getstuff())

            # ignore the unraiseable stuff sent to sys.excepthook
            def eh(*args):
                pass
            sys.excepthook=eh

            # call with each seperate item to check
            try:
                for i in range(len(args)):
                    a=[None]*len(args)
                    a[i]=args[i]
                    try:
                        teststuff(*a)
                    except apsw.ForkingViolationError:
                        pass
            except apsw.ForkingViolationError:
                # we get one final exception "between" line due to the
                # nature of how the exception is raised
                pass

            # this should work again
            teststuff(*getstuff())
            val.value=1

        import multiprocessing
        val=multiprocessing.Value("i", 0)
        p=multiprocessing.Process(target=childtest, args=[val]+list(child))
        p.start()
        p.join()
        self.assertEqual(1, val.value) # did child complete ok?

        teststuff(*parent)

testtimeout=False # timeout testing adds several seconds to each run
def testdb(filename=TESTFILEPREFIX+"testdb2", vfsname="apswtest", closedb=True, mode=None, attachdb=None):
    "This method causes all parts of a vfs to be executed"
    gc.collect() # free any existing db handles
    for suf in "", "-journal", "x", "x-journal":
        deletefile(filename+suf)

    db=apsw.Connection("file:"+filename+"?psow=0", vfs=vfsname, flags=openflags)
    if mode:
        db.cursor().execute("pragma journal_mode="+mode)
    db.cursor().execute("create table foo(x,y); insert into foo values(1,2); insert into foo values(date('now'), date('now'))")
    if testtimeout:
        # busy
        db2=apsw.Connection(filename, vfs=vfsname)
        if mode:
            db2.cursor().execute("pragma journal_mode="+mode)
        db.setbusytimeout(1100)
        db2.cursor().execute("begin exclusive")
        try:
            db.cursor().execute("begin immediate")
            1/0 # should not be reached
        except apsw.BusyError:
            pass
        db2.cursor().execute("end")

    # cause truncate to be called
    # see sqlite test/pager3.test where this (public domain) code is taken from
    # I had to add the pragma journal_mode to get it to work
    c=db.cursor()
    for row in c.execute("pragma journal_mode=truncate"):
        pass
    c.execute("""
       create table t1(a unique, b);
       insert into t1 values(1,
       'abcdefghijklmnopqrstuvwxyz');
       insert into t1 values(2, 'abcdefghijklmnopqrstuvwxyz');
       update t1 set b=b||a||b;
       update t1 set b=b||a||b;
       update t1 set b=b||a||b;
       update t1 set b=b||a||b;
       update t1 set b=b||a||b;
       update t1 set b=b||a||b;
       create temp table t2 as select * from t1;
       begin;
       create table t3(x);""")
    try:
        c.execute("insert into t1 select 4-a, b from t2")
    except apsw.ConstraintError:
        pass
    c.execute("rollback")

    if attachdb:
        c.execute("attach '%s' as second" % (attachdb,))

    if hasattr(APSW, "testLoadExtension"):
        # can we use loadextension?
        db.enableloadextension(True)
        try:
            db.loadextension("./"*128+LOADEXTENSIONFILENAME+"xxx")
        except apsw.ExtensionLoadingError:
            pass
        db.loadextension(LOADEXTENSIONFILENAME)
        assert(1==next(db.cursor().execute("select half(2)"))[0])

    # Get the routine xCheckReservedLock to be called.  We need a hot journal
    # which this code adapted from SQLite's pager.test does
    if not iswindows:
        c.execute("create table abc(a,b,c)")
        for i in range(20):
            c.execute("insert into abc values(1,2,?)", (randomstring(200),))
        c.execute("begin; update abc set c=?", (randomstring(200),))

        write_whole_file(filename+"x", "wb", read_whole_file(filename, "rb"))
        write_whole_file(filename+"x-journal", "wb", read_whole_file(filename+"-journal", "rb"))

        f=open(filename+"x-journal", "ab")
        f.seek(-1032, 2) # 1032 bytes before end of file
        f.write(b(r"\x00\x00\x00\x00"))
        f.close()

        hotdb=apsw.Connection(filename+"x", vfs=vfsname)
        if mode:
            hotdb.cursor().execute("pragma journal_mode="+mode)
        hotdb.cursor().execute("select sql from sqlite_master")
        hotdb.close()

    if closedb:
        db.close()
    else:
        return db

if not iswindows:
    # note that a directory must be specified otherwise $LD_LIBRARY_PATH is used
    LOADEXTENSIONFILENAME="./testextension.sqlext"
else:
    LOADEXTENSIONFILENAME="testextension.sqlext"

MEMLEAKITERATIONS=1000
PROFILESTEPS=250000

def setup(write=write):
    """Call this if importing this test suite as it will ensure tests
    we can't run are removed etc.
    It will also print version information."""
    print_version_info(write)

    if hasattr(apsw, "config"):
        apsw.config(apsw.SQLITE_CONFIG_MEMSTATUS, True) # ensure memory tracking is on
    apsw.initialize() # manual call for coverage
    memdb=apsw.Connection(":memory:")
    if not getattr(memdb, "enableloadextension", None):
        del APSW.testLoadExtension

    forkcheck=False
    if hasattr(apsw, "fork_checker") and hasattr(os, "fork"):
        try:
            import multiprocessing
            # sometimes the import works but doing anything fails
            val=multiprocessing.Value("i", 0)
            forkcheck=True
        except ImportError:
            pass

    # we also remove forkchecker if doing multiple iterations
    if not forkcheck or "APSW_TEST_ITERATIONS" in os.environ:
        del APSW.testzzForkChecker

    # These tests are of experimental features
    if not hasattr(memdb, "backup"):
        del APSW.testBackup
    if not hasattr(apsw, "config"):
        del APSW.testConfig
    if not hasattr(memdb, "limit"):
        del APSW.testLimits
    if not hasattr(memdb, "setprofile"):
        del APSW.testProfile
    if not hasattr(memdb, "createmodule"):
        del APSW.testVtables
        del APSW.testVTableExample

    # We can do extension loading but no extension present ...
    if getattr(memdb, "enableloadextension", None) and not os.path.exists(LOADEXTENSIONFILENAME):
        write("Not doing LoadExtension test.  You need to compile the extension first\n")
        if sys.platform.startswith("darwin"):
            write("  gcc -fPIC -bundle -o "+LOADEXTENSIONFILENAME+" -I. -Isqlite3 src/testextension.c\n")
        else:
            write("  gcc -fPIC -shared -o "+LOADEXTENSIONFILENAME+" -I. -Isqlite3 src/testextension.c\n")
        del APSW.testLoadExtension
        sys.stdout.flush()

    # coverage testing of the shell
    if "APSW_PY_COVERAGE" in os.environ:
        APSW._originaltestShell=APSW.testShell
        APSW.testShell=APSW._testShellWithCoverage

    del memdb

# This can't be a member of APSW class above because Python 2.3
# unittest gets confused and tries to execute it!
test_types_vals=("a simple string",  # "ascii" string
                 "0123456789"*200000,  # a longer string
                 u(r"a \u1234 unicode \ufe54 string \u0089"),  # simple unicode string
                 u(r"\N{BLACK STAR} \N{WHITE STAR} \N{LIGHTNING} \N{COMET} "),  # funky unicode or an episode of b5
                 u(r"\N{MUSICAL SYMBOL G CLEF}"),  # http://www.cmlenz.net/archives/2008/07/the-truth-about-unicode-in-python
                 97,  # integer
                 2147483647,   # numbers on 31 bit boundary (32nd bit used for integer sign), and then
                 -2147483647,  # start using 32nd bit (must be represented by 64bit to avoid losing
                 long(2147483648),  # detail)
                 long(-2147483648),
                 long(2147483999),
                 long(-2147483999),
                 992147483999,
                 -992147483999,
                 9223372036854775807,
                 -9223372036854775808,
                 b("a set of bytes"),  # bag of bytes initialised from a string, but don't confuse it with a
                 b("".join(["\\x%02x" % (x,) for x in range(256)])),  # string
                 b("".join(["\\x%02x" % (x,) for x in range(256)])*20000),  # non-trivial size
                 None,  # our good friend NULL/None
                 1.1,  # floating point can't be compared exactly - assertAlmostEqual is used to check
                 10.2,  # see Appendix B in the Python Tutorial
                 1.3,
                 1.45897589347E97,
                 5.987987/8.7678678687676786,
                 math.pi,
                 True,  # derived from integer
                 False)

if __name__=='__main__':
    setup()

    def runtests():
        def set_wal_mode(c):
            # Note that WAL won't be on for memory databases.
            # This execution returns the active mode
            c.cursor().execute("PRAGMA journal_mode=WAL").fetchall()

        b4=apsw.connection_hooks[:]
        try:
            if "APSW_TEST_WALMODE" in os.environ:
                apsw.connection_hooks.append(set_wal_mode)
                sys.stderr.write("WAL: ")

            if os.getenv("PYTRACE"):
                import trace
                t=trace.Trace(count=0, trace=1, ignoredirs=[sys.prefix, sys.exec_prefix])
                t.runfunc(unittest.main)
            else:
                unittest.main()
        finally:
            apsw.connection_hooks=b4

    v=os.environ.get("APSW_TEST_ITERATIONS", None)
    if v is None:
        try:
            runtests()
        except SystemExit:
            exitcode=sys.exc_info()[1].code
    else:
        # we run all the tests multiple times which has better coverage
        # a larger value for MEMLEAKITERATIONS slows down everything else
        MEMLEAKITERATIONS=5
        PROFILESTEPS=1000
        v=int(v)
        for i in range(v):
            write("Iteration "+str(i+1)+" of "+str(v)+"\n")
            try:
                runtests()
            except SystemExit:
                exitcode=sys.exc_info()[1].code

    # Free up everything possible
    del APSW
    del ThreadRunner
    del randomintegers

    # clean up sqlite and apsw
    gc.collect() # all cursors & connections must be gone
    apsw.shutdown()
    apsw.config(apsw.SQLITE_CONFIG_LOG, None)
    if hasattr(apsw, "_fini"):
        apsw._fini()
        gc.collect()
    del apsw
    exit=sys.exit

    # modules
    del unittest
    del os
    del math
    del random
    del time
    del threading
    del Queue
    del traceback
    del re
    gc.collect()

    if py3:
        # doesn't handle modules being deleted very well
        exit(exitcode)

    # In py3 gc and sys can end up being None even though we take care not to delete them
    for k in list(sys.modules.keys()):
        if k not in ('gc', 'sys', '__main__'):
            try:
                del sys.modules[k]
            except:
                pass
    del sys
    if gc:
        gc.collect()
    del gc
    exit(exitcode)

apsw-3.8.11.1-r1/tools/shell.py

#!/usr/bin/env python

import sys
import apsw
import shlex
import os
import csv
import re
import textwrap
import time
import codecs
import base64

if sys.platform=="win32":
    _win_colour=False
    try:
        import colorama
        colorama.init()
        del colorama
        _win_colour=True
    except: # there are several failure reasons, ignore them all
        pass

class Shell(object):
    """Implements a SQLite shell

    :param stdin: Where to read input from (default sys.stdin)
    :param stdout: Where to send output (default sys.stdout)
    :param stderr: Where to send errors (default sys.stderr)
    :param encoding: Default encoding for files opened/created by the
      Shell.  If you want stdin/out/err to use a particular encoding
      then you need to provide them `already configured `__ that way.
    :param args: This should be program arguments only (ie if
      passing in sys.argv do not include sys.argv[0] which is the
      program name.  You can also pass in None and then call
      :meth:`process_args` if you want to catch any errors in handling
      the arguments yourself.
    :param db: A existing :class:`Connection` you wish to use

    The commands and behaviour are modelled after the `interactive
    shell `__ that is part of SQLite.

    You can inherit from this class to embed in your own code and user
    interface.  Internally everything is handled as unicode.
    Conversions only happen at the point of input or output which you
    can override in your own code.

    This implementation fixes a number of bugs/quirks present in the
    sqlite shell.  Its control-C handling is also friendlier.  Some
    examples of issues not present in this implementation:

    * https://sqlite.org/src/info/c25aab7e7e
    * https://sqlite.org/src/info/7b61b6c6ce
    * https://sqlite.org/src/info/ee19e690ec
    * https://sqlite.org/src/info/2466653295

    Errors and diagnostics are only ever sent to error output
    (self.stderr) and never to the regular output (self.stdout).  This
    means using shell output is always easy and consistent.

    Shell commands begin with a dot (eg .help).  They are implemented
    as a method named after the command (eg command_help).  The method
    is passed one parameter which is the list of arguments to the
    command.

    Output modes are implemented by functions named after the mode (eg
    output_column).
    When you request help the help information is automatically
    generated from the docstrings for the command and output
    functions.

    You should not use a Shell object concurrently from multiple
    threads.  It is one huge set of state information which would
    become inconsistent if used simultaneously, and then give baffling
    errors.  It is safe to call methods one at a time from different
    threads.  ie it doesn't care what thread calls methods as long as
    you don't call more than one concurrently.
    """

    class Error(Exception):
        """Class raised on errors.  The expectation is that the error
        will be displayed by the shell as text so there are no
        specific subclasses as the distinctions between different
        types of errors doesn't matter."""
        pass

    def __init__(self, stdin=None, stdout=None, stderr=None, encoding="utf8", args=None, db=None):
        """Create instance, set defaults and do argument processing."""
        super(Shell, self).__init__()
        # The parameter doc has to be in main class doc as sphinx
        # ignores any described here
        self.exceptions=False
        self.history_file="~/.sqlite_history"
        self._db=None
        self.dbfilename=None
        if db:
            self.db=db, db.filename
        else:
            self.db=None, None
        self.prompt=    "sqlite> "
        self.moreprompt="    ..> "
        self.separator="|"
        self.bail=False
        self.echo=False
        self.timer=False
        self.header=False
        self.nullvalue=""
        self.output=self.output_list
        self._output_table=self._fmt_sql_identifier("table")
        self.widths=[]
        # do we truncate output in list mode?  (explain doesn't, regular does)
        self.truncate=True
        # a stack of previous outputs.
        # turning on explain saves previous, off restores
        self._output_stack=[]
        # other stuff
        self.set_encoding(encoding)
        if stdin is None: stdin=sys.stdin
        if stdout is None: stdout=sys.stdout
        if stderr is None: stderr=sys.stderr
        self.stdin=stdin
        self.stdout=stdout
        self._original_stdout=stdout
        self.stderr=stderr
        # we don't become interactive until the command line args are
        # successfully parsed and acted upon
        self.interactive=None
        # current colouring object
        self.command_colour() # set to default
        self._using_readline=False
        self._input_stack=[]
        self.input_line_number=0
        self.push_input()
        self.push_output()
        self._input_descriptions=[]

        if args:
            try:
                self.process_args(args)
            except:
                if len(self._input_descriptions):
                    self._input_descriptions.append("Processing command line arguments")
                self.handle_exception()
                raise

        if self.interactive is None:
            self.interactive=getattr(self.stdin, "isatty", False) and self.stdin.isatty() and getattr(self.stdout, "isatty", False) and self.stdout.isatty()

    def _ensure_db(self):
        "The database isn't opened until first use.  This function ensures it is now open."
        if not self._db:
            if not self.dbfilename:
                self.dbfilename=":memory:"
            self._db=apsw.Connection(self.dbfilename, flags=apsw.SQLITE_OPEN_URI | apsw.SQLITE_OPEN_READWRITE | apsw.SQLITE_OPEN_CREATE)
        return self._db

    def _set_db(self, newv):
        "Sets the open database (or None) and filename"
        (db, dbfilename)=newv
        if self._db:
            self._db.close()
            self._db=None
        self._db=db
        self.dbfilename=dbfilename

    db=property(_ensure_db, _set_db, None, "The current :class:`Connection`")

    def process_args(self, args):
        """Process command line options specified in args.  It is safe to
        call this multiple times.  We try to be compatible with SQLite
        shell argument parsing.

        :param args: A list of string options.  Do not include the
           program as args[0]

        :returns: A tuple of (databasefilename, initfiles,
           sqlncommands).  This is provided for informational purposes
           only - they have already been acted upon.
           An example use is that the SQLite shell does not enter the
           main interactive loop if any sql/commands were provided.

        The first non-option is the database file name.  Each
        remaining non-option is treated as a complete input (ie it
        isn't joined with others looking for a trailing semi-colon).

        The SQLite shell uses single dash in front of options.  We
        allow both single and double dashes.  When an unrecognized
        argument is encountered then :meth:`process_unknown_args` is
        called.
        """
        # we don't use optparse as we need to use single dashes for
        # options - all hand parsed
        if not args:
            return None, [], []

        # are options still valid?
        options=True
        # have we seen the database name?
        havedbname=False
        # List of init files to read
        inits=[]
        # List of sql/dot commands
        sqls=[]

        while args:
            if not options or not args[0].startswith("-"):
                options=False
                if not havedbname:
                    # grab new database
                    self.db=None, args[0]
                    havedbname=True
                else:
                    sqls.append(args[0])
                args=args[1:]
                continue

            # remove initial single or double dash
            args[0]=args[0][1:]
            if args[0].startswith("-"):
                args[0]=args[0][1:]

            if args[0]=="init":
                if len(args)<2:
                    raise self.Error("You need to specify a filename after -init")
                inits.append(args[1])
                args=args[2:]
                continue

            if args[0]=="header" or args[0]=="noheader":
                self.header=args[0]=="header"
                args=args[1:]
                continue

            if args[0] in ("echo", "bail", "interactive"):
                setattr(self, args[0], True)
                args=args[1:]
                continue

            if args[0]=="batch":
                self.interactive=False
                args=args[1:]
                continue

            if args[0] in ("separator", "nullvalue", "encoding"):
                if len(args)<2:
                    raise self.Error("You need to specify a value after -"+args[0])
                getattr(self, "command_"+args[0])([args[1]])
                args=args[2:]
                continue

            if args[0]=="version":
                self.write(self.stdout, apsw.sqlitelibversion()+"\n")
                # A pretty gnarly thing to do
                sys.exit(0)

            if args[0]=="help":
                self.write(self.stderr, self.usage())
                sys.exit(0)

            if args[0] in ("no-colour", "no-color", "nocolour", "nocolor"):
                self.colour_scheme="off"
                self._out_colour()
                args=args[1:]
                continue
            # only remaining known args are output modes
            if getattr(self, "output_"+args[0], None):
                self.command_mode(args[:1])
                args=args[1:]
                continue

            newargs=self.process_unknown_args(args)
            if newargs is None:
                raise self.Error("Unrecognized argument '"+args[0]+"'")
            args=newargs

        for f in inits:
            self.command_read([f])

        for s in sqls:
            self.process_complete_line(s)

        return self.dbfilename, inits, sqls

    def process_unknown_args(self, args):
        """This is called when :meth:`process_args` encounters an
        argument it doesn't understand.  Override this method if you
        want to be able to understand additional command line
        arguments.

        :param args: A list of the remaining arguments.  The initial
           one will have had the leading dashes removed (eg if it was
           --foo on the command line then args[0] will be "foo"
        :returns: None if you don't recognize the argument either.
           Otherwise return the list of remaining arguments after you
           have processed yours.
        """
        return None

    def usage(self):
        "Returns the usage message.  Make sure it is newline terminated"
        msg="""
Usage: program [OPTIONS] FILENAME [SQL|CMD] [SQL|CMD]...
FILENAME is the name of a SQLite database. A new database is
created if the file does not exist.
OPTIONS include:
   -init filename       read/process named file
   -echo                print commands before execution
   -[no]header          turn headers on or off
   -bail                stop after hitting an error
   -interactive         force interactive I/O
   -batch               force batch I/O
   -column              set output mode to 'column'
   -csv                 set output mode to 'csv'
   -html                set output mode to 'html'
   -line                set output mode to 'line'
   -list                set output mode to 'list'
   -python              set output mode to 'python'
   -separator 'x'       set output field separator (|)
   -nullvalue 'text'    set text string for NULL values
   -version             show SQLite version
   -encoding 'name'     the encoding to use for files
                        opened via .import, .read & .output
   -nocolour            disables colour output to screen
"""
        return msg.lstrip()

    ###
    ### Value formatting routines.  They take a value and return a
    ### text formatting of them.
    ### Mostly used by the various output's
    ### but also by random other pieces of code.
    ###

    _binary_type = eval(("buffer", "bytes") [sys.version_info>=(3,0)])
    _basestring = eval(("basestring", "str") [sys.version_info>=(3,0)])

    # bytes that are ok in C strings - no need for quoting
    _printable=[ord(x) for x in
                "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789~!@#$%^&*()`_-+={}[]:;,.<>/?|"
                ]

    def _fmt_c_string(self, v):
        "Format as a C string including surrounding double quotes"
        if isinstance(v, self._basestring):
            op=['"']
            for c in v:
                if c=="\\":
                    op.append("\\\\")
                elif c=="\r":
                    op.append("\\r")
                elif c=="\n":
                    op.append("\\n")
                elif c=="\t":
                    op.append("\\t")
                elif ord(c) not in self._printable:
                    op.append("\\"+c)
                else:
                    op.append(c)
            op.append('"')
            return "".join(op)
        elif v is None:
            return '"'+self.nullvalue+'"'
        elif isinstance(v, self._binary_type):
            if sys.version_info<(3,0):
                o=lambda x: ord(x)
                fromc=lambda x: x
            else:
                o=lambda x: x
                fromc=lambda x: chr(x)
            res=['"']
            for c in v:
                if o(c) in self._printable:
                    res.append(fromc(c))
                else:
                    res.append("\\x%02X" % (o(c),))
            res.append('"')
            return "".join(res)
        else:
            # number of some kind
            return '"%s"' % (v,)

    def _fmt_html_col(self, v):
        "Format as HTML (mainly escaping &/</>)"
        return self._fmt_text_col(v).\
           replace("&", "&amp;"). \
           replace(">", "&gt;"). \
           replace("<", "&lt;"). \
           replace("'", "&#39;"). \
           replace('"', "&quot;")

    def _fmt_json_value(self, v):
        "Format a value."
        if isinstance(v, self._basestring):
            # we assume utf8 so only some characters need to be escaped
            op=['"']
            for c in v:
                if c=="\\":
                    op.append("\\\\")
                elif c=="\r":
                    op.append("\\r")
                elif c=="\n":
                    op.append("\\n")
                elif c=="\t":
                    op.append("\\t")
                elif c=="/": # yes you have to escape forward slash for some reason
                    op.append("\\/")
                elif c=='"':
                    op.append("\\"+c)
                elif c=="\\b":
                    op.append("\\b")
                elif c=="\\f":
                    op.append("\\f")
                else:
                    # It isn't clear when \u sequences *must* be used.
                    # Assuming not needed due to utf8 output which
                    # corresponds to what rfc4627 implies.
                    op.append(c)
            op.append('"')
            return "".join(op)
        elif v is None:
            return 'null'
        elif isinstance(v, self._binary_type):
            if sys.version_info<(3,0):
                o=base64.encodestring(v)
            else:
                o=base64.encodebytes(v).decode("ascii")
            if o[-1]=="\n":
                o=o[:-1]
            return '"'+o+'"'
        else:
            # number of some kind
            return '%s' % (v,)

    def _fmt_python(self, v):
        "Format as python literal"
        if v is None:
            return "None"
        elif isinstance(v, self._basestring):
            return repr(v)
        elif isinstance(v, self._binary_type):
            if sys.version_info<(3,0):
                res=["buffer(\""]
                for i in v:
                    if ord(i) in self._printable:
                        res.append(i)
                    else:
                        res.append("\\x%02X" % (ord(i),))
                res.append("\")")
                return "".join(res)
            else:
                res=['b"']
                for i in v:
                    if i in self._printable:
                        res.append(chr(i))
                    else:
                        res.append("\\x%02X" % (i,))
                res.append('"')
                return "".join(res)
        else:
            return "%s" % (v,)

    def _fmt_sql_identifier(self, v):
        "Return the identifier quoted in SQL syntax if needed (eg table and column names)"
        if not len(v): # yes sqlite does allow zero length identifiers
            return '""'
        nonalnum=re.sub("[A-Za-z_0-9]+", "", v)
        if len(nonalnum)==0:
            if v.upper() not in self._sqlite_reserved:
                # Ok providing it doesn't start with a digit
                if v[0] not in "0123456789":
                    return v
        # double quote it unless there are any double quotes in it
        if '"' in nonalnum:
            return "[%s]" % (v,)
        return '"%s"' % (v,)

    def _fmt_text_col(self, v):
        "Regular text formatting"
        if v is None:
            return self.nullvalue
        elif isinstance(v, self._basestring):
            return v
        elif isinstance(v, self._binary_type):
            # sqlite gives back raw bytes!
            return "<Binary data>"
        else:
            return "%s" % (v,)

    ###
    ### The various output routines.  They are always called with the
    ### header irrespective of the setting allowing for some per query
    ### setup.  (see output_column for example).  The doc strings are
    ### used to generate help.
    ###

    def output_column(self, header, line):
        """
        Items left aligned in space padded columns.  They are
        truncated if they do not fit.
        If the width hasn't been specified for a column then 10 is
        used unless the column name (header) is longer in which case
        that width is used.  Use the .width command to change column
        sizes.
        """
        # as an optimization we calculate self._actualwidths which is
        # reset for each query
        if header:
            def gw(n):
                if n

    def output_html(self, header, line):
        out=["<TR>"]
        for l in line:
            out.append(("<TD>","<TH>")[header])
            out.append(l)
            out.append(("</TD>\n","</TH>\n")[header])
        out.append("</TR>\n")
        self.write(self.stdout, "".join(out))

    def output_insert(self, header, line):
        """
        Lines as SQL insert statements.  The table name is "table"
        unless you specified a different one as the second parameter
        to the .mode command.
        """
        if header:
            return
        fmt=lambda x: self.colour.colour_value(x, apsw.format_sql_value(x))
        out="INSERT INTO "+self._output_table+" VALUES("+",".join([fmt(l) for l in line])+");\n"
        self.write(self.stdout, out)

    def output_json(self, header, line):
        """
        Each line as a JSON object with a trailing comma.  Blobs are
        output as base64 encoded strings.  You should be using UTF8
        output encoding.
        """
        if header:
            self._output_json_cols=line
            return
        fmt=lambda x: self.colour.colour_value(x, self._fmt_json_value(x))
        out=["%s: %s" % (self._fmt_json_value(k), fmt(line[i])) for i,k in enumerate(self._output_json_cols)]
        self.write(self.stdout, "{ "+", ".join(out)+"},\n")

    def output_line(self, header, line):
        """
        One value per line in the form 'column = value' with a blank
        line between rows.
        """
        if header:
            w=5
            for l in line:
                if len(l)>w:
                    w=len(l)
            self._line_info=(w, line)
            return
        fmt=lambda x: self.colour.colour_value(x, self._fmt_text_col(x))
        w=self._line_info[0]
        for i in range(len(line)):
            self.write(self.stdout, "%*s = %s\n" % (w, self._line_info[1][i], fmt(line[i])))
        self.write(self.stdout, "\n")

    def output_list(self, header, line):
        "All items on one line with separator"
        if header:
            if not self.header:
                return
            c=self.colour
            fmt=lambda x: c.header+x+c.header_
        else:
            fmt=lambda x: self.colour.colour_value(x, self._fmt_text_col(x))
        self.write(self.stdout, self.separator.join([fmt(x) for x in line])+"\n")

    def output_python(self, header, line):
        "Tuples in Python source form for each row"
        if header:
            if not self.header:
                return
            c=self.colour
            fmt=lambda x: c.header+self._fmt_python(x)+c.header_
        else:
            fmt=lambda x: self.colour.colour_value(x, self._fmt_python(x))
        self.write(self.stdout, '('+", ".join([fmt(l) for l in line])+"),\n")

    def output_tcl(self, header, line):
        "Outputs TCL/C style strings using current separator"
        # In theory you could paste the output into your source ...
        if header:
            if not self.header:
                return
            c=self.colour
            fmt=lambda x: c.header+self._fmt_c_string(x)+c.header_
        else:
            fmt=lambda x: self.colour.colour_value(x, self._fmt_c_string(x))
        self.write(self.stdout, self.separator.join([fmt(l) for l in line])+"\n")

    def _output_summary(self, summary):
        # internal routine to output a summary line or two
        self.write(self.stdout, self.colour.summary+summary+self.colour.summary_)

    ###
    ### Various routines
    ###

    def cmdloop(self, intro=None):
        """Runs the main interactive command loop.

        :param intro: Initial text banner to display instead of the
           default.  Make sure you newline terminate it.
""" if intro is None: intro=""" SQLite version %s (APSW %s) Enter ".help" for instructions Enter SQL statements terminated with a ";" """ % (apsw.sqlitelibversion(), apsw.apswversion()) intro=intro.lstrip() if self.interactive and intro: if sys.version_info<(3,0): intro=unicode(intro) c=self.colour self.write(self.stdout, c.intro+intro+c.intro_) using_readline=False try: if self.interactive and self.stdin is sys.stdin: import readline old_completer=readline.get_completer() readline.set_completer(self.complete) readline.parse_and_bind("tab: complete") using_readline=True try: readline.read_history_file(os.path.expanduser(self.history_file)) except: # We only expect IOError here but if the history # file does not exist and this code has been # compiled into the module it is possible to get # an IOError that doesn't match the IOError from # Python parse time resulting in an IOError # exception being raised. Consequently we just # catch all exceptions. pass except ImportError: pass try: while True: self._input_descriptions=[] if using_readline: # we drop completion cache because it contains # table and column names which could have changed # with last executed SQL self._completion_cache=None self._using_readline=True try: command=self.getcompleteline() if command is None: # EOF return self.process_complete_line(command) except: self._append_input_description() try: self.handle_exception() except UnicodeDecodeError: self.handle_exception() finally: if using_readline: readline.set_completer(old_completer) readline.set_history_length(256) readline.write_history_file(os.path.expanduser(self.history_file)) def handle_exception(self): """Handles the current exception, printing a message to stderr as appropriate. 
        It will reraise the exception if necessary (eg if bail is true)"""
        eclass,eval,etb=sys.exc_info() # py2&3 compatible way of doing this
        if isinstance(eval, SystemExit):
            eval._handle_exception_saw_this=True
            raise
        self._out_colour()
        self.write(self.stderr, self.colour.error)
        if isinstance(eval, KeyboardInterrupt):
            self.handle_interrupt()
            text="Interrupted"
        else:
            text=str(eval)
        if not text.endswith("\n"):
            text=text+"\n"
        if len(self._input_descriptions):
            for i in range(len(self._input_descriptions)):
                if i==0:
                    pref="At "
                else:
                    pref=" "*i+"From "
                self.write(self.stderr, pref+self._input_descriptions[i]+"\n")
        self.write(self.stderr, text)
        if self.exceptions:
            stack=[]
            while etb:
                stack.append(etb.tb_frame)
                etb = etb.tb_next
            for frame in stack:
                self.write(self.stderr, "\nFrame %s in %s at line %d\n" %
                           (frame.f_code.co_name, frame.f_code.co_filename, frame.f_lineno))
                vars=list(frame.f_locals.items())
                vars.sort()
                for k,v in vars:
                    try:
                        v=repr(v)[:80]
                    except:
                        v=""
                    self.write(self.stderr, "%10s = %s\n" % (k,v))
            self.write(self.stderr, "\n%s: %s\n" % (eclass, repr(eval)))
        self.write(self.stderr, self.colour.error_)
        eval._handle_exception_saw_this=True
        if self.bail:
            raise

    def process_sql(self, sql, bindings=None, internal=False, summary=None):
        """Processes SQL text consisting of one or more statements

        :param sql: SQL to execute
        :param bindings: bindings for the *sql*
        :param internal: If True then this is an internal execution
          (eg the .tables or .database command).  When executing
          internal sql timings are not shown nor is the SQL echoed.
        :param summary: If not None then should be a tuple of two
          items.  If the ``sql`` returns any data then the first item
          is printed before the first row, and the second item is
          printed after the last row.  An example usage is the .find
          command which shows table names.
""" cur=self.db.cursor() # we need to know when each new statement is executed state={'newsql': True, 'timing': None} def et(cur, sql, bindings): state['newsql']=True # if time reporting, do so now if not internal and self.timer: if state['timing']: self.display_timing(state['timing'], self.get_resource_usage()) # print statement if echo is on if not internal and self.echo: # ? should we strip leading and trailing whitespace? backslash quote stuff? if bindings: self.write(self.stderr, "%s [%s]\n" % (sql, bindings)) else: self.write(self.stderr, sql+"\n") # save resource from begining of command (ie don't include echo time above) if not internal and self.timer: state['timing']=self.get_resource_usage() return True cur.setexectrace(et) # processing loop try: for row in cur.execute(sql, bindings): if state['newsql']: # summary line? if summary: self._output_summary(summary[0]) # output a header always cols=[h for h,d in cur.getdescription()] self.output(True, cols) state['newsql']=False self.output(False, row) if not state['newsql'] and summary: self._output_summary(summary[1]) except: # If echo is on and the sql to execute is a syntax error # then the exec tracer won't have seen it so it won't be # printed and the user will be wondering exactly what sql # had the error. We look in the traceback and deduce if # the error was happening in a prepare or not. Also we # need to ignore the case where SQLITE_SCHEMA happened and # a reprepare is being done since the exec tracer will # have been called in that situation. if not internal and self.echo: tb=sys.exc_info()[2] last=None while tb: last=tb.tb_frame tb=tb.tb_next if last and last.f_code.co_name=="sqlite3_prepare" \ and last.f_code.co_filename.endswith("statementcache.c") \ and "sql" in last.f_locals: self.write(self.stderr, last.f_locals["sql"]+"\n") raise if not internal and self.timer: self.display_timing(state['timing'], self.get_resource_usage()) def process_command(self, cmd): """Processes a dot command. 
        It is split into parts using the shlex.split function which is
        roughly the same method used by Unix/POSIX shells.
        """
        if self.echo:
            self.write(self.stderr, cmd+"\n")
        # broken with unicode on Python 2!!!
        if sys.version_info<(3,0):
            cmd=cmd.encode("utf8")
            cmd=[c.decode("utf8") for c in shlex.split(cmd)]
        else:
            cmd=shlex.split(cmd)
        assert cmd[0][0]=="."
        cmd[0]=cmd[0][1:]
        fn=getattr(self, "command_"+cmd[0], None)
        if not fn:
            raise self.Error("Unknown command \"%s\".  Enter \".help\" for help" % (cmd[0],))
        res=fn(cmd[1:])

    ###
    ### Commands start here
    ###

    def _boolean_command(self, name, cmd):
        "Parse and verify boolean parameter"
        if len(cmd)!=1 or cmd[0].lower() not in ("on", "off"):
            raise self.Error(name+" expected ON or OFF")
        return cmd[0].lower()=="on"

    # Note that doc text is used for generating help output.

    def command_backup(self, cmd):
        """backup ?DB? FILE: Backup DB (default "main") to FILE

        Copies the contents of the current database to FILE
        overwriting whatever was in FILE.  If you have attached
        databases then you can specify their name instead of the
        default of "main".

        The backup is done at the page level - SQLite copies the pages
        as is.  There is no round trip through SQL code.
        """
        dbname="main"
        if len(cmd)==1:
            fname=cmd[0]
        elif len(cmd)==2:
            dbname=cmd[0]
            fname=cmd[1]
        else:
            raise self.Error("Backup takes one or two parameters")
        out=apsw.Connection(fname)
        b=out.backup("main", self.db, dbname)
        try:
            while not b.done:
                b.step()
        finally:
            b.finish()
            out.close()

    def command_bail(self, cmd):
        """bail ON|OFF: Stop after hitting an error (default OFF)

        If an error is encountered while processing commands or SQL
        then exit.  (Note this is different than SQLite shell which
        only exits for errors in SQL.)
        """
        self.bail=self._boolean_command("bail", cmd)

    def command_colour(self, cmd=[]):
        """colour SCHEME: Selects a colour scheme

        Residents of both countries that have not adopted the metric
        system may also spell this command without a 'u'.
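command_backup drives SQLite's online backup API, which copies pages directly between databases with no round trip through SQL. The stdlib sqlite3 module exposes the same underlying API, so an analogous sketch (using sqlite3 rather than apsw so it runs without APSW installed; Python 3.7+):

```python
# Page-level backup using the same underlying SQLite backup API via the
# stdlib sqlite3 module (apsw's Connection.backup is the equivalent).
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t(x)")
src.execute("INSERT INTO t VALUES(1)")
src.commit()

dest = sqlite3.connect(":memory:")
# copies pages directly - no round trip through SQL statements
src.backup(dest)

print(dest.execute("SELECT x FROM t").fetchall())
```

apsw's version steps the backup in chunks (the `while not b.done: b.step()` loop above), which lets other connections keep using the source database between steps.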
If using a colour terminal in interactive mode then output is automatically coloured to make it more readable. Use 'off' to turn off colour, and no name or 'default' for the default. """ if len(cmd)>1: raise self.Error("Too many colour schemes") c=cmd and cmd[0] or "default" if c not in self._colours: raise self.Error("No such colour scheme: "+c) self.colour_scheme=c self._out_colour() command_color=command_colour def command_databases(self, cmd): """databases: Lists names and files of attached databases """ if len(cmd): raise self.Error("databases command doesn't take any parameters") self.push_output() self.header=True self.output=self.output_column self.truncate=False self.widths=[3,15,58] try: self.process_sql("pragma database_list", internal=True) finally: self.pop_output() def command_dump(self, cmd): """dump ?TABLE? [TABLE...]: Dumps all or specified tables in SQL text format The table name is treated as like pattern so you can use % as a wildcard. You can use dump to make a text based backup of the database. It is also useful for comparing differences or making the data available to other databases. Indices and triggers for the table(s) are also dumped. Finally views matching the table pattern name are dumped (it isn't possible to work out which views access which table and views can access multiple tables anyway). Note that if you are dumping virtual tables such as used by the FTS3 module then they may use other tables to store information. For example if you create a FTS3 table named *recipes* then it also creates *recipes_content*, *recipes_segdir* etc. Consequently to dump this example correctly use:: .dump recipes recipes_% If the database is empty or no tables/views match then there is no output. """ # Simple tables are easy to dump. More complicated is dealing # with virtual tables, foreign keys etc. 
# Lock the database while doing the dump so nothing changes # under our feet self.process_sql("BEGIN IMMEDIATE", internal=True) # Used in comment() - see issue 142 outputstrtype=str if sys.version_info<(3,0): outputstrtype=unicode # Python 2.3 can end up with nonsense like "en_us" so we fall # back to ascii in that case outputstrencoding=getattr(self.stdout, "encoding", "ascii") try: codecs.lookup(outputstrencoding) except: outputstrencoding="ascii" def unicodify(s): if not isinstance(s, outputstrtype): # See issue 142 - it may not be in an expected encoding return s.decode(outputstrencoding, "replace") return s try: # first pass -see if virtual tables or foreign keys are in # use. If they are we emit pragmas to deal with them, but # prefer not to emit them v={"virtuals": False, "foreigns": False} def check(name, sql): if name.lower().startswith("sqlite_"): return False sql=sql.lower() if re.match(r"^\s*create\s+virtual\s+.*", sql): v["virtuals"]=True # pragma table_info doesn't tell us if foreign keys # are involved so we guess if any the various strings are # in the sql somewhere if re.match(r".*\b(foreign\s*key|references)\b.*", sql): v["foreigns"]=True return True if len(cmd)==0: cmd=["%"] tables=[] for pattern in cmd: for name,sql in self.db.cursor().execute("SELECT name,sql FROM sqlite_master " "WHERE sql NOT NULL AND type IN ('table','view') " "AND tbl_name LIKE ?1", (pattern,)): if check(name, sql) and name not in tables: tables.append(name) if not tables: return # will we need to analyze anything later? 
analyze_needed=[] for stat in self.db.cursor().execute("select name from sqlite_master where sql not null and type='table' and tbl_name like 'sqlite_stat%'"): for name in tables: if len(self.db.cursor().execute("select * from "+self._fmt_sql_identifier(stat[0])+" WHERE tbl=?", (name,)).fetchall()): if name not in analyze_needed: analyze_needed.append(name) analyze_needed.sort() def blank(): self.write(self.stdout, "\n") def comment(s): s=unicodify(s) self.write(self.stdout, textwrap.fill(s, 78, initial_indent="-- ", subsequent_indent="-- ")+"\n") pats=", ".join([(x,"(All)")[x=="%"] for x in cmd]) comment("SQLite dump (by APSW %s)" % (apsw.apswversion(),)) comment("SQLite version " + apsw.sqlitelibversion()) comment("Date: " +unicodify(time.strftime("%c"))) comment("Tables like: "+pats) comment("Database: "+self.db.filename) try: import getpass import socket comment("User: %s @ %s" % (unicodify(getpass.getuser()), unicodify(socket.gethostname()))) except ImportError: pass blank() comment("The values of various per-database settings") comment("PRAGMA page_size="+str(self.db.cursor().execute("pragma page_size").fetchall()[0][0])+";\n") comment("PRAGMA encoding='"+self.db.cursor().execute("pragma encoding").fetchall()[0][0]+"';\n") vac={0: "NONE", 1: "FULL", 2: "INCREMENTAL"} vacvalue=self.db.cursor().execute("pragma auto_vacuum").fetchall()[0][0] comment("PRAGMA auto_vacuum="+vac.get(vacvalue, str(vacvalue))+";\n") comment("PRAGMA max_page_count="+str(self.db.cursor().execute("pragma max_page_count").fetchall()[0][0])+";\n") blank() # different python versions have different requirements # about specifying cmp to sort routine so we use this # portable workaround with a decorated list instead dectables=[(x.lower(), x) for x in tables] dectables.sort() tables=[y for x,y in dectables] virtuals=v["virtuals"] foreigns=v["foreigns"] if virtuals: comment("This pragma is needed to restore virtual tables") self.write(self.stdout, "PRAGMA writable_schema=ON;\n") if foreigns: 
comment("This pragma turns off checking of foreign keys " "as tables would be inconsistent while restoring. It was introduced " "in SQLite 3.6.19.") self.write(self.stdout, "PRAGMA foreign_keys=OFF;\n") if virtuals or foreigns: blank() self.write(self.stdout, "BEGIN TRANSACTION;\n") blank() def sqldef(s): # return formatted sql watching out for embedded # comments at the end forcing trailing ; onto next # line https://sqlite.org/src/info/c04a8b8a4f if "--" in s.split("\n")[-1]: nl="\n" else: nl="" return s+nl+";\n" # do the table dumping loops oldtable=self._output_table try: self.push_output() self.output=self.output_insert # Dump the table for table in tables: for sql in self.db.cursor().execute("SELECT sql FROM sqlite_master WHERE name=?1 AND type='table'", (table,)): comment("Table "+table) # Special treatment for virtual tables - they # get called back on drops and creates and # could thwart us so we have to manipulate # sqlite_master directly if sql[0].lower().split()[:3]==["create", "virtual", "table"]: self.write(self.stdout, "DELETE FROM sqlite_master WHERE name="+apsw.format_sql_value(table)+" AND type='table';\n") self.write(self.stdout, "INSERT INTO sqlite_master(type,name,tbl_name,rootpage,sql) VALUES('table',%s,%s,0,%s);\n" % (apsw.format_sql_value(table), apsw.format_sql_value(table), apsw.format_sql_value(sql[0]))) else: self.write(self.stdout, "DROP TABLE IF EXISTS "+self._fmt_sql_identifier(table)+";\n") self.write(self.stdout, sqldef(sql[0])) self._output_table=self._fmt_sql_identifier(table) self.process_sql("select * from "+self._fmt_sql_identifier(table), internal=True) # Now any indices or triggers first=True for name,sql in self.db.cursor().execute("SELECT name,sql FROM sqlite_master " "WHERE sql NOT NULL AND type IN ('index', 'trigger') " "AND tbl_name=?1 AND name NOT LIKE 'sqlite_%' " "ORDER BY lower(name)", (table,)): if first: comment("Triggers and indices on "+table) first=False self.write(self.stdout, sqldef(sql)) blank() # Views done 
last. They have to be done in the same order as they are in sqlite_master # as they could refer to each other first=True for name,sql in self.db.cursor().execute("SELECT name,sql FROM sqlite_master " "WHERE sql NOT NULL AND type='view' " "AND name IN ( "+",".join([apsw.format_sql_value(i) for i in tables])+ ") ORDER BY _ROWID_"): if first: comment("Views") first=False self.write(self.stdout, "DROP VIEW IF EXISTS %s;\n" % (self._fmt_sql_identifier(name),)) self.write(self.stdout, sqldef(sql)) if not first: blank() # sqlite sequence # does it exist if len(self.db.cursor().execute("select * from sqlite_master where name='sqlite_sequence'").fetchall()): first=True for t in tables: v=self.db.cursor().execute("select seq from main.sqlite_sequence where name=?1", (t,)).fetchall() if len(v): assert len(v)==1 if first: comment("For primary key autoincrements the next id " "to use is stored in sqlite_sequence") first=False self.write(self.stdout, 'DELETE FROM main.sqlite_sequence WHERE name=%s;\n' % (apsw.format_sql_value(t),)) self.write(self.stdout, 'INSERT INTO main.sqlite_sequence VALUES (%s, %s);\n' % (apsw.format_sql_value(t), v[0][0])) if not first: blank() finally: self.pop_output() self._output_table=oldtable # analyze if analyze_needed: comment("You had used the analyze command on these tables before. Rerun for this new data.") for n in analyze_needed: self.write(self.stdout, "ANALYZE "+self._fmt_sql_identifier(n)+";\n") blank() # user version pragma uv=self.db.cursor().execute("pragma user_version").fetchall()[0][0] if uv: comment("Your database may need this. It is sometimes used to keep track of the schema version (eg Firefox does this).") comment("pragma user_version=%d;" % (uv,)) blank() # Save it all self.write(self.stdout, "COMMIT TRANSACTION;\n") # cleanup pragmas if foreigns: blank() comment("Restoring foreign key checking back on. 
Note that SQLite 3.6.19 is off by default") self.write(self.stdout, "PRAGMA foreign_keys=ON;\n") if virtuals: blank() comment("Restoring writable schema back to default") self.write(self.stdout, "PRAGMA writable_schema=OFF;\n") # schema reread blank() comment("We need to force SQLite to reread the schema because otherwise it doesn't know that " "the virtual tables we inserted directly into sqlite_master exist. See " "last comments of https://sqlite.org/cvstrac/tktview?tn=3425") self.write(self.stdout, "BEGIN;\nCREATE TABLE no_such_table(x,y,z);\nROLLBACK;\n") finally: self.process_sql("END", internal=True) def command_echo(self, cmd): """echo ON|OFF: If ON then each SQL statement or command is printed before execution (default OFF) The SQL statement or command is sent to error output so that it is not intermingled with regular output. """ self.echo=self._boolean_command("echo", cmd) def set_encoding(self, enc): """Saves *enc* as the default encoding, after verifying that it is valid. You can also include :error to specify error handling - eg 'cp437:replace' Raises an exception on invalid encoding or error """ enc=enc.split(":", 1) if len(enc)>1: enc, errors=enc else: enc=enc[0] errors=None try: codecs.lookup(enc) except LookupError: raise self.Error("No known encoding '%s'" % (enc,)) try: if errors is not None: codecs.lookup_error(errors) except LookupError: raise self.Error("No known codec error handler '%s'" % (errors,)) self.encoding=enc, errors def command_encoding(self, cmd): """encoding ENCODING: Set the encoding used for new files opened via .output and imports SQLite and APSW work internally using Unicode and characters. Files however are a sequence of bytes. An encoding describes how to convert between bytes and characters. The default encoding is utf8 and that is generally the best value to use when other programs give you a choice. You can also specify an error handler. 
For example 'cp437:replace' will use code page 437 and any Unicode codepoints not present in cp437 will be replaced (typically with something like a question mark). Other error handlers include 'ignore', 'strict' (default) and 'xmlcharrefreplace'. For the default input/output/error streams on startup the shell defers to Python's detection of encoding. For example on Windows it asks what code page is in use and on Unix it looks at the LC_CTYPE environment variable. You can set the PYTHONIOENCODING environment variable to override this detection. This command affects files opened after setting the encoding as well as imports. See the online APSW documentation for more details. """ if len(cmd)!=1: raise self.Error("Encoding takes one argument") self.set_encoding(cmd[0]) def command_exceptions(self, cmd): """exceptions ON|OFF: If ON then detailed tracebacks are shown on exceptions (default OFF) Normally when an exception occurs the error string only is displayed. However it is sometimes useful to get a full traceback. An example would be when you are developing virtual tables and using the shell to exercise them. In addition to displaying each stack frame, the local variables within each frame are also displayed. """ self.exceptions=self._boolean_command("exceptions", cmd) def command_exit(self, cmd): """exit:Exit this program""" if len(cmd): raise self.Error("Exit doesn't take any parameters") sys.exit(0) def command_quit(self, cmd): """quit:Exit this program""" if len(cmd): raise self.Error("Quit doesn't take any parameters") sys.exit(0) def command_explain(self, cmd): """explain ON|OFF: Set output mode suitable for explain (default OFF) Explain shows the underlying SQLite virtual machine code for a statement. You need to prefix the SQL with explain. For example: explain select * from table; This output mode formats the explain output nicely. If you do '.explain OFF' then the output mode and settings in place when you did '.explain ON' are restored. 
""" if len(cmd)==0 or self._boolean_command("explain", cmd): self.push_output() self.header=True self.widths=[4,13,4,4,4,13,2,13] self.truncate=False self.output=self.output_column else: self.pop_output() def command_find(self, cmd): """find what ?TABLE?: Searches all columns of all tables for a value The find command helps you locate data across your database for example to find a string or any references to an id. You can specify a like pattern to limit the search to a subset of tables (eg specifying 'CUSTOMER%' for all tables beginning with CUSTOMER). The what value will be treated as a string and/or integer if possible. If what contains % or _ then it is also treated as a like pattern. This command will take a long time to execute needing to read all of the relevant tables. """ if len(cmd)<1 or len(cmd)>2: raise self.Error("At least one argument required and at most two accepted") tablefilter="%" if len(cmd)==2: tablefilter=cmd[1] querytemplate=[] queryparams=[] def qp(): # binding for current queryparams return "?"+str(len(queryparams)) s=cmd[0] if '%' in s or '_' in s: queryparams.append(s) querytemplate.append("%s LIKE "+qp()) queryparams.append(s) querytemplate.append("%s = "+qp()) try: i=int(s) queryparams.append(i) querytemplate.append("%s = "+qp()) except ValueError: pass querytemplate=" OR ".join(querytemplate) for (table,) in self.db.cursor().execute("SELECT name FROM sqlite_master WHERE type='table' AND name LIKE ?1", (tablefilter,)): t=self._fmt_sql_identifier(table) query="SELECT * from %s WHERE " % (t,) colq=[] for _,column,_,_,_,_ in self.db.cursor().execute("pragma table_info(%s)" % (t,)): colq.append(querytemplate % ((self._fmt_sql_identifier(column),)*len(queryparams))) query=query+" OR ".join(colq) self.process_sql(query, queryparams, internal=True, summary=("Table "+table+"\n", "\n")) def command_header(self, cmd): """header(s) ON|OFF: Display the column names in output (default OFF) """ self.header=self._boolean_command("header", cmd) 
command_headers=command_header _help_info=None def command_help(self, cmd): """help ?COMMAND?: Shows list of commands and their usage. If COMMAND is specified then shows detail about that COMMAND. ('.help all' will show detailed help about all commands.) """ if not self._help_info: # buildup help database self._help_info={} for c in dir(self): if not c.startswith("command_"): continue # help is 3 parts # - the syntax string (eg backup ?dbname? filename) # - the one liner description (eg saves database to filename) # - the multi-liner detailed description # We grab this from the doc string for the function in the form # syntax: one liner\nmulti\nliner d=getattr(self, c).__doc__ assert d, c+" command must have documentation" c=c[len("command_"):] if c in ("headers", "color"): continue while d[0]=="\n": d=d[1:] parts=d.split("\n", 1) firstline=parts[0].strip().split(":", 1) assert len(firstline)==2, c+" command must have usage: description doc" if len(parts)==1 or len(parts[1].strip())==0: # work around textwrap bug multi="" else: multi=textwrap.dedent(parts[1]) if c=="mode": if not self._output_modes: self._cache_output_modes() firstline[1]=firstline[1]+" "+" ".join(self._output_modes) multi=multi+"\n\n"+"\n\n".join(self._output_modes_detail) if c=="colour": colours=list(self._colours.keys()) colours.sort() firstline[1]=firstline[1]+" from "+", ".join(colours) if len(multi.strip())==0: # All whitespace multi=None else: multi=multi.strip("\n") # we need to keep \n\n as a newline but turn all others into spaces multi=multi.replace("\n\n", "\x00") multi=multi.replace("\n", " ") multi=multi.replace("\x00", "\n\n") multi=multi.split("\n\n") self._help_info[c]=('.'+firstline[0].strip(), firstline[1].strip(), multi) self.write(self.stderr, "\n") tw=self._terminal_width() if tw<32: tw=32 if len(cmd)==0: commands=list(self._help_info.keys()) commands.sort() w=0 for command in commands: if len(self._help_info[command][0])>w: w=len(self._help_info[command][0]) out=[] for 
command in commands: hi=self._help_info[command] # usage string out.append(hi[0]) # space padding (including 2 for between columns) out.append(" "*(2+w-len(hi[0]))) # usage message wrapped if need be out.append(("\n"+" "*(2+w)).join(textwrap.wrap(hi[1], tw-w-2))) # newline out.append("\n") self.write(self.stderr, "".join(out)) else: if cmd[0]=="all": cmd=list(self._help_info.keys()) cmd.sort() w=0 for command in self._help_info: if len(self._help_info[command][0])>w: w=len(self._help_info[command][0]) for command in cmd: if command=="headers": command="header" if command not in self._help_info: raise self.Error("No such command \"%s\"" % (command,)) out=[] hi=self._help_info[command] # usage string out.append(hi[0]) # space padding (2) out.append(" "*(2+w-len(hi[0]))) # usage message wrapped if need be out.append(("\n"+" "*(2+w)).join(textwrap.wrap(hi[1], tw-w-2))+"\n") if hi[2]: # newlines out.append("\n") # detailed message for i,para in enumerate(hi[2]): out.append(textwrap.fill(para, tw)+"\n") if i=(3,0): thefile=codecs.open(filename, "r", self.encoding[0]) for line in csv.reader(thefile, **dialect.copy()): yield line thefile.close() return ### ### csv module is not good at unicode so we have to ### indirect unless utf8 is in use ### if self.encoding[0].lower()=="utf8": # no need for tempfile thefile=open(filename, "rb") else: import tempfile thefile=tempfile.TemporaryFile(prefix="apsw_import") thefile.write(codecs.open(filename, "r", self.encoding[0]).read().encode("utf8")) # move back to beginning thefile.seek(0,0) # Ensure all values are utf8 not unicode for k,v in dialect.items(): if isinstance(v, unicode): dialect[k]=v.encode("utf8") for line in csv.reader(thefile, **dialect): # back to unicode again yield [x.decode("utf8") for x in line] thefile.close() def command_autoimport(self, cmd): """autoimport FILENAME ?TABLE?: Imports filename creating a table and automatically working out separators and data types (alternative to .import command) The import 
command requires that you precisely pre-setup the table and schema, and set the data separators (eg commas or tabs). In many cases this information can be automatically deduced from the file contents which is what this command does. There must be at least two columns and two rows. If the table is not specified then the basename of the file will be used. Additionally the type of the contents of each column is also deduced - for example if it is a number or date. Empty values are turned into nulls. Dates are normalized into YYYY-MM-DD format and DateTime are normalized into ISO8601 format to allow easy sorting and searching. 4 digit years must be used to detect dates. US (swapped day and month) versus rest of the world is also detected providing there is at least one value that resolves the ambiguity. Care is taken to ensure that columns looking like numbers are only treated as numbers if they do not have unnecessary leading zeroes or plus signs. This is to avoid treating phone numbers and similar number like strings as integers. This command can take quite some time on large files as they are effectively imported twice. The first time is to determine the format and the types for each column while the second pass actually imports the data. 
""" if len(cmd)<1 or len(cmd)>2: raise self.Error("Expected one or two parameters") if not os.path.exists(cmd[0]): raise self.Error("File \"%s\" does not exist" % (cmd[0],)) if len(cmd)==2: tablename=cmd[1] else: tablename=None try: final=None c=self.db.cursor() c.execute("BEGIN IMMEDIATE") final="ROLLBACK" if not tablename: tablename=os.path.splitext(os.path.basename(cmd[0]))[0] if c.execute("pragma table_info(%s)" % (self._fmt_sql_identifier(tablename),)).fetchall(): raise self.Error("Table \"%s\" already exists" % (tablename,)) # The types we support deducing def DateUS(v): # US formatted date with wrong ordering of day and month return DateWorld(v, switchdm=True) def DateWorld(v, switchdm=False): # Sensibly formatted date as used anywhere else in the world y,m,d=self._getdate(v) if switchdm: m,d=d,m if m<1 or m>12 or d<1 or d>31: raise ValueError return "%d-%02d-%02d" % (y,m,d) def DateTimeUS(v): # US date and time return DateTimeWorld(v, switchdm=True) def DateTimeWorld(v, switchdm=False): # Sensible date and time y,m,d,h,M,s=self._getdatetime(v) if switchdm: m,d=d,m if m<1 or m>12 or d<1 or d>31 or h<0 or h>23 or M<0 or M>59 or s<0 or s>65: raise ValueError return "%d-%02d-%02dT%02d:%02d:%02d" % (y,m,d,h,M,s) def Number(v): # we really don't want phone numbers etc to match # Python's float & int constructors allow whitespace which we don't if re.search(r"\s", v): raise ValueError if v=="0": return 0 if v[0]=="+": # idd prefix raise ValueError if re.match("^[0-9]+$", v): if v[0]=="0": raise ValueError # also a phone number return int(v) if v[0]=="0" and not v.startswith("0."): # deceptive not a number raise ValueError return float(v) # Work out the file format formats=[ {"dialect": "excel"}, {"dialect": "excel-tab"}] seps=["|", ";", ":"] if self.separator not in seps: seps.append(self.separator) for sep in seps: formats.append( {"quoting": csv.QUOTE_NONE, "delimiter": sep, "doublequote": False, "quotechar": "\x00"} ) possibles=[] errors=[] encodingissue=False 
# format is copy() on every use. This appears bizarre and # unnecessary. However Python 2.3 and 2.4 somehow manage # to empty it if not copied. for format in formats: ncols=-1 lines=0 try: for line in self._csvin_wrapper(cmd[0], format.copy()): if lines==0: lines=1 ncols=len(line) # data type guess setup datas=[] for i in range(ncols): datas.append([DateUS, DateWorld, DateTimeUS, DateTimeWorld, Number]) allblanks=[True]*ncols continue if len(line)!=ncols: raise ValueError("Expected %d columns - got %d" % (ncols, len(line))) lines+=1 for i in range(ncols): if not line[i]: continue allblanks[i]=False if not datas[i]: continue # remove datas that give ValueError d=[] for dd in datas[i]: try: dd(line[i]) d.append(dd) except ValueError: pass datas[i]=d if ncols>1 and lines>1: # if a particular column was allblank then clear datas for it for i in range(ncols): if allblanks[i]: datas[i]=[] possibles.append((format.copy(), ncols, lines, datas)) except UnicodeDecodeError: encodingissue=True except: s=str(sys.exc_info()[1]) if s not in errors: errors.append(s) if len(possibles)==0: if encodingissue: raise self.Error("The file is probably not in the current encoding \"%s\" and didn't match a known file format" % (self.encoding[0],)) v="File doesn't appear to match a known type." 
if len(errors): v+=" Errors reported:\n"+"\n".join([" "+e for e in errors]) raise self.Error(v) if len(possibles)>1: raise self.Error("File matches more than one type!") format, ncols, lines, datas=possibles[0] fmt=format.get("dialect", None) if fmt is None: fmt="(delimited by \"%s\")" % (format["delimiter"],) self.write(self.stdout, "Detected Format %s Columns %d Rows %d\n" % (fmt, ncols, lines)) # Header row reader=self._csvin_wrapper(cmd[0], format) for header in reader: break # Check schema identity=lambda x:x for i in range(ncols): if len(datas[i])>1: raise self.Error("Column #%d \"%s\" has ambiguous data format - %s" % (i+1, header[i], ", ".join([d.__name__ for d in datas[i]]))) if datas[i]: datas[i]=datas[i][0] else: datas[i]=identity # Make the table sql="CREATE TABLE %s(%s)" % (self._fmt_sql_identifier(tablename), ", ".join([self._fmt_sql_identifier(h) for h in header])) c.execute(sql) # prep work for each row sql="INSERT INTO %s VALUES(%s)" % (self._fmt_sql_identifier(tablename), ",".join(["?"]*ncols)) for line in reader: vals=[] for i in range(ncols): l=line[i] if not l: vals.append(None) else: vals.append(datas[i](l)) c.execute(sql, vals) c.execute("COMMIT") self.write(self.stdout, "Auto-import into table \"%s\" complete\n" % (tablename,)) except: if final: self.db.cursor().execute(final) raise def _getdate(self, v): # Returns a tuple of 3 items y,m,d from string v m=re.match(r"^([0-9]+)[^0-9]([0-9]+)[^0-9]([0-9]+)$", v) if not m: raise ValueError y,m,d=int(m.group(1)), int(m.group(2)), int(m.group(3)) if d>1000: # swap order y,m,d=d,m,y if y<1000 or y>9999: raise ValueError return y,m,d def _getdatetime(self, v): # must be at least HH:MM m=re.match(r"^([0-9]+)[^0-9]([0-9]+)[^0-9]([0-9]+)[^0-9]+([0-9]+)[^0-9]([0-9]+)([^0-9]([0-9]+))?$", v) if not m: raise ValueError items=list(m.group(1,2,3,4,5,7)) for i in range(len(items)): if items[i] is None: items[i]=0 items=[int(i) for i in items] if items[2]>1000: items=[items[2], items[1], items[0]]+items[3:] if 
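The _getdate normalization - accept three numeric fields with any non-digit separators, swap if the year came last, require a 4 digit year - can be sketched standalone (get_date is a hypothetical helper mirroring that logic and emitting the YYYY-MM-DD form the docs describe; range checks on month/day are done elsewhere):

```python
# Sketch of the date parsing behind .autoimport's normalization to YYYY-MM-DD,
# following the same regex as _getdate.
import re

def get_date(v):
    m = re.match(r"^([0-9]+)[^0-9]([0-9]+)[^0-9]([0-9]+)$", v)
    if not m:
        raise ValueError
    y, mo, d = int(m.group(1)), int(m.group(2)), int(m.group(3))
    if d > 1000:              # year was last - swap the order
        y, mo, d = d, mo, y
    if y < 1000 or y > 9999:  # 4 digit years are required
        raise ValueError
    return "%d-%02d-%02d" % (y, mo, d)

print(get_date("15/3/2024"))
```

The 4 digit year requirement is what makes day/month/year versus year/month/day orderings distinguishable; US month-first ordering is resolved separately by trying both converters on the whole column.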
items[0]<1000 or items[0]>9999: raise ValueError return items def command_indices(self, cmd): """indices TABLE: Lists all indices on table TABLE """ if len(cmd)!=1: raise self.Error("indices takes one table name") self.push_output() self.header=False self.output=self.output_list try: self.process_sql("SELECT name FROM sqlite_master WHERE type='index' AND tbl_name LIKE ?1 " "UNION ALL SELECT name FROM sqlite_temp_master WHERE type='index' AND tbl_name LIKE " "?1 ORDER by name", cmd, internal=True) finally: self.pop_output() def command_load(self, cmd): """load FILE ?ENTRY?: Loads a SQLite extension library Note: Extension loading may not be enabled in the SQLite library version you are using. Extensions are an easy way to add new functions and functionality. For a useful extension look at the bottom of https://sqlite.org/contrib By default sqlite3_extension_init is called in the library but you can specify an alternate entry point. If you get an error about the extension not being found you may need to explicitly specify the directory. 
For example if it is in the current directory then use: .load ./extension.so """ if len(cmd)<1 or len(cmd)>2: raise self.Error("load takes one or two parameters") try: self.db.enableloadextension(True) except: raise self.Error("Extension loading is not supported") self.db.loadextension(*cmd) _output_modes=None def command_mode(self, cmd): """mode MODE ?TABLE?: Sets output mode to one of""" if len(cmd) in (1,2): w=cmd[0] if w=="tabs": w="list" m=getattr(self, "output_"+w, None) if w!="insert": if len(cmd)==2: raise self.Error("Output mode %s doesn't take parameters" % (cmd[0])) if m: self.output=m # set some defaults self.truncate=True if cmd[0]=="csv": self.separator="," elif cmd[0]=="tabs": self.separator="\t" else: pass #self.separator=self._output_stack[0]["separator"] if w=="insert": if len(cmd)==2: self._output_table=cmd[1] else: self._output_table="table" self._output_table=self._fmt_sql_identifier(self._output_table) return if not self._output_modes: self._cache_output_modes() raise self.Error("Expected a valid output mode: "+", ".join(self._output_modes)) # needed so command completion and help can use it def _cache_output_modes(self): modes=[m[len("output_"):] for m in dir(self) if m.startswith("output_")] modes.append("tabs") modes.sort() self._output_modes=modes detail=[] for m in modes: if m=='tabs': continue d=getattr(self, "output_"+m).__doc__ assert d, "output mode "+m+" needs doc" d=d.replace("\n", " ").strip() while " " in d: d=d.replace(" ", " ") detail.append(m+": "+d) self._output_modes_detail=detail def command_nullvalue(self, cmd): """nullvalue STRING: Print STRING in place of null values This affects textual output modes like column and list and sets how SQL null values are shown. The default is a zero length string. Insert mode and dumps are not affected by this setting. You can use double quotes to supply a zero length string. 
For example: .nullvalue "" # the default .nullvalue # rather obvious .nullvalue " \\t " # A tab surrounded by spaces """ if len(cmd)!=1: raise self.Error("nullvalue takes exactly one parameter") self.nullvalue=self.fixup_backslashes(cmd[0]) def command_output(self, cmd): """output FILENAME: Send output to FILENAME (or stdout) If the FILENAME is stdout then output is sent to standard output from when the shell was started. The file is opened using the current encoding (change with .encoding command). """ # Flush everything self.stdout.flush() self.stderr.flush() if hasattr(self.stdin, "flush"): try: self.stdin.flush() except IOError: # see issue 117 pass # we will also close stdout but only do so once we have a # replacement so that stdout is always valid if len(cmd)!=1: raise self.Error("You must specify a filename") try: fname=cmd[0] if fname=="stdout": old=None if self.stdout!=self._original_stdout: old=self.stdout self.stdout=self._original_stdout if old is not None: # done here in case close raises exception old.close() return newf=codecs.open(fname, "w", self.encoding[0], self.encoding[1]) old=None if self.stdout!=self._original_stdout: old=self.stdout self.stdout=newf if old is not None: old.close() finally: self._out_colour() def command_print(self, cmd): """print STRING: print the literal STRING If more than one argument is supplied then they are printed space separated. You can use backslash escapes such as \\n and \\t. """ self.write(self.stdout, " ".join([self.fixup_backslashes(i) for i in cmd])+"\n") def command_prompt(self, cmd): """prompt MAIN ?CONTINUE?: Changes the prompts for first line and continuation lines The default is to print 'sqlite> ' for the main prompt where you can enter a dot command or a SQL statement. If the SQL statement is complete (eg not ; terminated) then you are prompted for more using the continuation prompt which defaults to ' ..> '. 
Example: .prompt "Yes, Master> " "More, Master> " You can use backslash escapes such as \\n and \\t. """ if len(cmd)<1 or len(cmd)>2: raise self.Error("prompt takes one or two arguments") self.prompt=self.fixup_backslashes(cmd[0]) if len(cmd)==2: self.moreprompt=self.fixup_backslashes(cmd[1]) def command_read(self, cmd): """read FILENAME: Processes SQL and commands in FILENAME (or Python if FILENAME ends with .py) Treats the specified file as input (a mixture or SQL and/or dot commands). If the filename ends in .py then it is treated as Python code instead. For Python code the symbol 'shell' refers to the instance of the shell and 'apsw' is the apsw module. """ if len(cmd)!=1: raise self.Error("read takes a single filename") if cmd[0].lower().endswith(".py"): g={} g.update({'apsw': apsw, 'shell': self}) if sys.version_info<(3,0): execfile(cmd[0], g, g) else: # compile step is needed to associate name with code exec(compile(open(cmd[0]).read(), cmd[0], 'exec'), g, g) else: f=codecs.open(cmd[0], "rU", self.encoding[0]) try: try: self.push_input() self.stdin=f self.interactive=False self.input_line_number=0 while True: line=self.getcompleteline() if line is None: break self.process_complete_line(line) except: eval=sys.exc_info()[1] if not isinstance(eval, SystemExit): self._append_input_description() raise finally: self.pop_input() f.close() def command_restore(self, cmd): """restore ?DB? FILE: Restore database from FILE into DB (default "main") Copies the contents of FILE to the current database (default "main"). The backup is done at the page level - SQLite copies the pages as is. There is no round trip through SQL code. """ dbname="main" if len(cmd)==1: fname=cmd[0] elif len(cmd)==2: dbname=cmd[0] fname=cmd[1] else: raise self.Error("Restore takes one or two parameters") input=apsw.Connection(fname) b=self.db.backup(dbname, input, "main") try: while not b.done: b.step() finally: b.finish() input.close() def command_schema(self, cmd): """schema ?TABLE? 
[TABLE...]: Shows SQL for table If you give one or more tables then their schema is listed (including indices). If you don't specify any then all schemas are listed. TABLE is a like pattern so you can % for wildcards. """ self.push_output() self.output=self.output_list self.header=False try: if len(cmd)==0: cmd=['%'] for n in cmd: self.process_sql("SELECT sql||';' FROM " "(SELECT sql sql, type type, tbl_name tbl_name, name name " "FROM sqlite_master UNION ALL " "SELECT sql, type, tbl_name, name FROM sqlite_temp_master) " "WHERE tbl_name LIKE ?1 AND type!='meta' AND sql NOTNULL AND name NOT LIKE 'sqlite_%' " "ORDER BY substr(type,2,1), name", (n,), internal=True) finally: self.pop_output() def command_separator(self, cmd): """separator STRING: Change separator for output mode and .import You can use quotes and backslashes. For example to set the separator to space tab space you can use: .separator " \\t " The setting is automatically changed when you switch to csv or tabs output mode. You should also set it before doing an import (ie , for CSV and \\t for TSV). """ if len(cmd)!=1: raise self.Error("separator takes exactly one parameter") self.separator=self.fixup_backslashes(cmd[0]) _shows=("echo", "explain", "headers", "mode", "nullvalue", "output", "separator", "width", "exceptions", "encoding") def command_show(self, cmd): """show: Show the current values for various settings.""" if len(cmd)>1: raise self.Error("show takes at most one parameter") if len(cmd): what=cmd[0] if what not in self._shows: raise self.Error("Unknown show: '%s'" % (what,)) else: what=None outs=[] for i in self._shows: k=i if what and i!=what: continue # boolean settings if i in ("echo", "headers", "exceptions"): if i=="headers": i="header" v="off" if getattr(self, i): v="on" elif i=="explain": # we cheat by looking at truncate setting! 
v="on" if self.truncate: v="off" elif i in ("nullvalue", "separator"): v=self._fmt_c_string(getattr(self, i)) elif i=="mode": if not self._output_modes: self._cache_output_modes() for v in self._output_modes: if self.output==getattr(self, "output_"+v): break else: assert False, "Bug: didn't find output mode" elif i=="output": if self.stdout is self._original_stdout: v="stdout" else: v=getattr(self.stdout, "name", "") elif i=="width": v=" ".join(["%d"%(i,) for i in self.widths]) elif i=="encoding": v=self.encoding[0] if self.encoding[1]: v+=" (Errors "+self.encoding[1]+")" else: assert False, "Bug: unknown show handling" outs.append( (k,v) ) # find width of k column l=0 for k,v in outs: if len(k)>l: l=len(k) for k,v in outs: self.write(self.stderr, "%*.*s: %s\n" % (l,l, k, v)) def command_tables(self, cmd): """tables ?PATTERN?: Lists names of tables matching LIKE pattern This also returns views. """ self.push_output() self.output=self.output_list self.header=False try: if len(cmd)==0: cmd=['%'] # The SQLite shell code filters out sqlite_ prefixes if # you specified an argument else leaves them in. It also # has a hand coded output mode that does space separation # plus wrapping at 80 columns. for n in cmd: self.process_sql("SELECT name FROM sqlite_master " "WHERE type IN ('table', 'view') AND name NOT LIKE 'sqlite_%' " "AND name like ?1 " "UNION ALL " "SELECT name FROM sqlite_temp_master " "WHERE type IN ('table', 'view') AND name NOT LIKE 'sqlite_%' " "ORDER BY 1", (n,), internal=True) finally: self.pop_output() def command_timeout(self, cmd): """timeout MS: Try opening locked tables for MS milliseconds If a database is locked by another process SQLite will keep retrying. This sets how many thousandths of a second it will keep trying for. If you supply zero or a negative number then all busy handlers are disabled. 
""" if len(cmd)!=1: raise self.Error("timeout takes a number") try: t=int(cmd[0]) except: raise self.Error("%s is not a number" % (cmd[0],)) self.db.setbusytimeout(t) def command_timer(self, cmd): """timer ON|OFF: Control printing of time and resource usage after each query The values displayed are in seconds when shown as floating point or an absolute count. Only items that have changed since starting the query are shown. On non-Windows platforms considerably more information can be shown. """ if self._boolean_command("timer", cmd): try: self.get_resource_usage() except: raise self.Error("Timing not supported by this Python version/platform") self.timer=True else: self.timer=False def command_width(self, cmd): """width NUM NUM ...: Set the column widths for "column" mode In "column" output mode, each column is a fixed width with values truncated to fit. Specify new widths using this command. Use a negative number to right justify and zero for default column width. """ if len(cmd)==0: raise self.Error("You need to specify some widths!") w=[] for i in cmd: try: w.append(int(i)) except: raise self.Error("'%s' is not a valid number" % (i,)) self.widths=w def _terminal_width(self): """Works out the terminal width which is used for word wrapping some output (eg .help)""" try: if sys.platform=="win32": import ctypes, struct h=ctypes.windll.kernel32.GetStdHandle(-12) # -12 is stderr buf=ctypes.create_string_buffer(22) if ctypes.windll.kernel32.GetConsoleScreenBufferInfo(h, buf): _,_,_,_,_,left,top,right,bottom,_,_=struct.unpack("hhhhHhhhhhh", buf.raw) return right-left raise Exception() else: # posix import struct, fcntl, termios s=struct.pack('HHHH', 0,0,0,0) x=fcntl.ioctl(2, termios.TIOCGWINSZ, s) return struct.unpack('HHHH', x)[1] except: try: v=int(os.getenv("COLUMNS")) if v<10: return 80 return v except: return 80 def push_output(self): """Saves the current output settings onto a stack. 
        See :meth:`pop_output` for more details as to why you would
        use this."""
        o={}
        for k in "separator", "header", "nullvalue", "output", "widths", "truncate":
            o[k]=getattr(self, k)
        self._output_stack.append(o)

    def pop_output(self):
        """Restores most recently pushed output.

        There are many output parameters such as nullvalue, mode
        (list/tcl/html/insert etc), column widths, header etc.  If you
        temporarily need to change some settings then
        :meth:`push_output`, change the settings and then pop the old
        ones back.

        A simple example is implementing a command like .dump.  Push
        the current output, change the mode to insert so we get SQL
        inserts printed and then pop to go back to what was there
        before.
        """
        # first item should always be present
        assert len(self._output_stack)
        if len(self._output_stack)==1:
            o=self._output_stack[0]
        else:
            o=self._output_stack.pop()
        for k,v in o.items():
            setattr(self, k, v)

    def _append_input_description(self):
        """When displaying an error in :meth:`handle_exception` we
        want to give context such as when the commands being executed
        came from a .read command (which in turn could execute another
        .read).
        """
        if self.interactive:
            return
        res=[]
        res.append("Line %d" % (self.input_line_number,))
        res.append(": "+getattr(self.stdin, "name", ""))
        self._input_descriptions.append(" ".join(res))

    def fixup_backslashes(self, s):
        """Implements the various backslash sequences in s such as
        turning backslash t into a tab.

        This function is needed because shlex does not do it for us.
        """
        if "\\" not in s:
            return s
        # See the resolve_backslashes function in SQLite shell source
        res=[]
        i=0
        while i<len(s):
            c=s[i]
            i+=1
            if c!="\\":
                # ordinary character - passed through unchanged
                res.append(c)
                continue
            if i>=len(s):
                raise self.Error("Backslash with nothing following")
            c=s[i]
            res.append({
                "\\": "\\",
                "r": "\r",
                "n": "\n",
                "t": "\t"}.get(c, None))
            i+=1 # advance again
            if res[-1] is None:
                raise self.Error("Unknown backslash sequence \\"+c)
        return "".join(res)

    if sys.version_info<(3,0):
        def write(self, dest, text):
            """Writes text to dest.
dest will typically be one of self.stdout or self.stderr.""" # ensure text is unicode to catch codeset issues here if type(text)!=unicode: text=unicode(text) try: dest.write(text) except UnicodeEncodeError: ev=sys.exc_info()[1] # See issue108 and try to work around it if ev.args[0]=="ascii" and dest.encoding and ev.args[0]!=dest.encoding and hasattr(dest, "fileno") and \ isinstance(dest.fileno(), int) and dest.fileno()>=0: args=[dest.encoding,] if dest.errors: args.append(dest.errors) dest.write(text.encode(*args)) else: raise _raw_input=raw_input else: def write(self, dest, text): "Writes text to dest. dest will typically be one of self.stdout or self.stderr." dest.write(text) _raw_input=input def getline(self, prompt=""): """Returns a single line of input (may be incomplete SQL) from self.stdin. If EOF is reached then return None. Do not include trailing newline in return. """ self.stdout.flush() self.stderr.flush() try: if self.interactive: if self.stdin is sys.stdin: c=self.colour.prompt, self.colour.prompt_ if self._using_readline and sys.platform!="win32": # these are needed so that readline knows they are non-printing characters c="\x01"+c[0]+"\x02", "\x01"+c[1]+"\x02", line=self._raw_input(c[0]+prompt+c[1])+"\n" # raw_input excludes newline else: self.write(self.stdout, prompt) line=self.stdin.readline() # includes newline unless last line of file doesn't have one else: line=self.stdin.readline() # includes newline unless last line of file doesn't have one self.input_line_number+=1 if sys.version_info<(3,0): if type(line)!=unicode: enc=getattr(self.stdin, "encoding", self.encoding[0]) if not enc: enc=self.encoding[0] line=line.decode(enc) except EOFError: return None if len(line)==0: # always a \n on the end normally so this is EOF return None if line[-1]=="\n": line=line[:-1] return line def getcompleteline(self): """Returns a complete input. For dot commands it will be one line. 
For SQL statements it will be as many as is necessary to have a :meth:`~apsw.complete` statement (ie semicolon terminated). Returns None on end of file.""" try: self._completion_first=True command=self.getline(self.prompt) if command is None: return None if len(command.strip())==0: return "" if command[0]=="?": command=".help "+command[1:] # incomplete SQL? while command[0]!="." and not apsw.complete(command): self._completion_first=False line=self.getline(self.moreprompt) if line is None: # unexpected eof raise self.Error("Incomplete SQL (line %d of %s): %s\n" % (self.input_line_number, getattr(self.stdin, "name", ""), command)) if line in ("go", "/"): break command=command+"\n"+line return command except KeyboardInterrupt: self.handle_interrupt() return "" def handle_interrupt(self): """Deal with keyboard interrupt (typically Control-C). It will :meth:`~Connection.interrupt` the database and print"^C" if interactive.""" self.db.interrupt() if not self.bail and self.interactive: self.write(self.stderr, "^C\n") return raise def process_complete_line(self, command): """Given some text will call the appropriate method to process it (eg :meth:`process_sql` or :meth:`process_command`)""" try: if len(command.strip())==0: return if command[0]==".": self.process_command(command) else: self.process_sql(command) except KeyboardInterrupt: self.handle_interrupt() def push_input(self): """Saves the current input paramaters to a stack. See :meth:`pop_input`.""" d={} for i in "interactive", "stdin", "input_line_number": d[i]=getattr(self, i) self._input_stack.append(d) def pop_input(self): """Restore most recently pushed input parameters (interactive, self.stdin, linenumber etc). Use this if implementing a command like read. Push the current input, read the file and then pop the input to go back to before. 
""" assert(len(self._input_stack))>1 d=self._input_stack.pop() for k,v in d.items(): setattr(self, k, v) def complete(self, token, state): """Return a possible completion for readline This function is called with state starting at zero to get the first completion, then one/two/three etc until you return None. The best implementation is to generate the list when state==0, save it, and provide members on each increase. The default implementation extracts the current full input from readline and then calls :meth:`complete_command` or :meth:`complete_sql` as appropriate saving the results for subsequent calls. """ if state==0: import readline # the whole line line=readline.get_line_buffer() # begining and end(+1) of the token in line beg=readline.get_begidx() end=readline.get_endidx() # Are we matching a command? try: if self._completion_first and line.startswith("."): self.completions=self.complete_command(line, token, beg, end) else: self.completions=self.complete_sql(line, token, beg, end) except: # Readline swallows any exceptions we raise. 
We # shouldn't be raising any so this is to catch that import traceback; traceback.print_exc() raise if state>len(self.completions): return None return self.completions[state] # Taken from https://sqlite.org/lang_keywords.html _sqlite_keywords="""ABORT ACTION ADD AFTER ALL ALTER ANALYZE AND AS ASC ATTACH AUTOINCREMENT BEFORE BEGIN BETWEEN BY CASCADE CASE CAST CHECK COLLATE COLUMN COMMIT CONFLICT CONSTRAINT CREATE CROSS CURRENT_DATE CURRENT_TIME CURRENT_TIMESTAMP DATABASE DEFAULT DEFERRABLE DEFERRED DELETE DESC DETACH DISTINCT DROP EACH ELSE END ESCAPE EXCEPT EXCLUSIVE EXISTS EXPLAIN FAIL FOR FOREIGN FROM FULL GLOB GROUP HAVING IF IGNORE IMMEDIATE IN INDEX INDEXED INITIALLY INNER INSERT INSTEAD INTERSECT INTO IS ISNULL JOIN KEY LEFT LIKE LIMIT MATCH NATURAL NO NOT NOTNULL NULL OF OFFSET ON OR ORDER OUTER PLAN PRAGMA PRIMARY QUERY RAISE RECURSIVE REFERENCES REGEXP REINDEX RELEASE RENAME REPLACE RESTRICT RIGHT ROLLBACK ROW SAVEPOINT SELECT SET TABLE TEMP TEMPORARY THEN TO TRANSACTION TRIGGER UNION UNIQUE UPDATE USING VACUUM VALUES VIEW VIRTUAL WHEN WHERE WITH WITHOUT""".split() # reserved words need to be quoted. 
Only a subset of the above are reserved # but what the heck _sqlite_reserved=_sqlite_keywords # add a space after each of them except functions which get parentheses _sqlite_keywords=[x+(" ", "(")[x in ("VALUES", "CAST")] for x in _sqlite_keywords] _sqlite_special_names="""_ROWID_ OID ROWID SQLITE_MASTER SQLITE_SEQUENCE""".split() _sqlite_functions="""abs( changes() char( coalesce( glob( ifnull( hex( instr( last_insert_rowid() length( like( likelihood( load_extension( lower( ltrim( max( min( nullif( printf( quote( random() randomblob( replace( round( rtrim( soundex( sqlite_compileoption_get( sqlite_compileoption_used( sqlite_source_id() sqlite_version() substr( total_changes() trim( typeof( unlikely( unicode( upper( zeroblob( date( time( datetime( julianday( strftime( avg( count( group_concat( sum( total(""".split() _pragmas_bool=("yes", "true", "on", "no", "false", "off") _pragmas={"application_id": None, "auto_vacuum=": ("NONE", "FULL", "INCREMENTAL"), "automatic_index=": _pragmas_bool, "busy_timeout=": None, "cache_size=": None, "case_sensitive_like=": _pragmas_bool, "cache_spill=": _pragmas_bool, "checkpoint_fullfsync=": _pragmas_bool, "collation_list": None, "compile_options": None, "database_list": None, "default_cache_size=": None, "defer_foreign_keys=": _pragmas_bool, "encoding=": None, # ('"UTF-8"', '"UTF-16"', '"UTF-16le"', '"UTF16-16be"'), # too hard to get " to be part of token just in this special case "foreign_key_check": None, "foreign_key_list(": None, "foreign_keys": _pragmas_bool, "freelist_count": None, "fullfsync=": _pragmas_bool, "ignore_check_constraints": _pragmas_bool, "incremental_vacuum(": None, "index_info(": None, "index_list(": None, "integrity_check": None, "journal_mode=": ("DELETE", "TRUNCATE", "PERSIST", "MEMORY", "OFF", "WAL"), "journal_size_limit=": None, "legacy_file_format=": _pragmas_bool, "locking_mode=": ("NORMAL", "EXCLUSIVE"), "max_page_count=": None, "mmap_size=": None, "page_count;": None, "page_size=": None, 
"query_only=": _pragmas_bool, "quick_check": None, "read_uncommitted=": _pragmas_bool, "recursive_triggers=": _pragmas_bool, "reverse_unordered_selects=": _pragmas_bool, "schema_version": None, "secure_delete=": _pragmas_bool, "shrink_memory": None, "soft_heap_limit=": None, "synchronous=": ("OFF", "NORMAL", "FULL"), "table_info(": None, "temp_store=": ("DEFAULT", "FILE", "MEMORY"), "wal_autocheckpoint=": None, "wal_checkpoint": None, "writable_schema": _pragmas_bool, } def _get_prev_tokens(self, line, end): "Returns the tokens prior to pos end in the line" return re.findall(r'"?\w+"?', line[:end]) def complete_sql(self, line, token, beg, end): """Provide some completions for SQL :param line: The current complete input line :param token: The word readline is looking for matches :param beg: Integer offset of token in line :param end: Integer end of token in line :return: A list of completions, or an empty list if none """ if self._completion_cache is None: cur=self.db.cursor() collations=[row[1] for row in cur.execute("pragma collation_list")] databases=[row[1] for row in cur.execute("pragma database_list")] other=[] for db in databases: if db=="temp": master="sqlite_temp_master" else: master="[%s].sqlite_master" % (db,) for row in cur.execute("select * from "+master).fetchall(): for col in (1,2): if row[col] not in other and not row[col].startswith("sqlite_"): other.append(row[col]) if row[0]=="table": try: for table in cur.execute("pragma [%s].table_info([%s])" % (db, row[1],)).fetchall(): if table[1] not in other: other.append(table[1]) for item in table[2].split(): if item not in other: other.append(item) except apsw.SQLError: # See https://github.com/rogerbinns/apsw/issues/86 pass self._completion_cache=[self._sqlite_keywords, self._sqlite_functions, self._sqlite_special_names, collations, databases, other] for i in range(len(self._completion_cache)): self._completion_cache[i].sort() # be somewhat sensible about pragmas if "pragma " in line.lower(): 
            t=self._get_prev_tokens(line.lower(), end)
            # pragma foo = bar
            if len(t)>2 and t[-3]=="pragma":
                # t[-2] should be a valid one
                for p in self._pragmas:
                    if p.replace("=","")==t[-2]:
                        vals=self._pragmas[p]
                        if not vals:
                            return []
                        return [x+";" for x in vals if x.startswith(token)]
            # at equals?
            if len(t)>1 and t[-2]=="pragma" and line[:end].replace(" ","").endswith("="):
                for p in self._pragmas:
                    if p.replace("=","")==t[-1]:
                        vals=self._pragmas[p]
                        if not vals:
                            return []
                        return vals
            # pragma foo
            if len(t)>1 and t[-2]=="pragma":
                res=[x for x in self._pragmas.keys() if x.startswith(token)]
                res.sort()
                return res
            # pragma
            if len(t) and t[-1]=="pragma":
                res=list(self._pragmas.keys())
                res.sort()
                return res

        # This is currently not context sensitive (eg it doesn't look
        # to see if last token was 'FROM' and hence next should only
        # be table names.  That is a SMOP like pragmas above
        res=[]
        ut=token.upper()
        for corpus in self._completion_cache:
            for word in corpus:
                if word.upper().startswith(ut):
                    # potential match - now match case
                    if word.startswith(token): # exact
                        if word not in res:
                            res.append(word)
                    elif word.lower().startswith(token): # lower
                        if word.lower() not in res:
                            res.append(word.lower())
                    elif word.upper().startswith(token): # upper
                        if word.upper() not in res:
                            res.append(word.upper())
                    else:
                        # match letter by letter otherwise readline
                        # mangles what was typed in
                        w=token+word[len(token):]
                        if w not in res:
                            res.append(w)
        return res

    _builtin_commands=None

    def complete_command(self, line, token, beg, end):
        """Provide some completions for dot commands

        :param line: The current complete input line
        :param token: The word readline is looking for matches
        :param beg: Integer offset of token in line
        :param end: Integer end of token in line
        :return: A list of completions, or an empty list if none
        """
        if not self._builtin_commands:
            self._builtin_commands=["."+x[len("command_"):] for x in dir(self)
                                    if x.startswith("command_") and x!="command_headers"]
        if beg==0:
            # some commands don't need a space because they take no
            # params but who cares?
            return [x+" " for x in self._builtin_commands if x.startswith(token)]
        return None

    def get_resource_usage(self):
        """Return a dict of various numbers (ints or floats).  The
        .timer command shows the difference between before and after
        results of what this returns by calling :meth:`display_timing`"""
        if sys.platform=="win32":
            import ctypes, time, platform
            ctypes.windll.kernel32.GetProcessTimes.argtypes=[
                platform.architecture()[0]=='64bit' and ctypes.c_int64 or ctypes.c_int32,
                ctypes.c_void_p, ctypes.c_void_p, ctypes.c_void_p, ctypes.c_void_p]
            # All 4 out params have to be present.  FILETIME is really
            # just a 64 bit quantity in 100 nanosecond granularity
            dummy=ctypes.c_ulonglong()
            utime=ctypes.c_ulonglong()
            stime=ctypes.c_ulonglong()
            rc=ctypes.windll.kernel32.GetProcessTimes(
                ctypes.windll.kernel32.GetCurrentProcess(),
                ctypes.byref(dummy),  # creation time
                ctypes.byref(dummy),  # exit time
                ctypes.byref(stime),
                ctypes.byref(utime))
            if rc:
                return {'Wall clock': time.time(),
                        'User time': float(utime.value)/10000000,
                        'System time': float(stime.value)/10000000}
            return {}
        else:
            import resource, time
            r=resource.getrusage(resource.RUSAGE_SELF)
            res={'Wall clock': time.time()}
            for i,desc in (("utime", "User time"),
                           ("stime", "System time"),
                           ("maxrss", "Max rss"),
                           ("idrss", "Memory"),
                           ("isrss", "Stack"),
                           ("ixrss", "Shared Memory"),
                           ("minflt", "PF (no I/O)"),
                           ("majflt", "PF (I/O)"),
                           ("inblock", "Blocks in"),
                           ("oublock", "Blocks out"),
                           ("nsignals", "Signals"),
                           ("nvcsw", "Voluntary context switches"),
                           ("nivcsw", "Involuntary context switches"),
                           ("msgrcv", "Messages received"),
                           ("msgsnd", "Messages sent"),
                           ("nswap", "Swaps"),
                           ):
                f="ru_"+i
                if hasattr(r, f):
                    res[desc]=getattr(r, f)
            return res

    def display_timing(self, b4, after):
        """Writes the difference between b4 and after to self.stderr.
        The data is dictionaries returned from
        :meth:`get_resource_usage`."""
        v=list(b4.keys())
        for i in after:
            if i not in v:
                v.append(i)
        v.sort()
        for k in v:
            if k in b4 and k in after:
                one=b4[k]
                two=after[k]
                val=two-one
                if val:
                    if type(val)==float:
                        self.write(self.stderr, "+ %s: %.4f\n" % (k, val))
                    else:
                        self.write(self.stderr, "+ %s: %d\n" % (k, val))

    ### Colour support

    def _out_colour(self):
        # Sets up color for output.  Input being interactive doesn't
        # matter.  This method needs to be called on all changes to
        # output.
        if getattr(self.stdout, "isatty", False) and self.stdout.isatty():
            self.colour=self._colours[self.colour_scheme]
        else:
            self.colour=self._colours["off"]

    # This class returns an empty string for all undefined attributes
    # so that it doesn't matter if a colour scheme leaves something
    # out.
    class _colourscheme:
        def __init__(self, **kwargs):
            for k,v in kwargs.items():
                setattr(self, k, v)

        def __nonzero__(self):
            return True

        def __str__(self):
            return "_colourscheme("+str(self.__dict__)+")"

        def __getattr__(self, k):
            return ""

        def colour_value(self, val, formatted):
            c=self.colour
            if val is None:
                return self.vnull+formatted+self.vnull_
            if isinstance(val, Shell._basestring):
                return self.vstring+formatted+self.vstring_
            if isinstance(val, Shell._binary_type):
                return self.vblob+formatted+self.vblob_
            # must be a number - we don't distinguish between float/int
            return self.vnumber+formatted+self.vnumber_

    # The colour definitions - the convention is the name to turn
    # something on and the name with an underscore suffix to turn it
    # off
    d=_colourscheme(**dict([(v, "\x1b["+str(n)+"m") for n,v in {
        0: "reset", 1: "bold", 4: "underline", 22: "bold_", 24: "underline_",
        7: "inverse", 27: "inverse_",
        30: "fg_black", 31: "fg_red", 32: "fg_green", 33: "fg_yellow",
        34: "fg_blue", 35: "fg_magenta", 36: "fg_cyan", 37: "fg_white", 39: "fg_",
        40: "bg_black", 41: "bg_red", 42: "bg_green", 43: "bg_yellow",
        44: "bg_blue", 45: "bg_magenta", 46: "bg_cyan", 47: "bg_white",
        49: "bg_"}.items()]))

    _colours={"off": _colourscheme(colour_value=lambda x,y: y)}

    _colours["default"]=_colourscheme(prompt=d.bold, prompt_=d.bold_,
                                      error=d.fg_red+d.bold, error_=d.bold_+d.fg_,
                                      intro=d.fg_blue+d.bold, intro_=d.bold_+d.fg_,
                                      summary=d.fg_blue+d.bold, summary_=d.bold_+d.fg_,
                                      header=sys.platform=="win32" and d.inverse or d.underline,
                                      header_=sys.platform=="win32" and d.inverse_ or d.underline_,
                                      vnull=d.fg_red, vnull_=d.fg_,
                                      vstring=d.fg_yellow, vstring_=d.fg_,
                                      vblob=d.fg_blue, vblob_=d.fg_,
                                      vnumber=d.fg_magenta, vnumber_=d.fg_)
    if sys.platform=="win32":
        if not _win_colour:
            for k in _colours:
                _colours[k]=_colours["off"]
    # unpollute namespace
    del d
    del _colourscheme
    try:
        del n
        del x
        del v
    except:
        pass


def main():
    # Docstring must start on second line so dedenting works correctly
    """
    Call this to run the interactive shell.  It automatically passes
    in sys.argv[1:] and exits Python when done.
    """
    try:
        s=Shell()
        _,_,cmds=s.process_args(sys.argv[1:])
        if len(cmds)==0:
            s.cmdloop()
    except:
        v=sys.exc_info()[1]
        if getattr(v, "_handle_exception_saw_this", False):
            pass
        else:
            # Where did this exception come from?
            import traceback; traceback.print_exc()
        sys.exit(1)


if __name__=='__main__':
    main()

apsw-3.8.11.1-r1/tools/apswtrace.py0000755000000000000000000003226612257607024015457 0ustar rootroot
#!/usr/bin/env python
#
# See the accompanying LICENSE file.
#
# This module lets you automatically trace SQL operations in a program
# using APSW without having to modify the program in any way.
import time import sys import weakref class APSWTracer(object): def __init__(self, options): if sys.version_info<(3,): self.u=eval("u''") import thread self.threadid=thread.get_ident self.stringtypes=(unicode,str) self.numtypes=(int, long, float) self.binarytypes=(buffer,) else: self.u="" import _thread self.threadid=_thread.get_ident self.stringtypes=(str,) self.numtypes=(int, float) self.binarytypes=(bytes,) self.options=options if options.output in ("-", "stdout"): self._writer=sys.stdout.write elif options.output=="stderr": self._writer=sys.stderr.write else: self._writer=open(options.output, "wt").write try: import apsw apsw.connection_hooks.append(self.connection_hook) except: sys.stderr.write(self.u+"Unable to import apsw\n") raise self.mapping_open_flags=apsw.mapping_open_flags self.zeroblob=apsw.zeroblob self.apswConnection=apsw.Connection self.newcursor={} self.threadsused={} # really want a set self.queries={} self.timings={} self.rowsreturned=0 self.numcursors=0 self.numconnections=0 self.timestart=time.time() def writerpy2(self, s): # s should be a unicode string self._writer(s.encode("utf-8")+"\n") def writerpy3(self, s): self._writer(s+"\n") if sys.version_info<(3,): writer=writerpy2 else: writer=writerpy3 def format(self, obj): if isinstance(obj, dict): return self.formatdict(obj) if isinstance(obj, tuple): return self.formatseq(obj, '()') if isinstance(obj, list): return self.formatseq(obj, '[]') if isinstance(obj, self.stringtypes): return self.formatstring(obj) if obj is True: return "True" if obj is False: return "False" if obj is None: return "None" if isinstance(obj, self.numtypes): return repr(obj) if isinstance(obj, self.binarytypes): return self.formatbinary(obj) if isinstance(obj, self.zeroblob): return "zeroblob(%d)" % (obj.length(),) return repr(obj) def formatstring(self, obj, quote='"', checkmaxlen=True): obj=obj.replace("\n", "\\n").replace("\r", "\\r") if checkmaxlen and len(obj)>self.options.length: 
obj=obj[:self.options.length]+'..' return self.u+quote+obj+quote def formatdict(self, obj): items=list(obj.items()) items.sort() op=[] for k,v in items: op.append(self.format(k)+": "+self.format(v)) return self.u+"{"+", ".join(op)+"}" def formatseq(self, obj, paren): return self.u+paren[0]+", ".join([self.format(v) for v in obj])+paren[1] def formatbinarypy2(self, obj): if len(obj)=(3,): xrange=range unichr=chr # Sigh try: maxuni=0x10ffff unichr(maxuni) except ValueError: maxuni=0xffff def doit(): random.seed(0) options.tests=[t.strip() for t in options.tests.split(",")] write(" Python %s %s\n" % (sys.executable, str(sys.version_info))) write(" Scale %d\n" % (options.scale,)) write(" Database %s\n" % (options.database,)) write(" Tests %s\n" % (", ".join(options.tests),)) write(" Iterations %d\n" % (options.iterations,)) write("Statement Cache %d\n" % (options.scsize,)) write("\n") if options.apsw: import apsw write(" Testing with APSW file "+apsw.__file__+"\n") write(" APSW version "+apsw.apswversion()+"\n") write(" SQLite lib version "+apsw.sqlitelibversion()+"\n") write(" SQLite headers version "+str(apsw.SQLITE_VERSION_NUMBER)+"\n\n") def apsw_setup(dbfile): con=apsw.Connection(dbfile, statementcachesize=options.scsize) con.createscalarfunction("number_name", number_name, 1) return con if options.pysqlite: try: from pysqlite2 import dbapi2 as pysqlite except ImportError: import sqlite3 as pysqlite write("Testing with pysqlite file "+pysqlite.__file__+"\n") write(" pysqlite version "+pysqlite.version+"\n") write(" SQLite version "+pysqlite.sqlite_version+"\n\n") def pysqlite_setup(dbfile): con=pysqlite.connect(dbfile, isolation_level=None, cached_statements=options.scsize) con.create_function("number_name", 1, number_name) return con ones=("zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen", "seventeen", "eighteen", "nineteen") tens=("", "ten", "twenty", "thirty", 
"forty", "fifty", "sixty", "seventy", "eighty", "ninety") others=("thousand", "hundred", "zero") def _number_name(n): if n>=1000: txt="%s %s" % (_number_name(int(n/1000)), others[0]) n=n%1000 else: txt="" if n>=100: txt=txt+" "+ones[int(n/100)]+" "+others[1] n=n%100 if n>=20: txt=txt+" "+tens[int(n/10)] n=n%10 if n>0: txt=txt+" "+ones[n] txt=txt.strip() if txt=="": txt=others[2] return txt def unicodify(text): if options.unicode and len(text): newt=[] c=options.unicode/100.0 for t in text: if random.random()>c: newt.append(t) continue while True: t=random.randint(0xa1, maxuni) # we don't want the surrogate range or apostrophe if t<0xd800 or t>0xdfff: break newt.append(unichr(t)) text="".join(newt) return text if options.unicode: ones=tuple([unicodify(s) for s in ones]) tens=tuple([unicodify(s) for s in tens]) others=tuple([unicodify(s) for s in others]) def number_name(n): text=_number_name(n) if options.size: text=text*int(random.randint(0, options.size)/len(text)) return text def getlines(scale=50, bindings=False): random.seed(0) # RogerB added two pragmas so that only memory is used. 
This means that the # vagaries of disk access times don't alter the results # database schema for i in """PRAGMA page_size=1024; PRAGMA cache_size=8192; PRAGMA locking_mode=EXCLUSIVE; PRAGMA journal_mode = OFF; PRAGMA temp_store = MEMORY; CREATE TABLE t1(a INTEGER, b INTEGER, c TEXT); CREATE TABLE t2(a INTEGER, b INTEGER, c TEXT); CREATE INDEX i2a ON t2(a); CREATE INDEX i2b ON t2(b); SELECT name FROM sqlite_master ORDER BY 1""".split(";"): yield (i,) # 50,000 inserts on an unindexed table yield ("BEGIN",) for i in xrange(1,scale*10000+1): r=random.randint(0,500000) if bindings: yield ("INSERT INTO t1 VALUES(:1, :2, number_name(:2))", (i, r)) else: yield ("INSERT INTO t1 VALUES(%d, %d, '%s')" % (i, r, number_name(r)),) yield ("COMMIT",) # 50,000 inserts on an indexed table t1c_list=[] yield ("BEGIN",) for i in xrange(1,scale*10000+1): r=random.randint(0,500000) x=number_name(r) t1c_list.append(x) if bindings: yield ("INSERT INTO t2 VALUES(:1, :2, number_name(:2))", (i, r)) else: yield ("INSERT INTO t2 VALUES(%d, %d, '%s')" % (i, r, x),) yield ("COMMIT",) # 50 SELECTs on an integer comparison. There is no index so # a full table scan is required. for i in xrange(scale): yield ("SELECT count(*), avg(b) FROM t1 WHERE b>=%d AND b<%d" % (i*100, (i+10)*100),) # 50 SELECTs on an LIKE comparison. There is no index so a full # table scan is required. for i in xrange(scale): yield ("SELECT count(*), avg(b) FROM t1 WHERE c LIKE '%%%s%%'" % (number_name(i),),) # Create indices yield ("BEGIN",) for i in """CREATE INDEX i1a ON t1(a); CREATE INDEX i1b ON t1(b); CREATE INDEX i1c ON t1(c);""".split(";"): yield (i,) yield ("COMMIT",) # 5000 SELECTs on an integer comparison where the integer is # indexed. for i in xrange(scale*100): yield ("SELECT count(*), avg(b) FROM t1 WHERE b>=%d AND b<%d" % (i*100, (i+10)*100),) # 100000 random SELECTs against rowid. 
for i in xrange(1,scale*2000+1): yield ("SELECT c FROM t1 WHERE rowid=%d" % (1+random.randint(0,50000),),) # 100000 random SELECTs against a unique indexed column. for i in xrange(1,scale*2000+1): yield ("SELECT c FROM t1 WHERE a=%d" % (1+random.randint(0,50000),),) # 50000 random SELECTs against an indexed column text column for i in xrange(scale*1000): if bindings: yield ("SELECT c FROM t1 WHERE c=?", (random.choice(t1c_list),),) else: yield ("SELECT c FROM t1 WHERE c='%s'" % (random.choice(t1c_list),),) # Vacuum if options.database!=":memory:": # opens a disk file yield ("VACUUM",) # 5000 updates of ranges where the field being compared is indexed. yield ("BEGIN",) for i in xrange(scale*100): yield ("UPDATE t1 SET b=b*2 WHERE a>=%d AND a<%d" % (i*2, (i+1)*2),) yield ("COMMIT",) # 50000 single-row updates. An index is used to find the row quickly. yield ("BEGIN",) for i in xrange(scale*1000): if bindings: yield ("UPDATE t1 SET b=? WHERE a=%d" % (i,), (random.randint(0,500000),)) else: yield ("UPDATE t1 SET b=%d WHERE a=%d" % (random.randint(0,500000), i),) yield ("COMMIT",) # 1 big text update that touches every row in the table. yield ("UPDATE t1 SET c=a",) # Many individual text updates. Each row in the table is # touched through an index. yield ("BEGIN",) for i in xrange(1,scale*1000+1): if bindings: yield ("UPDATE t1 SET c=? WHERE a=%d" % (i,), (number_name(random.randint(0,500000)),)) else: yield ("UPDATE t1 SET c='%s' WHERE a=%d" % (number_name(random.randint(0,500000)),i),) yield ("COMMIT",) # Delete all content in a table. yield ("DELETE FROM t1",) # Copy one table into another yield ("INSERT INTO t1 SELECT * FROM t2",) # Delete all content in a table, one row at a time. yield ("DELETE FROM t1 WHERE 1",) # Refill the table yet again yield ("INSERT INTO t1 SELECT * FROM t2",) # Drop the table and recreate it without its indices. 
yield ("BEGIN",) yield ("DROP TABLE t1",) yield ("CREATE TABLE t1(a INTEGER, b INTEGER, c TEXT)",) yield ("COMMIT",) # Refill the table yet again. This copy should be faster because # there are no indices to deal with. yield ("INSERT INTO t1 SELECT * FROM t2",) # The three following used "ORDER BY random()" but we can't do that # as it causes each run to have different values, and hence different # amounts of sorting that have to go on. The "random()" has been # replaced by "c", the column that has the stringified number # Select 20000 rows from the table at random. yield ("SELECT rowid FROM t1 ORDER BY c LIMIT %d" % (scale*400,),) # Delete 20000 random rows from the table. yield (""" DELETE FROM t1 WHERE rowid IN (SELECT rowid FROM t1 ORDER BY c LIMIT %d)""" % (scale*400,),) yield ("SELECT count(*) FROM t1",) # Delete 20000 more rows at random from the table. yield ("""DELETE FROM t1 WHERE rowid IN (SELECT rowid FROM t1 ORDER BY c LIMIT %d)""" % (scale*400,),) yield ("SELECT count(*) FROM t1",) # Do a correctness test first if options.correctness: write("Correctness test\n") if 'bigstmt' in options.tests: text=";\n".join([x[0] for x in getlines(scale=1)])+";" if 'statements' in options.tests: withbindings=[line for line in getlines(scale=1, bindings=True)] if 'statements_nobindings' in options.tests: withoutbindings=[line for line in getlines(scale=1, bindings=False)] res={} for driver in ('apsw', 'pysqlite'): if not getattr(options, driver): continue for test in options.tests: name=driver+"_"+test write(name+'\t') sys.stdout.flush() if name=='pysqlite_bigstmt': write('limited functionality (ignoring)\n') continue con=globals().get(driver+"_setup")(":memory:") # we always correctness test on memory if test=='bigstmt': cursor=con.cursor() if driver=='apsw': func=cursor.execute else: func=cursor.executescript res[name]=[row for row in func(text)] write(str(len(res[name]))+"\n") continue cursor=con.cursor() if test=='statements': sql=withbindings elif 
test=='statements_nobindings': sql=withoutbindings l=[] for s in sql: for row in cursor.execute(*s): l.append(row) res[name]=l write(str(len(res[name]))+"\n") # All elements of res should be identical elements=res.keys() elements.sort() for i in range(0,len(elements)-1): write("%s == %s %s\n" % (elements[i], elements[i+1], res[elements[i]]==res[elements[i+1]])) del res text=None withbindings=None withoutbindings=None if options.dump_filename or "bigstmt" in options.tests: text=";\n".join([x[0] for x in getlines(scale=options.scale)])+";" # pysqlite requires final semicolon if options.dump_filename: open(options.dump_filename, "wt").write(text.encode("utf8")) sys.exit(0) if "statements" in options.tests: withbindings=list(getlines(scale=options.scale, bindings=True)) if "statements_nobindings" in options.tests: withoutbindings=list(getlines(scale=options.scale, bindings=False)) # Each test returns the amount of time taken. Note that we include # the close time as well. Otherwise the numbers become a function of # cache and other collection sizes as freeing members gets deferred to # close time. 
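The `getlines` generator yields tuples of `(sql,)` or `(sql, bindings)` built around the `number_name` helper, and the correctness and timing loops simply splat each tuple into `cursor.execute(*item)`. A compact, self-contained rendition of both pieces using the stdlib `sqlite3` module for illustration (APSW cursors accept the identical call shape; integer division is made explicit for Python 3):

```python
import sqlite3

ones = ("zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen")
tens = ("", "ten", "twenty", "thirty", "forty", "fifty", "sixty",
        "seventy", "eighty", "ninety")

def number_name(n):
    """Spell out 0 <= n < 1000000 in words, as the speedtest does."""
    if n >= 1000:
        txt = number_name(n // 1000) + " thousand"
        n %= 1000
    else:
        txt = ""
    if n >= 100:
        txt += " " + ones[n // 100] + " hundred"
        n %= 100
    if n >= 20:
        txt += " " + tens[n // 10]
        n %= 10
    if n > 0:
        txt += " " + ones[n]
    return txt.strip() or "zero"

def statements():
    # Miniature statement stream: tuples that unpack into cursor.execute()
    yield ("CREATE TABLE t1(a INTEGER, b INTEGER, c TEXT)",)
    yield ("BEGIN",)
    for i in range(5):
        yield ("INSERT INTO t1 VALUES(?, ?, ?)",
               (i, i * 100, number_name(i * 100)))
    yield ("COMMIT",)
    yield ("SELECT count(*) FROM t1 WHERE b >= ? AND b < ?", (100, 400))

# isolation_level=None matches the benchmark's pysqlite_setup: BEGIN and
# COMMIT pass through to SQLite untouched.
con = sqlite3.connect(":memory:", isolation_level=None)
cursor = con.cursor()
results = [row for item in statements() for row in cursor.execute(*item)]
con.close()
```

Only the final SELECT produces rows, so `results` holds a single count tuple; the same loop shape works whether or not a tuple carries bindings.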
def apsw_bigstmt(con): "APSW big statement" try: for row in con.cursor().execute(text): pass except: import pdb ; pdb.set_trace() pass def pysqlite_bigstmt(con): "pysqlite big statement" for row in con.executescript(text): pass def apsw_statements(con, bindings=withbindings): "APSW individual statements with bindings" cursor=con.cursor() for b in bindings: for row in cursor.execute(*b): pass def pysqlite_statements(con, bindings=withbindings): "pysqlite individual statements with bindings" cursor=con.cursor() for b in bindings: for row in cursor.execute(*b): pass def apsw_statements_nobindings(con): "APSW individual statements without bindings" return apsw_statements(con, withoutbindings) def pysqlite_statements_nobindings(con): "pysqlite individual statements without bindings" return pysqlite_statements(con, withoutbindings) # Do the work write("\nRunning tests - elapsed, CPU (results in seconds, lower is better)\n") for i in range(options.iterations): write("%d/%d\n" % (i+1, options.iterations)) for test in options.tests: # funky stuff is to alternate order each round for driver in ( ("apsw", "pysqlite"), ("pysqlite", "apsw"))[i%2]: if getattr(options, driver): name=driver+"_"+test func=locals().get(name, None) if not func: sys.stderr.write("No such test "+name+"\n") sys.exit(1) if os.path.exists(options.database): os.remove(options.database) write("\t"+func.__name__+(" "*(40-len(func.__name__)))) sys.stdout.flush() con=locals().get(driver+"_setup")(options.database) gc.collect(2) b4cpu=time.clock() b4=time.time() func(con) con.close() # see note above as to why we include this in the timing gc.collect(2) after=time.time() aftercpu=time.clock() write("%0.3f %0.3f\n" % (after-b4, aftercpu-b4cpu)) # Cleanup if using valgrind if options.apsw: if hasattr(apsw, "_fini"): # Cleans out buffer recycle cache apsw._fini() parser=optparse.OptionParser() parser.add_option("--apsw", dest="apsw", action="store_true", default=False, help="Include apsw in testing (%default)") 
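The timing loop above brackets each test with `gc.collect(2)` and samples wall-clock and CPU time via `time.time`/`time.clock`; `time.clock` was removed in Python 3.8, so a modern equivalent of that bracketing (a sketch, not the benchmark's actual harness) looks like:

```python
import gc
import time

def timed(func, *args):
    """Run func(*args), returning (result, elapsed_seconds, cpu_seconds).

    Garbage is collected before and after the call so that deferred
    frees are charged to the test, mirroring the speedtest's bracketing.
    """
    gc.collect(2)
    cpu0 = time.process_time()   # modern stand-in for time.clock()
    wall0 = time.perf_counter()  # modern stand-in for time.time()
    result = func(*args)
    gc.collect(2)
    wall = time.perf_counter() - wall0
    cpu = time.process_time() - cpu0
    return result, wall, cpu
```

For example, `timed(sum, range(100000))` returns the sum alongside the two timings.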
parser.add_option("--pysqlite", dest="pysqlite", action="store_true", default=False, help="Include pysqlite in testing (%default)") parser.add_option("--correctness", dest="correctness", action="store_true", default=False, help="Do a correctness test") parser.add_option("--scale", dest="scale", type="int", default=10, help="How many statements to execute. Each unit takes about 2 seconds per test on memory only databases. [Default %default]") parser.add_option("--database", dest="database", default=":memory:", help="The database file to use [Default %default]") parser.add_option("--tests", dest="tests", default="bigstmt,statements,statements_nobindings", help="What tests to run [Default %default]") parser.add_option("--iterations", dest="iterations", default=4, type="int", metavar="N", help="How many times to run the tests [Default %default]") parser.add_option("--tests-detail", dest="tests_detail", default=False, action="store_true", help="Print details of what the tests do. (Does not run the tests)") parser.add_option("--dump-sql", dest="dump_filename", metavar="FILENAME", help="Name of file to dump SQL to. This is useful for feeding into the SQLite command line shell.") parser.add_option("--sc-size", dest="scsize", type="int", default=100, metavar="N", help="Size of the statement cache. APSW will disable cache with value of zero. Pysqlite ensures a minimum of 5 [Default %default]") parser.add_option("--unicode", dest="unicode", type="int", default=0, help="Percentage of text that is unicode characters [Default %default]") parser.add_option("--data-size", dest="size", type="int", default=0, metavar="SIZE", help="Maximum size in characters of data items - keep this number small unless you are on 64 bits and have lots of memory with a small scale - you can easily consume multiple gigabytes [Default same as original TCL speedtest]") tests_detail="""\ bigstmt: Supplies the SQL as a single string consisting of multiple statements. 
apsw handles this normally via cursor.execute while pysqlite requires that cursor.executescript is called. The string will be several kilobytes and with a factor of 50 will be in the megabyte range. This is the kind of query you would run if you were restoring a database from a dump. (Note that pysqlite silently ignores returned data which also makes it execute faster). statements: Runs the SQL queries but uses bindings (? parameters). eg:: for i in range(3): cursor.execute("insert into table foo values(?)", (i,)) This test has many hits of the statement cache. statements_nobindings: Runs the SQL queries but doesn't use bindings. eg:: cursor.execute("insert into table foo values(0)") cursor.execute("insert into table foo values(1)") cursor.execute("insert into table foo values(2)") This test has no statement cache hits and shows the overhead of having a statement cache. In theory all the tests above should run in almost identical time as well as when using the SQLite command line shell. This tool shows you what happens in practise. \n""" if __name__=="__main__": options,args=parser.parse_args() if len(args): parser.error("Unexpected arguments "+str(args)) if options.tests_detail: write(tests_detail) sys.exit(0) if not options.apsw and not options.pysqlite and not options.dump_filename: parser.error("You should select at least one of --apsw or --pysqlite") doit() apsw-3.8.11.1-r1/src/0000755000000000000000000000000012576610040012523 5ustar rootrootapsw-3.8.11.1-r1/src/connection.c0000644000000000000000000027665412565422256015062 0ustar rootroot/* Connection handling code See the accompanying LICENSE file. */ /** .. _connections: Connections to a database ************************* A :class:`Connection` encapsulates access to a database. You then use :class:`cursors ` to issue queries against the database. You can have multple :class:`Connections ` open against the same database in the same process, across threads and in other processes. 
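The difference the `statements` and `statements_nobindings` tests probe — one cacheable SQL text reused with `?` bindings versus a fresh literal per row — can be reproduced with the stdlib `sqlite3` module, whose `cached_statements` parameter plays the role of APSW's statement cache (an illustrative sketch, not APSW code):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None,
                      cached_statements=100)
cur = con.cursor()
cur.execute("CREATE TABLE foo(x INTEGER)")

# "statements": one SQL text, three sets of bindings - every execution
# after the first can reuse the prepared statement from the cache.
for i in range(3):
    cur.execute("INSERT INTO foo VALUES(?)", (i,))

# "statements_nobindings": three distinct SQL texts - each one must be
# prepared from scratch, and none is ever found in the cache again.
for i in range(3):
    cur.execute("INSERT INTO foo VALUES(%d)" % i)

rows = cur.execute("SELECT count(*), sum(x) FROM foo").fetchone()
con.close()
```

Both loops insert the same data; only the preparation cost differs, which is exactly the overhead the two tests compare.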
*/ /* CALLBACK INFO */ /* details of a registered function passed as user data to sqlite3_create_function */ typedef struct FunctionCBInfo { PyObject_HEAD char *name; /* utf8 function name */ PyObject *scalarfunc; /* the function to call for stepping */ PyObject *aggregatefactory; /* factory for aggregate functions */ } FunctionCBInfo; /* a particular aggregate function instance used as sqlite3_aggregate_context */ typedef struct _aggregatefunctioncontext { PyObject *aggvalue; /* the aggregation value passed as first parameter */ PyObject *stepfunc; /* step function */ PyObject *finalfunc; /* final function */ } aggregatefunctioncontext; /* CONNECTION TYPE */ struct Connection { PyObject_HEAD sqlite3 *db; /* the actual database connection */ unsigned inuse; /* track if we are in use preventing concurrent thread mangling */ struct StatementCache *stmtcache; /* prepared statement cache */ PyObject *dependents; /* tracking cursors & blobs belonging to this connection */ PyObject *dependent_remove; /* dependents.remove for weak ref processing */ /* registered hooks/handlers (NULL or callable) */ PyObject *busyhandler; PyObject *rollbackhook; PyObject *profile; PyObject *updatehook; PyObject *commithook; PyObject *walhook; PyObject *progresshandler; PyObject *authorizer; PyObject *collationneeded; PyObject *exectrace; PyObject *rowtrace; /* if we are using one of our VFS since sqlite doesn't reference count them */ PyObject *vfs; /* used for nested with (contextmanager) statements */ long savepointlevel; /* informational attributes */ PyObject *open_flags; PyObject *open_vfs; /* weak reference support */ PyObject *weakreflist; }; typedef struct Connection Connection; static PyTypeObject ConnectionType; typedef struct _vtableinfo { PyObject *datasource; /* object with create/connect methods */ Connection *connection; /* the Connection this is registered against so we don't have to have a global table mapping sqlite3_db* to Connection* */ } vtableinfo; /* forward 
declarations */ struct APSWBlob; static void APSWBlob_init(struct APSWBlob *self, Connection *connection, sqlite3_blob *blob); static PyTypeObject APSWBlobType; #ifdef EXPERIMENTAL struct APSWBackup; static void APSWBackup_init(struct APSWBackup *self, Connection *dest, Connection *source, sqlite3_backup *backup); static PyTypeObject APSWBackupType; #endif struct APSWCursor; static void APSWCursor_init(struct APSWCursor *, Connection *); static PyTypeObject APSWCursorType; struct ZeroBlobBind; static PyTypeObject ZeroBlobBindType; static void FunctionCBInfo_dealloc(FunctionCBInfo *self) { if(self->name) PyMem_Free(self->name); Py_CLEAR(self->scalarfunc); Py_CLEAR(self->aggregatefactory); Py_TYPE(self)->tp_free((PyObject*)self); } /** .. class:: Connection This object wraps a `sqlite3 pointer `_. */ /* CONNECTION CODE */ static void Connection_internal_cleanup(Connection *self) { Py_CLEAR(self->busyhandler); Py_CLEAR(self->rollbackhook); Py_CLEAR(self->profile); Py_CLEAR(self->updatehook); Py_CLEAR(self->commithook); Py_CLEAR(self->walhook); Py_CLEAR(self->progresshandler); Py_CLEAR(self->authorizer); Py_CLEAR(self->collationneeded); Py_CLEAR(self->exectrace); Py_CLEAR(self->rowtrace); Py_CLEAR(self->vfs); Py_CLEAR(self->open_flags); Py_CLEAR(self->open_vfs); } static int Connection_close_internal(Connection *self, int force) { Py_ssize_t i; int res; PyObject *etype, *eval, *etb; if(force==2) PyErr_Fetch(&etype, &eval, &etb); /* Traverse dependents calling close. We assume the list may be perturbed by item we just called close on being removed from the list. 
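The `dependents`/`dependent_remove` pair in the C struct implements a common ownership pattern: the connection tracks its cursors, blobs and backups by weak reference, with `dependents.remove` installed as the weakref callback so dead entries clean themselves up, and close() sweeps whatever is still alive. A minimal Python sketch of the same pattern (hypothetical `Resource`/`Child` names, not APSW classes):

```python
import weakref

class Resource:
    """Parent that tracks children weakly and closes survivors on close(),
    mirroring Connection.dependents / dependent_remove in connection.c."""
    def __init__(self):
        self.dependents = []
        self.closed = False

    def adopt(self, child):
        # The callback removes the dead weakref from the list when the
        # child is garbage collected, just like dependent_remove.
        self.dependents.append(weakref.ref(child, self.dependents.remove))

    def close(self):
        # Iterate over a copy: closing a child may perturb the list,
        # the same hazard the C traversal guards against.
        for ref in list(self.dependents):
            child = ref()
            if child is not None:
                child.close()
        self.closed = True

class Child:
    def __init__(self, parent):
        self.closed = False
        parent.adopt(self)

    def close(self):
        self.closed = True
```

Because only weak references are held, a child that the application drops can be collected normally instead of being kept alive by its parent.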
*/ for(i=0; idependents);) { PyObject *item, *closeres, *orig; orig=PyList_GET_ITEM(self->dependents, i); item=PyWeakref_GetObject(orig); if(!item || item==Py_None) { i++; continue; } closeres=Call_PythonMethodV(item, "close", 1, "(i)", !!force); Py_XDECREF(closeres); if(!closeres) { assert(PyErr_Occurred()); if(force==2) apsw_write_unraiseable(NULL); else return 1; } if(idependents) && orig==PyList_GET_ITEM(self->dependents, i)) { /* list was not perturbed */ i++; } } if(self->stmtcache) statementcache_free(self->stmtcache); self->stmtcache=0; PYSQLITE_VOID_CALL( APSW_FAULT_INJECT(ConnectionCloseFail, res=sqlite3_close(self->db), res=SQLITE_IOERR) ); self->db=0; if (res!=SQLITE_OK) { SET_EXC(res, NULL); if(force==2) { PyErr_Format(ExcConnectionNotClosed, "apsw.Connection at address %p. The destructor " "has encountered an error %d closing the connection, but cannot raise an exception.", self, res); apsw_write_unraiseable(NULL); } } Connection_internal_cleanup(self); if(PyErr_Occurred()) { assert(force!=2); AddTraceBackHere(__FILE__, __LINE__, "Connection.close", NULL); return 1; } if(force==2) PyErr_Restore(etype, eval, etb); return 0; } /** .. method:: close([force=False]) Closes the database. If there are any outstanding :class:`cursors `, :class:`blobs ` or :class:`backups ` then they are closed too. It is normally not necessary to call this method as the database is automatically closed when there are no more references. It is ok to call the method multiple times. If your user defined functions or collations have direct or indirect references to the Connection then it won't be automatically garbage collected because of circular referencing that can't be automatically broken. Calling *close* will free all those objects and what they reference. SQLite is designed to survive power failures at even the most awkward moments. Consequently it doesn't matter if it is closed when the process is exited, or even if the exit is graceful or abrupt. 
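The survive-abrupt-exit behaviour described above can be seen directly: an open transaction does not outlive its connection, and the file reverts to the last committed state. The stdlib `sqlite3` module is used here for illustration; an APSW `Connection` behaves the same way:

```python
import os
import sqlite3
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "demo.db")

    con = sqlite3.connect(path, isolation_level=None)  # autocommit mode
    con.execute("CREATE TABLE t(x)")
    con.execute("INSERT INTO t VALUES(1)")   # committed immediately
    con.execute("BEGIN")
    con.execute("INSERT INTO t VALUES(2)")   # still uncommitted...
    con.close()                              # ...so it is rolled back

    con2 = sqlite3.connect(path)
    count = con2.execute("SELECT count(*) FROM t").fetchone()[0]
    con2.close()
```

Only the committed row survives the close; the row inserted inside the open transaction is gone when the database is next opened.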
In the worst case of having a transaction in progress, that transaction will be rolled back by the next program to open the database, reverting the database to a know good state. If *force* is *True* then any exceptions are ignored. -* sqlite3_close */ /* Closes cursors and blobs belonging to this connection */ static PyObject * Connection_close(Connection *self, PyObject *args) { int force=0; CHECK_USE(NULL); assert(!PyErr_Occurred()); if(!PyArg_ParseTuple(args, "|i:close(force=False)", &force)) return NULL; force=!!force; /* must be zero or one */ if(Connection_close_internal(self, force)) { assert(PyErr_Occurred()); return NULL; } Py_RETURN_NONE; } static void Connection_dealloc(Connection* self) { APSW_CLEAR_WEAKREFS; Connection_close_internal(self, 2); /* Our dependents all hold a refcount on us, so they must have all released before this destructor could be called */ assert(PyList_GET_SIZE(self->dependents)==0); Py_CLEAR(self->dependents); Py_CLEAR(self->dependent_remove); Py_TYPE(self)->tp_free((PyObject*)self); } static void Connection_remove_dependent(Connection *self, PyObject *o) { Py_ssize_t i; for(i=0;idependents);i++) { if(PyWeakref_GetObject(PyList_GET_ITEM(self->dependents, i))==o) { PyList_SetSlice(self->dependents, i, i+1, NULL); break; } } } static PyObject* Connection_new(PyTypeObject *type, APSW_ARGUNUSED PyObject *args, APSW_ARGUNUSED PyObject *kwds) { Connection *self; self = (Connection *)type->tp_alloc(type, 0); if (self != NULL) { self->db=0; self->inuse=0; self->dependents=PyList_New(0); self->dependent_remove=PyObject_GetAttrString(self->dependents, "remove"); self->stmtcache=0; self->busyhandler=0; self->rollbackhook=0; self->profile=0; self->updatehook=0; self->commithook=0; self->walhook=0; self->progresshandler=0; self->authorizer=0; self->collationneeded=0; self->exectrace=0; self->rowtrace=0; self->vfs=0; self->savepointlevel=0; self->open_flags=0; self->open_vfs=0; self->weakreflist=0; } return (PyObject *)self; } /** .. 
method:: __init__(filename, flags=SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, vfs=None, statementcachesize=100) Opens the named database. You can use ``:memory:`` to get a private temporary in-memory database that is not shared with any other connections. :param flags: One or more of the `open flags `_ orred together :param vfs: The name of the `vfs `_ to use. If :const:`None` then the default vfs will be used. :param statementcachesize: Use zero to disable the statement cache, or a number larger than the total distinct SQL statements you execute frequently. -* sqlite3_open_v2 .. seealso:: * :attr:`apsw.connection_hooks` * :ref:`statementcache` * :ref:`vfs` */ /* forward declaration so we can tell if it is one of ours */ static int apswvfs_xAccess(sqlite3_vfs *vfs, const char *zName, int flags, int *pResOut); static int Connection_init(Connection *self, PyObject *args, PyObject *kwds) { static char *kwlist[]={"filename", "flags", "vfs", "statementcachesize", NULL}; PyObject *hooks=NULL, *hook=NULL, *iterator=NULL, *hookargs=NULL, *hookresult=NULL; char *filename=NULL; int res=0; int flags=SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE; char *vfs=0; int statementcachesize=100; sqlite3_vfs *vfsused=0; if(!PyArg_ParseTupleAndKeywords(args, kwds, "es|izi:Connection(filename, flags=SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE, vfs=None, statementcachesize=100)", kwlist, STRENCODING, &filename, &flags, &vfs, &statementcachesize)) return -1; if(statementcachesize<0) statementcachesize=0; /* Technically there is a race condition as a vfs of the same name could be registered between our find and the open starting. Don't do that! We also have to manage the error message thread safety manually as self->db is null on entry. 
*/ PYSQLITE_VOID_CALL( vfsused=sqlite3_vfs_find(vfs); res=sqlite3_open_v2(filename, &self->db, flags, vfs); if(res!=SQLITE_OK) apsw_set_errmsg(sqlite3_errmsg(self->db)); ); SET_EXC(res, self->db); /* nb sqlite3_open always allocates the db even on error */ if(res!=SQLITE_OK) goto pyexception; if(vfsused && vfsused->xAccess==apswvfs_xAccess) { PyObject *pyvfsused=(PyObject*)(vfsused->pAppData); Py_INCREF(pyvfsused); self->vfs=pyvfsused; } /* record information */ self->open_flags=PyInt_FromLong(flags); if(vfsused) self->open_vfs=convertutf8string(vfsused->zName); /* get detailed error codes */ PYSQLITE_VOID_CALL(sqlite3_extended_result_codes(self->db, 1)); /* call connection hooks */ hooks=PyObject_GetAttrString(apswmodule, "connection_hooks"); if(!hooks) goto pyexception; hookargs=Py_BuildValue("(O)", self); if(!hookargs) goto pyexception; iterator=PyObject_GetIter(hooks); if(!iterator) { AddTraceBackHere(__FILE__, __LINE__, "Connection.__init__", "{s: O}", "connection_hooks", hooks); goto pyexception; } self->stmtcache=statementcache_init(self->db, statementcachesize); if(!self->stmtcache) goto pyexception; while( (hook=PyIter_Next(iterator)) ) { hookresult=PyEval_CallObject(hook, hookargs); if(!hookresult) goto pyexception; Py_DECREF(hook); hook=NULL; Py_DECREF(hookresult); } if(!PyErr_Occurred()) { res=0; goto finally; } pyexception: /* clean up db since it is useless - no need for user to call close */ assert(PyErr_Occurred()); res= -1; sqlite3_close(self->db); /* PYSQLITE_CALL not needed since noone else can have a reference to this connection */ self->db=0; Connection_internal_cleanup(self); assert(PyErr_Occurred()); finally: if(filename) PyMem_Free(filename); Py_XDECREF(hookargs); Py_XDECREF(iterator); Py_XDECREF(hooks); Py_XDECREF(hook); assert(PyErr_Occurred() || res==0); return res; } /** .. method:: blobopen(database, table, column, rowid, writeable) -> blob Opens a blob for :ref:`incremental I/O `. :param database: Name of the database. 
This will be ``main`` for the main connection and the name you specified for `attached `_ databases. :param table: The name of the table :param column: The name of the column :param rowid: The id that uniquely identifies the row. :param writeable: If True then you can read and write the blob. If False then you can only read it. :rtype: :class:`blob` .. seealso:: * :ref:`Blob I/O example ` * `SQLite row ids `_ -* sqlite3_blob_open */ static PyObject * Connection_blobopen(Connection *self, PyObject *args) { struct APSWBlob *apswblob=0; sqlite3_blob *blob=0; const char *dbname, *tablename, *column; long long rowid; int writing; int res; PyObject *weakref; CHECK_USE(NULL); CHECK_CLOSED(self, NULL); if(!PyArg_ParseTuple(args, "esesesLi:blobopen(database, table, column, rowid, rd_wr)", STRENCODING, &dbname, STRENCODING, &tablename, STRENCODING, &column, &rowid, &writing)) return NULL; PYSQLITE_CON_CALL(res=sqlite3_blob_open(self->db, dbname, tablename, column, rowid, writing, &blob)); PyMem_Free((void*)dbname); PyMem_Free((void*)tablename); PyMem_Free((void*)column); SET_EXC(res, self->db); if(res!=SQLITE_OK) return NULL; APSW_FAULT_INJECT(BlobAllocFails,apswblob=PyObject_New(struct APSWBlob, &APSWBlobType), (PyErr_NoMemory(), apswblob=NULL)); if(!apswblob) { PYSQLITE_CON_CALL(sqlite3_blob_close(blob)); return NULL; } APSWBlob_init(apswblob, self, blob); weakref=PyWeakref_NewRef((PyObject*)apswblob, self->dependent_remove); PyList_Append(self->dependents, weakref); Py_DECREF(weakref); return (PyObject*)apswblob; } #ifdef EXPERIMENTAL /** .. method:: backup(databasename, sourceconnection, sourcedatabasename) -> backup Opens a :ref:`backup object `. All data will be copied from source database to this database. :param databasename: Name of the database. This will be ``main`` for the main connection and the name you specified for `attached `_ databases. :param sourceconnection: The :class:`Connection` to copy a database from. 
:param sourcedatabasename: Name of the database in the source (eg ``main``). :rtype: :class:`backup` .. seealso:: * :ref:`Backup` -* sqlite3_backup_init */ static PyObject * Connection_backup(Connection *self, PyObject *args) { struct APSWBackup *apswbackup=0; sqlite3_backup *backup=0; int res=-123456; /* stupid compiler */ PyObject *result=NULL; PyObject *weakref=NULL; Connection *source=NULL; const char *databasename=NULL; const char *sourcedatabasename=NULL; int isetsourceinuse=0; CHECK_USE(NULL); CHECK_CLOSED(self, NULL); /* self (destination) can't be used if there are outstanding blobs, cursors or backups */ if(PyList_GET_SIZE(self->dependents)) { PyObject *args=NULL, *etype, *evalue, *etb; args=PyTuple_New(2); if(!args) goto thisfinally; PyTuple_SET_ITEM(args, 0, MAKESTR("The destination database has outstanding objects open on it. They must all be closed for the backup to proceed (otherwise corruption would be possible.)")); PyTuple_SET_ITEM(args, 1, self->dependents); Py_INCREF(self->dependents); PyErr_SetObject(ExcThreadingViolation, args); PyErr_Fetch(&etype, &evalue, &etb); PyErr_NormalizeException(&etype, &evalue, &etb); PyErr_Restore(etype, evalue, etb); thisfinally: Py_XDECREF(args); goto finally; } if(!PyArg_ParseTuple(args, "esOes:blobopen(databasename, sourceconnection, sourcedatabasename)", STRENCODING, &databasename, &source, STRENCODING, &sourcedatabasename)) return NULL; if(Py_TYPE(source)!=&ConnectionType) { PyErr_Format(PyExc_TypeError, "source connection needs to be a Connection instance"); goto finally; } if(!source->db) { PyErr_Format(PyExc_ValueError, "source connection is closed!"); goto finally; } if(source->inuse) { PyErr_Format(ExcThreadingViolation, "source connection is in concurrent use in another thread"); goto finally; } if(source->db==self->db) { PyErr_Format(PyExc_ValueError, "source and destination are the same which sqlite3_backup doesn't allow"); goto finally; } source->inuse=1; isetsourceinuse=1; 
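The backup machinery wrapped here (sqlite3_backup_init and friends) copies every page of one open database into another. The stdlib `sqlite3` module exposes the same machinery from Python 3.7, which is enough to sketch the flow; note that APSW's method is invoked on the *destination* connection, whereas the stdlib's is invoked on the *source*:

```python
import sqlite3

source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE t(x)")
source.executemany("INSERT INTO t VALUES(?)", [(i,) for i in range(10)])
source.commit()

dest = sqlite3.connect(":memory:")
source.backup(dest)   # copies every page of "main" into dest

count = dest.execute("SELECT count(*) FROM t").fetchone()[0]
source.close()
dest.close()
```

As the C code above enforces for APSW, the destination should have no outstanding cursors, blobs or backups while the copy runs.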
APSW_FAULT_INJECT(BackupInitFails, PYSQLITE_CON_CALL( (backup=sqlite3_backup_init(self->db, databasename, source->db, sourcedatabasename), res=backup?SQLITE_OK:sqlite3_extended_errcode(self->db)) ), res=SQLITE_NOMEM); if(res) { SET_EXC(res, self->db); goto finally; } APSW_FAULT_INJECT(BackupNewFails, apswbackup=PyObject_New(struct APSWBackup, &APSWBackupType), apswbackup=(struct APSWBackup*)PyErr_NoMemory()); if(!apswbackup) goto finally; APSWBackup_init(apswbackup, self, source, backup); Py_INCREF(self); Py_INCREF(source); backup=NULL; /* add to dependent lists */ weakref=PyWeakref_NewRef((PyObject*)apswbackup, self->dependent_remove); if(!weakref) goto finally; if(PyList_Append(self->dependents, weakref)) goto finally; Py_DECREF(weakref); weakref=PyWeakref_NewRef((PyObject*)apswbackup, ((Connection*)source)->dependent_remove); if(!weakref) goto finally; if(PyList_Append(((Connection*)source)->dependents, weakref)) goto finally; Py_DECREF(weakref); weakref=0; result=(PyObject*)apswbackup; apswbackup=NULL; finally: /* check errors occurred vs result */ assert(result?(PyErr_Occurred()==NULL):(PyErr_Occurred()!=NULL)); assert(result?(backup==NULL):1); if (backup) PYSQLITE_VOID_CALL(sqlite3_backup_finish(backup)); if (databasename) PyMem_Free((void*)databasename); if (sourcedatabasename) PyMem_Free((void*)sourcedatabasename); Py_XDECREF((PyObject*)apswbackup); Py_XDECREF(weakref); /* if inuse is set then we must be returning result */ assert( (self->inuse) ? (!!result):(result==NULL)); assert( result?(self->inuse):(!self->inuse)); if(isetsourceinuse) source->inuse=0; return result; } #endif /** .. method:: cursor() -> Cursor Creates a new :class:`Cursor` object on this database. 
:rtype: :class:`Cursor`
*/
static PyObject *
Connection_cursor(Connection *self)
{
  struct APSWCursor* cursor = NULL;
  PyObject *weakref;

  CHECK_USE(NULL);
  CHECK_CLOSED(self,NULL);

  APSW_FAULT_INJECT(CursorAllocFails,cursor = PyObject_New(struct APSWCursor, &APSWCursorType), (PyErr_NoMemory(), cursor=NULL));
  if(!cursor)
    return NULL;

  /* incref me since cursor holds a pointer */
  Py_INCREF((PyObject*)self);
  APSWCursor_init(cursor, self);
  weakref=PyWeakref_NewRef((PyObject*)cursor, self->dependent_remove);
  PyList_Append(self->dependents, weakref);
  Py_DECREF(weakref);

  return (PyObject*)cursor;
}

/** .. method:: setbusytimeout(milliseconds)

  If the database is locked, such as when another connection is making
  changes, SQLite will keep retrying.  This sets the maximum amount of
  time SQLite will keep retrying before giving up.  If the database is
  still busy then :class:`apsw.BusyError` will be raised.

  :param milliseconds: Maximum thousandths of a second to wait.

  If you previously called :meth:`~Connection.setbusyhandler` then
  calling this overrides that.

  .. seealso::

     * :meth:`Connection.setbusyhandler`
     * :ref:`Busy handling `

  -* sqlite3_busy_timeout
*/
static PyObject *
Connection_setbusytimeout(Connection *self, PyObject *args)
{
  int ms=0;
  int res;

  CHECK_USE(NULL);
  CHECK_CLOSED(self,NULL);

  if(!PyArg_ParseTuple(args, "i:setbusytimeout(milliseconds)", &ms))
    return NULL;

  PYSQLITE_CON_CALL(res=sqlite3_busy_timeout(self->db, ms));
  SET_EXC(res, self->db);
  if(res!=SQLITE_OK) return NULL;

  /* free any explicit busyhandler we may have had */
  Py_XDECREF(self->busyhandler);
  self->busyhandler=0;

  Py_RETURN_NONE;
}

/** .. method:: changes() -> int

  Returns the number of database rows that were changed (inserted,
  updated, or deleted) by the most recently completed INSERT, UPDATE,
  or DELETE statement.

  -* sqlite3_changes
*/
static PyObject *
Connection_changes(Connection *self)
{
  CHECK_USE(NULL);
  CHECK_CLOSED(self,NULL);
  return PyLong_FromLong(sqlite3_changes(self->db));
}

/** ..
method:: totalchanges() -> int

  Returns the total number of database rows that have been modified,
  inserted, or deleted since the database connection was opened.

  -* sqlite3_total_changes
*/
static PyObject *
Connection_totalchanges(Connection *self)
{
  CHECK_USE(NULL);
  CHECK_CLOSED(self,NULL);
  return PyLong_FromLong(sqlite3_total_changes(self->db));
}

/** .. method:: getautocommit() -> bool

  Returns whether the Connection is in auto commit mode (ie not in a
  transaction).

  -* sqlite3_get_autocommit
*/
static PyObject *
Connection_getautocommit(Connection *self)
{
  CHECK_USE(NULL);
  CHECK_CLOSED(self,NULL);
  if (sqlite3_get_autocommit(self->db))
    Py_RETURN_TRUE;
  Py_RETURN_FALSE;
}

/** .. method:: last_insert_rowid() -> int

  Returns the integer key of the most recent insert in the database.

  -* sqlite3_last_insert_rowid
*/
static PyObject *
Connection_last_insert_rowid(Connection *self)
{
  CHECK_USE(NULL);
  CHECK_CLOSED(self,NULL);
  return PyLong_FromLongLong(sqlite3_last_insert_rowid(self->db));
}

/** .. method:: interrupt()

  Causes any pending operations on the database to abort at the
  earliest opportunity.  You can call this from any thread.  For
  example you may have a long running query when the user presses the
  stop button in your user interface.  :exc:`InterruptError` will be
  raised in the query that got interrupted.

  -* sqlite3_interrupt
*/
static PyObject *
Connection_interrupt(Connection *self)
{
  CHECK_CLOSED(self, NULL);

  sqlite3_interrupt(self->db);  /* no return value */
  Py_RETURN_NONE;
}

/** .. method:: limit(id[, newval]) -> int

  If called with one parameter then the current limit for that *id* is
  returned.  If called with two then the limit is set to *newval*.

  :param id: One of the `runtime limit ids `_
  :param newval: The new limit.  This is a 32 bit signed integer even on 64 bit platforms.

  :returns: The limit in place on entry to the call.

  -* sqlite3_limit

  ..
seealso::

    * :ref:`Example `

*/
static PyObject *
Connection_limit(Connection *self, PyObject *args)
{
  int val=-1, res, id;

  CHECK_USE(NULL);
  CHECK_CLOSED(self, NULL);

  if(!PyArg_ParseTuple(args, "i|i", &id, &val))
    return NULL;

  res=sqlite3_limit(self->db, id, val);
  return PyLong_FromLong(res);
}

static void
updatecb(void *context, int updatetype, char const *databasename, char const *tablename, sqlite3_int64 rowid)
{
  /* The hook returns void.  That makes it impossible for us to abort
     immediately due to an error in the callback */

  PyGILState_STATE gilstate;
  PyObject *retval=NULL;
  Connection *self=(Connection *)context;

  assert(self);
  assert(self->updatehook);
  assert(self->updatehook!=Py_None);

  gilstate=PyGILState_Ensure();

  if(PyErr_Occurred())
    goto finally;  /* abort hook due to outstanding exception */

  retval=PyObject_CallFunction(self->updatehook, "(iO&O&L)", updatetype, convertutf8string, databasename, convertutf8string, tablename, rowid);

 finally:
  Py_XDECREF(retval);
  PyGILState_Release(gilstate);
}

/** .. method:: setupdatehook(callable)

  Calls *callable* whenever a row is updated, deleted or inserted.  If
  *callable* is :const:`None` then any existing update hook is
  removed.  The update hook cannot make changes to the database while
  the query is still executing, but can record them for later use or
  apply them in a different connection.

  The update hook is called with 4 parameters:

    type (int)
      :const:`SQLITE_INSERT`, :const:`SQLITE_DELETE` or :const:`SQLITE_UPDATE`

    database name (string)
      This is ``main`` for the database or the name specified in
      `ATTACH `_

    table name (string)
      The table on which the update happened

    rowid (64 bit integer)
      The affected row

  ..
seealso:: * :ref:`Example ` -* sqlite3_update_hook */ static PyObject * Connection_setupdatehook(Connection *self, PyObject *callable) { /* sqlite3_update_hook doesn't return an error code */ CHECK_USE(NULL); CHECK_CLOSED(self,NULL); if(callable==Py_None) { PYSQLITE_VOID_CALL(sqlite3_update_hook(self->db, NULL, NULL)); callable=NULL; goto finally; } if(!PyCallable_Check(callable)) return PyErr_Format(PyExc_TypeError, "update hook must be callable"); PYSQLITE_VOID_CALL(sqlite3_update_hook(self->db, updatecb, self)); Py_INCREF(callable); finally: Py_XDECREF(self->updatehook); self->updatehook=callable; Py_RETURN_NONE; } static void rollbackhookcb(void *context) { /* The hook returns void. That makes it impossible for us to abort immediately due to an error in the callback */ PyGILState_STATE gilstate; PyObject *retval=NULL; Connection *self=(Connection *)context; assert(self); assert(self->rollbackhook); assert(self->rollbackhook!=Py_None); gilstate=PyGILState_Ensure(); APSW_FAULT_INJECT(RollbackHookExistingError,,PyErr_NoMemory()); if(PyErr_Occurred()) goto finally; /* abort hook due to outstanding exception */ retval=PyEval_CallObject(self->rollbackhook, NULL); finally: Py_XDECREF(retval); PyGILState_Release(gilstate); } /** .. method:: setrollbackhook(callable) Sets a callable which is invoked during a rollback. If *callable* is :const:`None` then any existing rollback hook is removed. The *callable* is called with no parameters and the return value is ignored. 
-* sqlite3_rollback_hook */ static PyObject * Connection_setrollbackhook(Connection *self, PyObject *callable) { /* sqlite3_rollback_hook doesn't return an error code */ CHECK_USE(NULL); CHECK_CLOSED(self,NULL); if(callable==Py_None) { PYSQLITE_VOID_CALL(sqlite3_rollback_hook(self->db, NULL, NULL)); callable=NULL; goto finally; } if(!PyCallable_Check(callable)) return PyErr_Format(PyExc_TypeError, "rollback hook must be callable"); PYSQLITE_VOID_CALL(sqlite3_rollback_hook(self->db, rollbackhookcb, self)); Py_INCREF(callable); finally: Py_XDECREF(self->rollbackhook); self->rollbackhook=callable; Py_RETURN_NONE; } #ifdef EXPERIMENTAL /* sqlite3_profile */ static void profilecb(void *context, const char *statement, sqlite_uint64 runtime) { /* The hook returns void. That makes it impossible for us to abort immediately due to an error in the callback */ PyGILState_STATE gilstate; PyObject *retval=NULL; Connection *self=(Connection *)context; assert(self); assert(self->profile); assert(self->profile!=Py_None); gilstate=PyGILState_Ensure(); if(PyErr_Occurred()) goto finally; /* abort hook due to outstanding exception */ retval=PyObject_CallFunction(self->profile, "(O&K)", convertutf8string, statement, runtime); finally: Py_XDECREF(retval); PyGILState_Release(gilstate); } /** .. method:: setprofile(callable) Sets a callable which is invoked at the end of execution of each statement and passed the statement string and how long it took to execute. (The execution time is in nanoseconds.) Note that it is called only on completion. If for example you do a ``SELECT`` and only read the first result, then you won't reach the end of the statement. 
-* sqlite3_profile */ static PyObject * Connection_setprofile(Connection *self, PyObject *callable) { /* sqlite3_profile doesn't return an error code */ CHECK_USE(NULL); CHECK_CLOSED(self,NULL); if(callable==Py_None) { PYSQLITE_VOID_CALL(sqlite3_profile(self->db, NULL, NULL)); callable=NULL; goto finally; } if(!PyCallable_Check(callable)) return PyErr_Format(PyExc_TypeError, "profile function must be callable"); PYSQLITE_VOID_CALL(sqlite3_profile(self->db, profilecb, self)); Py_INCREF(callable); finally: Py_XDECREF(self->profile); self->profile=callable; Py_RETURN_NONE; } #endif /* EXPERIMENTAL - sqlite3_profile */ static int commithookcb(void *context) { /* The hook returns 0 for commit to go ahead and non-zero to abort commit (turn into a rollback). We return non-zero for errors */ PyGILState_STATE gilstate; PyObject *retval=NULL; int ok=1; /* error state */ Connection *self=(Connection *)context; assert(self); assert(self->commithook); assert(self->commithook!=Py_None); gilstate=PyGILState_Ensure(); APSW_FAULT_INJECT(CommitHookExistingError,,PyErr_NoMemory()); if(PyErr_Occurred()) goto finally; /* abort hook due to outstanding exception */ retval=PyEval_CallObject(self->commithook, NULL); if(!retval) goto finally; /* abort hook due to exeception */ ok=PyObject_IsTrue(retval); assert(ok==-1 || ok==0 || ok==1); if(ok==-1) { ok=1; goto finally; /* abort due to exception in return value */ } finally: Py_XDECREF(retval); PyGILState_Release(gilstate); return ok; } /** .. method:: setcommithook(callable) *callable* will be called just before a commit. It should return zero for the commit to go ahead and non-zero for it to be turned into a rollback. In the case of an exception in your callable, a non-zero (ie rollback) value is returned. .. 
seealso:: * :ref:`Example ` -* sqlite3_commit_hook */ static PyObject * Connection_setcommithook(Connection *self, PyObject *callable) { /* sqlite3_commit_hook doesn't return an error code */ CHECK_USE(NULL); CHECK_CLOSED(self,NULL); if(callable==Py_None) { PYSQLITE_VOID_CALL(sqlite3_commit_hook(self->db, NULL, NULL)); callable=NULL; goto finally; } if(!PyCallable_Check(callable)) return PyErr_Format(PyExc_TypeError, "commit hook must be callable"); PYSQLITE_VOID_CALL(sqlite3_commit_hook(self->db, commithookcb, self)); Py_INCREF(callable); finally: Py_XDECREF(self->commithook); self->commithook=callable; Py_RETURN_NONE; } static int walhookcb(void *context, APSW_ARGUNUSED sqlite3 *db, const char *dbname, int npages) { PyGILState_STATE gilstate; PyObject *retval=NULL; int code=SQLITE_ERROR; Connection *self=(Connection *)context; assert(self); assert(self->walhook); assert(self->walhook!=Py_None); assert(self->db==db); gilstate=PyGILState_Ensure(); retval=PyEval_CallFunction(self->walhook, "(OO&i)", self, convertutf8string, dbname, npages); if(!retval) { assert(PyErr_Occurred()); AddTraceBackHere(__FILE__, __LINE__, "walhookcallback", "{s: O, s: s, s: i}", "Connection", self, "dbname", dbname, "npages", npages); goto finally; } if(!PyIntLong_Check(retval)) { PyErr_Format(PyExc_TypeError, "wal hook must return a number"); AddTraceBackHere(__FILE__, __LINE__, "walhookcallback", "{s: O, s: s, s: i, s: O}", "Connection", self, "dbname", dbname, "npages", npages, "retval", retval); goto finally; } code=(int)PyIntLong_AsLong(retval); finally: Py_XDECREF(retval); PyGILState_Release(gilstate); return code; } /** .. method:: setwalhook(callable) *callable* will be called just after data is committed in :ref:`wal` mode. It should return :const:`SQLITE_OK` or an error code. 
The callback is called with 3 parameters:

    * The Connection
    * The database name (eg "main" or the name of an attached database)
    * The number of pages in the wal log

  You can pass in None in order to clear an existing hook.

  -* sqlite3_wal_hook
*/
static PyObject *
Connection_setwalhook(Connection *self, PyObject *callable)
{
  CHECK_USE(NULL);
  CHECK_CLOSED(self, NULL);

  if(callable==Py_None)
    {
      PYSQLITE_VOID_CALL(sqlite3_wal_hook(self->db, NULL, NULL));
      callable=NULL;
      goto finally;
    }

  if(!PyCallable_Check(callable))
    return PyErr_Format(PyExc_TypeError, "wal hook must be callable");

  PYSQLITE_VOID_CALL(sqlite3_wal_hook(self->db, walhookcb, self));
  Py_INCREF(callable);

 finally:
  Py_XDECREF(self->walhook);
  self->walhook=callable;
  Py_RETURN_NONE;
}

static int
progresshandlercb(void *context)
{
  /* The hook returns 0 for continue and non-zero to abort (rollback).
     We return non-zero for errors */

  PyGILState_STATE gilstate;
  PyObject *retval=NULL;
  int ok=1;  /* error state */
  Connection *self=(Connection *)context;

  assert(self);
  assert(self->progresshandler);

  gilstate=PyGILState_Ensure();

  retval=PyEval_CallObject(self->progresshandler, NULL);
  if(!retval)
    goto finally;  /* abort due to exception */

  ok=PyObject_IsTrue(retval);
  assert(ok==-1 || ok==0 || ok==1);
  if(ok==-1)
    {
      ok=1;
      goto finally;  /* abort due to exception in result */
    }

 finally:
  Py_XDECREF(retval);
  PyGILState_Release(gilstate);
  return ok;
}

/** .. method:: setprogresshandler(callable[, nsteps=20])

  Sets a callable which is invoked every *nsteps* SQLite instructions.
  The callable should return a non-zero value to abort or zero to
  continue.  (If there is an error in your Python *callable* then
  non-zero will be returned.)

  ..
seealso:: * :ref:`Example ` -* sqlite3_progress_handler */ static PyObject * Connection_setprogresshandler(Connection *self, PyObject *args) { /* sqlite3_progress_handler doesn't return an error code */ int nsteps=20; PyObject *callable=NULL; CHECK_USE(NULL); CHECK_CLOSED(self,NULL); if(!PyArg_ParseTuple(args, "O|i:setprogresshandler(callable, nsteps=20)", &callable, &nsteps)) return NULL; if(callable==Py_None) { PYSQLITE_VOID_CALL(sqlite3_progress_handler(self->db, 0, NULL, NULL)); callable=NULL; goto finally; } if(!PyCallable_Check(callable)) return PyErr_Format(PyExc_TypeError, "progress handler must be callable"); PYSQLITE_VOID_CALL(sqlite3_progress_handler(self->db, nsteps, progresshandlercb, self)); Py_INCREF(callable); finally: Py_XDECREF(self->progresshandler); self->progresshandler=callable; Py_RETURN_NONE; } static int authorizercb(void *context, int operation, const char *paramone, const char *paramtwo, const char *databasename, const char *triggerview) { /* should return one of SQLITE_OK, SQLITE_DENY, or SQLITE_IGNORE. 
(0, 1 or 2 respectively) */ PyGILState_STATE gilstate; PyObject *retval=NULL; int result=SQLITE_DENY; /* default to deny */ Connection *self=(Connection *)context; assert(self); assert(self->authorizer); assert(self->authorizer!=Py_None); gilstate=PyGILState_Ensure(); APSW_FAULT_INJECT(AuthorizerExistingError,,PyErr_NoMemory()); if(PyErr_Occurred()) goto finally; /* abort due to earlier exception */ retval=PyObject_CallFunction(self->authorizer, "(iO&O&O&O&)", operation, convertutf8string, paramone, convertutf8string, paramtwo, convertutf8string, databasename, convertutf8string, triggerview); if(!retval) goto finally; /* abort due to exeception */ if (PyIntLong_Check(retval)) { result=PyIntLong_AsLong(retval); goto haveval; } PyErr_Format(PyExc_TypeError, "Authorizer must return a number"); AddTraceBackHere(__FILE__, __LINE__, "authorizer callback", "{s: i, s: s:, s: s, s: s}", "operation", operation, "paramone", paramone, "paramtwo", paramtwo, "databasename", databasename, "triggerview", triggerview); haveval: if (PyErr_Occurred()) result=SQLITE_DENY; finally: Py_XDECREF(retval); PyGILState_Release(gilstate); return result; } /** .. method:: setauthorizer(callable) While `preparing `_ statements, SQLite will call any defined authorizer to see if a particular action is ok to be part of the statement. Typical usage would be if you are running user supplied SQL and want to prevent harmful operations. You should also set the :class:`statementcachesize ` to zero. The authorizer callback has 5 parameters: * An `operation code `_ * A string (or None) dependent on the operation `(listed as 3rd) `_ * A string (or None) dependent on the operation `(listed as 4th) `_ * A string name of the database (or None) * Name of the innermost trigger or view doing the access (or None) The authorizer callback should return one of :const:`SQLITE_OK`, :const:`SQLITE_DENY` or :const:`SQLITE_IGNORE`. (:const:`SQLITE_DENY` is returned if there is an error in your Python code). .. 
seealso:: * :ref:`Example ` * :ref:`statementcache` -* sqlite3_set_authorizer */ static PyObject * Connection_setauthorizer(Connection *self, PyObject *callable) { int res; CHECK_USE(NULL); CHECK_CLOSED(self,NULL); if(callable==Py_None) { APSW_FAULT_INJECT(SetAuthorizerNullFail, PYSQLITE_CON_CALL(res=sqlite3_set_authorizer(self->db, NULL, NULL)), res=SQLITE_IOERR); if(res!=SQLITE_OK) { SET_EXC(res, self->db); return NULL; } callable=NULL; goto finally; } if(!PyCallable_Check(callable)) return PyErr_Format(PyExc_TypeError, "authorizer must be callable"); APSW_FAULT_INJECT(SetAuthorizerFail, PYSQLITE_CON_CALL(res=sqlite3_set_authorizer(self->db, authorizercb, self)), res=SQLITE_IOERR); if(res!=SQLITE_OK) { SET_EXC(res, self->db); return NULL; } Py_INCREF(callable); finally: Py_XDECREF(self->authorizer); self->authorizer=callable; Py_RETURN_NONE; } static void collationneeded_cb(void *pAux, APSW_ARGUNUSED sqlite3 *db, int eTextRep, const char *name) { PyObject *res=NULL, *pyname=NULL; Connection *self=(Connection*)pAux; PyGILState_STATE gilstate=PyGILState_Ensure(); assert(self->collationneeded); if(!self->collationneeded) goto finally; if(PyErr_Occurred()) goto finally; pyname=convertutf8string(name); if(pyname) res=PyEval_CallFunction(self->collationneeded, "(OO)", self, pyname); if(!pyname || !res) AddTraceBackHere(__FILE__, __LINE__, "collationneeded callback", "{s: O, s: i, s: s}", "Connection", self, "eTextRep", eTextRep, "name", name); Py_XDECREF(res); finally: Py_XDECREF(pyname); PyGILState_Release(gilstate); } /** .. method:: collationneeded(callable) *callable* will be called if a statement requires a `collation `_ that hasn't been registered. Your callable will be passed two parameters. The first is the connection object. The second is the name of the collation. If you have the collation code available then call :meth:`Connection.createcollation`. This is useful for creating collations on demand. 
For example you may include the `locale `_ in the collation name, but since there are thousands of locales in popular use it would not be useful to :meth:`preregister ` them all.  Using :meth:`~Connection.collationneeded` tells you when you need to register them.

  .. seealso::

    * :meth:`~Connection.createcollation`

  -* sqlite3_collation_needed
*/
static PyObject *
Connection_collationneeded(Connection *self, PyObject *callable)
{
  int res;

  CHECK_USE(NULL);
  CHECK_CLOSED(self,NULL);

  if(callable==Py_None)
    {
      APSW_FAULT_INJECT(CollationNeededNullFail,
                        PYSQLITE_CON_CALL(res=sqlite3_collation_needed(self->db, NULL, NULL)),
                        res=SQLITE_IOERR);
      if(res!=SQLITE_OK)
        {
          SET_EXC(res, self->db);
          return NULL;
        }
      callable=NULL;
      goto finally;
    }

  if(!PyCallable_Check(callable))
    return PyErr_Format(PyExc_TypeError, "collationneeded callback must be callable");

  APSW_FAULT_INJECT(CollationNeededFail,
                    PYSQLITE_CON_CALL(res=sqlite3_collation_needed(self->db, self, collationneeded_cb)),
                    res=SQLITE_IOERR);
  if(res!=SQLITE_OK)
    {
      SET_EXC(res, self->db);
      return NULL;
    }

  Py_INCREF(callable);

 finally:
  Py_XDECREF(self->collationneeded);
  self->collationneeded=callable;

  Py_RETURN_NONE;
}

static int
busyhandlercb(void *context, int ncall)
{
  /* Return zero for caller to get SQLITE_BUSY error.  We default to
     zero in case of error. */

  PyGILState_STATE gilstate;
  PyObject *retval;
  int result=0;  /* default to fail with SQLITE_BUSY */
  Connection *self=(Connection *)context;

  assert(self);
  assert(self->busyhandler);

  gilstate=PyGILState_Ensure();

  retval=PyObject_CallFunction(self->busyhandler, "i", ncall);
  if(!retval)
    goto finally;  /* abort due to exception */

  result=PyObject_IsTrue(retval);
  assert(result==-1 || result==0 || result==1);
  Py_DECREF(retval);

  if(result==-1)
    {
      result=0;
      goto finally;  /* abort due to exception converting retval */
    }

 finally:
  PyGILState_Release(gilstate);
  return result;
}

/** .. method:: setbusyhandler(callable)

  Sets the busy handler to callable.
callable will be called with one integer argument which is the number of prior calls to the busy callback for the same lock.  If the busy callback returns something that evaluates to False, then SQLite returns :const:`SQLITE_BUSY` to the calling code.  If the callback returns something that evaluates to True, then SQLite tries to open the table again and the cycle repeats.

  If you previously called :meth:`~Connection.setbusytimeout` then calling this overrides that.

  .. seealso::

     * :meth:`Connection.setbusytimeout`
     * :ref:`Busy handling `

  -* sqlite3_busy_handler
*/
static PyObject *
Connection_setbusyhandler(Connection *self, PyObject *callable)
{
  int res=SQLITE_OK;

  CHECK_USE(NULL);
  CHECK_CLOSED(self,NULL);

  if(callable==Py_None)
    {
      APSW_FAULT_INJECT(SetBusyHandlerNullFail,
                        PYSQLITE_CON_CALL(res=sqlite3_busy_handler(self->db, NULL, NULL)),
                        res=SQLITE_IOERR);
      if(res!=SQLITE_OK)
        {
          SET_EXC(res, self->db);
          return NULL;
        }
      callable=NULL;
      goto finally;
    }

  if(!PyCallable_Check(callable))
    return PyErr_Format(PyExc_TypeError, "busyhandler must be callable");

  APSW_FAULT_INJECT(SetBusyHandlerFail,
                    PYSQLITE_CON_CALL(res=sqlite3_busy_handler(self->db, busyhandlercb, self)),
                    res=SQLITE_IOERR);
  if(res!=SQLITE_OK)
    {
      SET_EXC(res, self->db);
      return NULL;
    }

  Py_INCREF(callable);

 finally:
  Py_XDECREF(self->busyhandler);
  self->busyhandler=callable;

  Py_RETURN_NONE;
}

#if defined(EXPERIMENTAL) && !defined(SQLITE_OMIT_LOAD_EXTENSION)
/* extension loading */

/** .. method:: enableloadextension(enable)

  Enables/disables `extension loading `_ which is disabled by default.

  :param enable: If True then extension loading is enabled, else it is disabled.

  -* sqlite3_enable_load_extension

  ..
seealso:: * :meth:`~Connection.loadextension` */ static PyObject * Connection_enableloadextension(Connection *self, PyObject *enabled) { int enabledp, res; CHECK_USE(NULL); CHECK_CLOSED(self, NULL); /* get the boolean value */ enabledp=PyObject_IsTrue(enabled); if(enabledp==-1) return NULL; if (PyErr_Occurred()) return NULL; /* call function */ APSW_FAULT_INJECT(EnableLoadExtensionFail, PYSQLITE_CON_CALL(res=sqlite3_enable_load_extension(self->db, enabledp)), res=SQLITE_IOERR); SET_EXC(res, self->db); /* done */ if (res==SQLITE_OK) Py_RETURN_NONE; return NULL; } /** .. method:: loadextension(filename[, entrypoint]) Loads *filename* as an `extension `_ :param filename: The file to load. This must be Unicode or Unicode compatible :param entrypoint: The initialization method to call. If this parameter is not supplied then the SQLite default of ``sqlite3_extension_init`` is used. :raises ExtensionLoadingError: If the extension could not be loaded. The exception string includes more details. -* sqlite3_load_extension .. 
seealso:: * :meth:`~Connection.enableloadextension` */ static PyObject * Connection_loadextension(Connection *self, PyObject *args) { int res; char *zfile=NULL, *zproc=NULL, *errmsg=NULL; CHECK_USE(NULL); CHECK_CLOSED(self, NULL); if(!PyArg_ParseTuple(args, "es|z:loadextension(filename, entrypoint=None)", STRENCODING, &zfile, &zproc)) return NULL; PYSQLITE_CON_CALL(res=sqlite3_load_extension(self->db, zfile, zproc, &errmsg)); PyMem_Free(zfile); /* load_extension doesn't set the error message on the db so we have to make exception manually */ if(res!=SQLITE_OK) { assert(errmsg); PyErr_Format(ExcExtensionLoading, "ExtensionLoadingError: %s", errmsg?errmsg:"unspecified"); sqlite3_free(errmsg); return NULL; } Py_RETURN_NONE; } #endif /* EXPERIMENTAL extension loading */ /* USER DEFINED FUNCTION CODE.*/ static PyTypeObject FunctionCBInfoType = { APSW_PYTYPE_INIT "apsw.FunctionCBInfo", /*tp_name*/ sizeof(FunctionCBInfo), /*tp_basicsize*/ 0, /*tp_itemsize*/ (destructor)FunctionCBInfo_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_VERSION_TAG, /*tp_flags*/ "FunctionCBInfo object", /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ 0, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ 0, /* tp_alloc */ 0, /* tp_new */ 0, /* tp_free */ 0, /* tp_is_gc */ 0, /* tp_bases */ 0, /* tp_mro */ 0, /* tp_cache */ 0, /* tp_subclasses */ 0, /* tp_weaklist */ 0 /* tp_del */ APSW_PYTYPE_VERSION }; static FunctionCBInfo * allocfunccbinfo(void) { FunctionCBInfo *res=PyObject_New(FunctionCBInfo, 
&FunctionCBInfoType); if(res) { res->name=0; res->scalarfunc=0; res->aggregatefactory=0; } return res; } /* converts a python object into a sqlite3_context result */ static void set_context_result(sqlite3_context *context, PyObject *obj) { if(!obj) { assert(PyErr_Occurred()); sqlite3_result_error_code(context, MakeSqliteMsgFromPyException(NULL)); sqlite3_result_error(context, "bad object given to set_context_result", -1); return; } /* DUPLICATE(ish) code: this is substantially similar to the code in APSWCursor_dobinding. If you fix anything here then do it there as well. */ if(obj==Py_None) { sqlite3_result_null(context); return; } #if PY_MAJOR_VERSION < 3 if(PyInt_Check(obj)) { sqlite3_result_int64(context, PyInt_AS_LONG(obj)); return; } #endif if (PyLong_Check(obj)) { sqlite3_result_int64(context, PyLong_AsLongLong(obj)); return; } if (PyFloat_Check(obj)) { sqlite3_result_double(context, PyFloat_AS_DOUBLE(obj)); return; } if (PyUnicode_Check(obj)) { UNIDATABEGIN(obj) APSW_FAULT_INJECT(SetContextResultUnicodeConversionFails,,strdata=(char*)PyErr_NoMemory()); if(strdata) { #ifdef APSW_TEST_LARGE_OBJECTS APSW_FAULT_INJECT(SetContextResultLargeUnicode,,strbytes=0x001234567890L); #endif if(strbytes>APSW_INT32_MAX) { SET_EXC(SQLITE_TOOBIG, NULL); sqlite3_result_error_toobig(context); } else USE16(sqlite3_result_text)(context, strdata, strbytes, SQLITE_TRANSIENT); } else sqlite3_result_error(context, "Unicode conversions failed", -1); UNIDATAEND(obj); return; } #if PY_MAJOR_VERSION < 3 if (PyString_Check(obj)) { const char *val=PyString_AS_STRING(obj); const Py_ssize_t lenval=PyString_GET_SIZE(obj); const char *chk=val; /* check if string is all ascii if less than 10kb in size */ if(lenval<10000) for(;chkAPSW_INT32_MAX) { SET_EXC(SQLITE_TOOBIG, NULL); sqlite3_result_error_toobig(context); } else USE16(sqlite3_result_text)(context, strdata, strbytes, SQLITE_TRANSIENT); } else sqlite3_result_error(context, "Unicode conversions failed", -1); UNIDATAEND(str2); 
Py_DECREF(str2); } else /* just ascii chars */ sqlite3_result_text(context, val, lenval, SQLITE_TRANSIENT); return; } #endif if (PyObject_CheckReadBuffer(obj)) { const void *buffer; Py_ssize_t buflen; int asrb=PyObject_AsReadBuffer(obj, &buffer, &buflen); APSW_FAULT_INJECT(SetContextResultAsReadBufferFail,,(PyErr_NoMemory(),asrb=-1)); if(asrb!=0) { sqlite3_result_error(context, "PyObject_AsReadBuffer failed", -1); return; } if (buflen>APSW_INT32_MAX) sqlite3_result_error_toobig(context); else sqlite3_result_blob(context, buffer, buflen, SQLITE_TRANSIENT); return; } PyErr_Format(PyExc_TypeError, "Bad return type from function callback"); sqlite3_result_error(context, "Bad return type from function callback", -1); } /* Returns a new reference to a tuple formed from function parameters */ static PyObject * getfunctionargs(sqlite3_context *context, PyObject *firstelement, int argc, sqlite3_value **argv) { PyObject *pyargs=NULL; int i; int extra=0; /* extra first item */ if(firstelement) extra=1; APSW_FAULT_INJECT(GFAPyTuple_NewFail,pyargs=PyTuple_New((long)argc+extra),pyargs=PyErr_NoMemory()); if(!pyargs) { sqlite3_result_error(context, "PyTuple_New failed", -1); goto error; } if(extra) { Py_INCREF(firstelement); PyTuple_SET_ITEM(pyargs, 0, firstelement); } for(i=0;iscalarfunc); APSW_FAULT_INJECT(CBDispatchExistingError,,PyErr_NoMemory()); if(PyErr_Occurred()) { sqlite3_result_error_code(context, MakeSqliteMsgFromPyException(NULL)); sqlite3_result_error(context, "Prior Python Error", -1); goto finalfinally; } pyargs=getfunctionargs(context, NULL, argc, argv); if(!pyargs) goto finally; assert(!PyErr_Occurred()); retval=PyEval_CallObject(cbinfo->scalarfunc, pyargs); if(retval) set_context_result(context, retval); finally: if (PyErr_Occurred()) { char *errmsg=NULL; char *funname=sqlite3_mprintf("user-defined-scalar-%s", cbinfo->name); sqlite3_result_error_code(context, MakeSqliteMsgFromPyException(&errmsg)); sqlite3_result_error(context, errmsg, -1); 
AddTraceBackHere(__FILE__, __LINE__, funname, "{s: i, s: s}", "NumberOfArguments", argc, "message", errmsg); sqlite3_free(funname); sqlite3_free(errmsg); } finalfinally: Py_XDECREF(pyargs); Py_XDECREF(retval); PyGILState_Release(gilstate); } static aggregatefunctioncontext * getaggregatefunctioncontext(sqlite3_context *context) { aggregatefunctioncontext *aggfc=sqlite3_aggregate_context(context, sizeof(aggregatefunctioncontext)); FunctionCBInfo *cbinfo; PyObject *retval; /* have we seen it before? */ if(aggfc->aggvalue) return aggfc; /* fill in with Py_None so we know it is valid */ aggfc->aggvalue=Py_None; Py_INCREF(Py_None); cbinfo=(FunctionCBInfo*)sqlite3_user_data(context); assert(cbinfo); assert(cbinfo->aggregatefactory); /* call the aggregatefactory to get our working objects */ retval=PyEval_CallObject(cbinfo->aggregatefactory, NULL); if(!retval) return aggfc; /* it should have returned a tuple of 3 items: object, stepfunction and finalfunction */ if(!PyTuple_Check(retval)) { PyErr_Format(PyExc_TypeError, "Aggregate factory should return tuple of (object, stepfunction, finalfunction)"); goto finally; } if(PyTuple_GET_SIZE(retval)!=3) { PyErr_Format(PyExc_TypeError, "Aggregate factory should return 3 item tuple of (object, stepfunction, finalfunction)"); goto finally; } /* we don't care about the type of the zeroth item (object) ... 
*/ /* stepfunc */ if (!PyCallable_Check(PyTuple_GET_ITEM(retval,1))) { PyErr_Format(PyExc_TypeError, "stepfunction must be callable"); goto finally; } /* finalfunc */ if (!PyCallable_Check(PyTuple_GET_ITEM(retval,2))) { PyErr_Format(PyExc_TypeError, "final function must be callable"); goto finally; } aggfc->aggvalue=PyTuple_GET_ITEM(retval,0); aggfc->stepfunc=PyTuple_GET_ITEM(retval,1); aggfc->finalfunc=PyTuple_GET_ITEM(retval,2); Py_INCREF(aggfc->aggvalue); Py_INCREF(aggfc->stepfunc); Py_INCREF(aggfc->finalfunc); Py_DECREF(Py_None); /* we used this earlier as a sentinel */ finally: assert(retval); Py_DECREF(retval); return aggfc; } /* Note that we can't call sqlite3_result_error in the step function as SQLite doesn't want to you to do that (and core dumps!) Consequently if an error is returned, we will still be repeatedly called. */ static void cbdispatch_step(sqlite3_context *context, int argc, sqlite3_value **argv) { PyGILState_STATE gilstate; PyObject *pyargs; PyObject *retval; aggregatefunctioncontext *aggfc=NULL; gilstate=PyGILState_Ensure(); if (PyErr_Occurred()) goto finalfinally; aggfc=getaggregatefunctioncontext(context); if (PyErr_Occurred()) goto finally; assert(aggfc); pyargs=getfunctionargs(context, aggfc->aggvalue, argc, argv); if(!pyargs) goto finally; assert(!PyErr_Occurred()); retval=PyEval_CallObject(aggfc->stepfunc, pyargs); Py_DECREF(pyargs); Py_XDECREF(retval); if(!retval) { assert(PyErr_Occurred()); } finally: if(PyErr_Occurred()) { char *funname=0; FunctionCBInfo *cbinfo=(FunctionCBInfo*)sqlite3_user_data(context); assert(cbinfo); funname=sqlite3_mprintf("user-defined-aggregate-step-%s", cbinfo->name); AddTraceBackHere(__FILE__, __LINE__, funname, "{s: i}", "NumberOfArguments", argc); sqlite3_free(funname); } finalfinally: PyGILState_Release(gilstate); } /* this is somewhat similar to cbdispatch_step, except we also have to do some cleanup of the aggregatefunctioncontext */ static void cbdispatch_final(sqlite3_context *context) { 
PyGILState_STATE gilstate; PyObject *retval=NULL; aggregatefunctioncontext *aggfc=NULL; PyObject *err_type=NULL, *err_value=NULL, *err_traceback=NULL; gilstate=PyGILState_Ensure(); PyErr_Fetch(&err_type, &err_value, &err_traceback); aggfc=getaggregatefunctioncontext(context); assert(aggfc); APSW_FAULT_INJECT(CBDispatchFinalError,,PyErr_NoMemory()); if((err_type||err_value||err_traceback) || PyErr_Occurred() || !aggfc->finalfunc) { sqlite3_result_error(context, "Prior Python Error in step function", -1); goto finally; } retval=PyObject_CallFunctionObjArgs(aggfc->finalfunc, aggfc->aggvalue, NULL); set_context_result(context, retval); Py_XDECREF(retval); finally: /* we also free the aggregatefunctioncontext here */ assert(aggfc->aggvalue); /* should always be set, perhaps to Py_None */ Py_XDECREF(aggfc->aggvalue); Py_XDECREF(aggfc->stepfunc); Py_XDECREF(aggfc->finalfunc); if(PyErr_Occurred() && (err_type||err_value||err_traceback)) { PyErr_Format(PyExc_Exception, "An exception happened during cleanup of an aggregate function, but there was already an error in the step function so only that can be returned"); apsw_write_unraiseable(NULL); } if(err_type||err_value||err_traceback) PyErr_Restore(err_type, err_value, err_traceback); if(PyErr_Occurred()) { char *funname=0; FunctionCBInfo *cbinfo=(FunctionCBInfo*)sqlite3_user_data(context); assert(cbinfo); funname=sqlite3_mprintf("user-defined-aggregate-final-%s", cbinfo->name); AddTraceBackHere(__FILE__, __LINE__, funname, NULL); sqlite3_free(funname); } /* sqlite3 frees the actual underlying memory we used (aggfc itself) */ PyGILState_Release(gilstate); } /* Used for the create function v2 xDestroy callbacks. Note this is called even when supplying NULL for the function implementation (ie deleting it), so XDECREF has to be used. */ static void apsw_free_func(void *funcinfo) { PyGILState_STATE gilstate; gilstate=PyGILState_Ensure(); Py_XDECREF((PyObject*)funcinfo); PyGILState_Release(gilstate); } /** ..
method:: createscalarfunction(name, callable[, numargs=-1, deterministic=False]) Registers a scalar function. Scalar functions operate on one set of parameters once. :param name: The string name of the function. It should be less than 255 characters :param callable: The function that will be called :param numargs: How many arguments the function takes, with -1 meaning any number :param deterministic: When True this means the function always returns the same result for the same input arguments. SQLite's query planner can perform additional optimisations for deterministic functions. For example a random() function is not deterministic while one that returns the length of a string is. .. note:: You can register the same named function but with different *callable* and *numargs*. For example:: connection.createscalarfunction("toip", ipv4convert, 4) connection.createscalarfunction("toip", ipv6convert, 16) connection.createscalarfunction("toip", strconvert, -1) The one with the matching *numargs* will be called; only if no exact match exists is the one registered with negative *numargs* called. ..
seealso:: * :ref:`Example ` * :meth:`~Connection.createaggregatefunction` -* sqlite3_create_function_v2 */ static PyObject * Connection_createscalarfunction(Connection *self, PyObject *args, PyObject *kwargs) { static char *kwlist[]={"name", "callable", "numargs", "deterministic", NULL}; int numargs=-1; PyObject *callable=NULL; PyObject *odeterministic=NULL; int deterministic=0; char *name=0; FunctionCBInfo *cbinfo; int res; CHECK_USE(NULL); CHECK_CLOSED(self,NULL); if(!PyArg_ParseTupleAndKeywords(args, kwargs, "esO|iO!:createscalarfunction(name,callback, numargs=-1, deterministic=False)", kwlist, STRENCODING, &name, &callable, &numargs, &PyBool_Type, &odeterministic)) return NULL; assert(name); assert(callable); if(odeterministic) { res=PyObject_IsTrue(odeterministic); if (res<0) return NULL; deterministic=res; } if(callable!=Py_None && !PyCallable_Check(callable)) { PyMem_Free(name); PyErr_SetString(PyExc_TypeError, "parameter must be callable"); return NULL; } if(callable==Py_None) { cbinfo=0; } else { cbinfo=allocfunccbinfo(); if(!cbinfo) goto finally; cbinfo->name=name; cbinfo->scalarfunc=callable; Py_INCREF(callable); } PYSQLITE_CON_CALL( res=sqlite3_create_function_v2(self->db, name, numargs, SQLITE_UTF8|(deterministic?SQLITE_DETERMINISTIC:0), cbinfo, cbinfo?cbdispatch_func:NULL, NULL, NULL, apsw_free_func) ); if(res) { /* Note: On error sqlite3_create_function_v2 calls the destructor (apsw_free_func)! */ SET_EXC(res, self->db); goto finally; } if(callable==Py_None) PyMem_Free(name); finally: if(PyErr_Occurred()) return NULL; Py_RETURN_NONE; } /** .. method:: createaggregatefunction(name, factory[, numargs=-1]) Registers an aggregate function. Aggregate functions operate on all the relevant rows such as counting how many there are. :param name: The string name of the function. 
It should be less than 255 characters :param factory: The callable that will be invoked :param numargs: How many arguments the function takes, with -1 meaning any number When a query starts, the *factory* will be called and must return a tuple of 3 items: a context object This can be of any type a step function This function is called once for each row. The first parameter will be the context object and the remaining parameters will be from the SQL statement. Any value returned will be ignored. a final function This function is called at the very end with the context object as a parameter. The value returned is set as the return for the function. The final function is always called even if an exception was raised by the step function. This allows you to ensure any resources are cleaned up. .. note:: You can register the same named function but with different callables and *numargs*. See :meth:`~Connection.createscalarfunction` for an example. .. seealso:: * :ref:`Example ` * :meth:`~Connection.createscalarfunction` -* sqlite3_create_function_v2 */ static PyObject * Connection_createaggregatefunction(Connection *self, PyObject *args) { int numargs=-1; PyObject *callable; char *name=0; FunctionCBInfo *cbinfo; int res; CHECK_USE(NULL); CHECK_CLOSED(self,NULL); if(!PyArg_ParseTuple(args, "esO|i:createaggregatefunction(name, factorycallback, numargs=-1)", STRENCODING, &name, &callable, &numargs)) return NULL; assert(name); assert(callable); if(callable!=Py_None && !PyCallable_Check(callable)) { PyMem_Free(name); PyErr_SetString(PyExc_TypeError, "parameter must be callable"); return NULL; } if(callable==Py_None) cbinfo=0; else { cbinfo=allocfunccbinfo(); if(!cbinfo) goto finally; cbinfo->name=name; cbinfo->aggregatefactory=callable; Py_INCREF(callable); } PYSQLITE_CON_CALL( res=sqlite3_create_function_v2(self->db, name, numargs, SQLITE_UTF8, cbinfo, NULL, cbinfo?cbdispatch_step:NULL, cbinfo?cbdispatch_final:NULL, apsw_free_func) ); if(res) { /* Note: On error
sqlite3_create_function_v2 calls the destructor (apsw_free_func)! */ SET_EXC(res, self->db); goto finally; } if(callable==Py_None) PyMem_Free(name); finally: if(PyErr_Occurred()) return NULL; Py_RETURN_NONE; } /* USER DEFINED COLLATION CODE.*/ static int collation_cb(void *context, int stringonelen, const void *stringonedata, int stringtwolen, const void *stringtwodata) { PyGILState_STATE gilstate; PyObject *cbinfo=(PyObject*)context; PyObject *pys1=NULL, *pys2=NULL, *retval=NULL; int result=0; assert(cbinfo); gilstate=PyGILState_Ensure(); if(PyErr_Occurred()) goto finally; /* outstanding error */ pys1=convertutf8stringsize(stringonedata, stringonelen); pys2=convertutf8stringsize(stringtwodata, stringtwolen); if(!pys1 || !pys2) goto finally; /* failed to allocate strings */ retval=PyObject_CallFunction(cbinfo, "(OO)", pys1, pys2); if(!retval) { AddTraceBackHere(__FILE__, __LINE__, "Collation_callback", "{s: O, s: O, s: O}", "callback", cbinfo, "stringone", pys1, "stringtwo", pys2); goto finally; /* execution failed */ } if (PyIntLong_Check(retval)) { result=PyIntLong_AsLong(retval); goto haveval; } PyErr_Format(PyExc_TypeError, "Collation callback must return a number"); AddTraceBackHere(__FILE__, __LINE__, "collation callback", "{s: O, s: O}", "stringone", pys1, "stringtwo", pys2); haveval: if(PyErr_Occurred()) result=0; finally: Py_XDECREF(pys1); Py_XDECREF(pys2); Py_XDECREF(retval); PyGILState_Release(gilstate); return result; } static void collation_destroy(void *context) { PyGILState_STATE gilstate=PyGILState_Ensure(); Py_DECREF((PyObject*)context); PyGILState_Release(gilstate); } /** .. method:: createcollation(name, callback) You can control how SQLite sorts (termed `collation `_) when giving the ``COLLATE`` term to a `SELECT `_. For example your collation could take into account locale or do numeric sorting. The *callback* will be called with two items. 
It should return -1 if the first is less than the second, 0 if they are equal, and 1 if the first is greater:: def mycollation(one, two): if one < two: return -1 if one == two: return 0 if one > two: return 1 .. seealso:: * :ref:`Example ` -* sqlite3_create_collation_v2 */ static PyObject * Connection_createcollation(Connection *self, PyObject *args) { PyObject *callable=NULL; char *name=0; int res; CHECK_USE(NULL); CHECK_CLOSED(self,NULL); if(!PyArg_ParseTuple(args, "esO:createcollation(name,callback)", STRENCODING, &name, &callable)) return NULL; assert(name); assert(callable); if(callable!=Py_None && !PyCallable_Check(callable)) { PyMem_Free(name); PyErr_SetString(PyExc_TypeError, "parameter must be callable"); return NULL; } PYSQLITE_CON_CALL( res=sqlite3_create_collation_v2(self->db, name, SQLITE_UTF8, (callable!=Py_None)?callable:NULL, (callable!=Py_None)?collation_cb:NULL, (callable!=Py_None)?collation_destroy:NULL) ); PyMem_Free(name); if(res!=SQLITE_OK) { SET_EXC(res, self->db); return NULL; } if (callable!=Py_None) Py_INCREF(callable); Py_RETURN_NONE; } /** .. method:: filecontrol(dbname, op, pointer) -> bool Calls the :meth:`~VFSFile.xFileControl` method on the :ref:`VFS` implementing :class:`file access ` for the database. :param dbname: The name of the database to affect (eg "main", "temp", attached name) :param op: A `numeric code `_ with values less than 100 reserved for SQLite internal use. :param pointer: A number which is treated as a ``void pointer`` at the C level. :returns: True or False indicating if the VFS understood the op. If you want data returned back then the *pointer* needs to point to something mutable.
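Because *pointer* is just an integer, the round-trip through ``ctypes`` can be exercised on its own with only the standard library (a sketch; no connection or VFS involved, and the variable names are illustrative):

```python
import ctypes

# Wrap a mutable object; the wrapper must stay referenced for the
# duration of the call or the address becomes dangling.
obj = {"foo": 1}
objwrap = ctypes.py_object(obj)
pointer = ctypes.addressof(objwrap)  # the integer you would pass as *pointer*

# What the receiving side would do with that integer:
recovered = ctypes.py_object.from_address(pointer).value
recovered["result"] = "it worked"

assert recovered is obj   # same object, not a copy
print(obj["result"])      # prints: it worked
```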
Here is an example using `ctypes `_ of passing a Python dictionary to :meth:`~VFSFile.xFileControl` which can then modify the dictionary to set return values:: obj={"foo": 1, 2: 3} # object we want to pass objwrap=ctypes.py_object(obj) # objwrap must live before and after the call else # it gets garbage collected connection.filecontrol( "main", # which db 123, # our op code ctypes.addressof(objwrap)) # get pointer The :meth:`~VFSFile.xFileControl` method then looks like this:: def xFileControl(self, op, pointer): if op==123: # our op code obj=ctypes.py_object.from_address(pointer).value # play with obj - you can use id() to verify it is the same print obj["foo"] obj["result"]="it worked" return True else: # pass to parent/superclass return super(MyFile, self).xFileControl(op, pointer) This is how you set the chunk size by which the database grows. Do not combine it into one line as the c_int would be garbage collected before the filecontrol call is made:: chunksize=ctypes.c_int(32768) connection.filecontrol("main", apsw.SQLITE_FCNTL_CHUNK_SIZE, ctypes.addressof(chunksize)) -* sqlite3_file_control */ static PyObject * Connection_filecontrol(Connection *self, PyObject *args) { PyObject *pyptr; void *ptr=NULL; int res=SQLITE_ERROR, op; char *dbname=NULL; CHECK_USE(NULL); CHECK_CLOSED(self,NULL); if(!PyArg_ParseTuple(args, "esiO", STRENCODING, &dbname, &op, &pyptr)) return NULL; if(PyIntLong_Check(pyptr)) ptr=PyLong_AsVoidPtr(pyptr); else PyErr_Format(PyExc_TypeError, "Argument is not a number (pointer)"); if(PyErr_Occurred()) { AddTraceBackHere(__FILE__, __LINE__, "Connection.filecontrol", "{s: O}", "args", args); goto finally; } PYSQLITE_CON_CALL(res=sqlite3_file_control(self->db, dbname, op, ptr)); if(res!=SQLITE_OK && res!=SQLITE_NOTFOUND) SET_EXC(res, self->db); finally: if(dbname) PyMem_Free(dbname); if(PyErr_Occurred()) return NULL; if(res==SQLITE_NOTFOUND) Py_RETURN_FALSE; Py_RETURN_TRUE; } /** .. 
method:: sqlite3pointer() -> int Returns the underlying `sqlite3 * `_ for the connection. This method is useful if there are other C level libraries in the same process and you want them to use the APSW connection handle. The value is returned as a number using :meth:`PyLong_FromVoidPtr` under the hood. You should also ensure that you increment the reference count on the :class:`Connection` for as long as the other libraries are using the pointer. It is also a very good idea to call :meth:`sqlitelibversion` and ensure it is the same as the other libraries. */ static PyObject* Connection_sqlite3pointer(Connection *self) { CHECK_USE(NULL); CHECK_CLOSED(self, NULL); return PyLong_FromVoidPtr(self->db); } /** .. method:: wal_autocheckpoint(n) Sets how often the :ref:`wal` checkpointing is run. :param n: A number representing the checkpointing interval or zero/negative to disable auto checkpointing. -* sqlite3_wal_autocheckpoint */ static PyObject * Connection_wal_autocheckpoint(Connection *self, PyObject *arg) { long v; int res; CHECK_USE(NULL); CHECK_CLOSED(self, NULL); if(!PyIntLong_Check(arg)) return PyErr_Format(PyExc_TypeError, "Parameter must be a number"); v=PyIntLong_AsLong(arg); APSW_FAULT_INJECT(WalAutocheckpointFails, PYSQLITE_CON_CALL(res=sqlite3_wal_autocheckpoint(self->db, (int)v)), res=SQLITE_IOERR); SET_EXC(res, self->db); /* done */ if (res==SQLITE_OK) Py_RETURN_NONE; return NULL; } /** .. method:: wal_checkpoint(dbname=None, mode=apsw.SQLITE_CHECKPOINT_PASSIVE) -> ( int, int ) Does a WAL checkpoint. Has no effect if the database(s) are not in WAL mode. :param dbname: The name of the database or all databases if None :param mode: One of the `checkpoint modes `__. :return: A tuple of the size of the WAL log in frames and the number of frames checkpointed as described in the `documentation `__. 
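A short sketch of interpreting that tuple; the ``checkpoint_backlog`` helper below is illustrative, not part of APSW, and the connection calls are shown only as comments:

```python
def checkpoint_backlog(nlog, nckpt):
    """Frames still in the WAL that were not checkpointed this pass."""
    return nlog - nckpt

# With a real connection it might be used like this (not run here):
#   nlog, nckpt = db.wal_checkpoint("main", apsw.SQLITE_CHECKPOINT_PASSIVE)
#   if checkpoint_backlog(nlog, nckpt):
#       ...  # readers held some frames back; try again later

print(checkpoint_backlog(10, 10))  # 0 means the whole log was checkpointed
```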
-* sqlite3_wal_checkpoint_v2 */ static PyObject * Connection_wal_checkpoint(Connection *self, PyObject *args, PyObject *kwargs) { static char *kwlist[]={"dbname", "mode", NULL}; int res; char *dbname=NULL; int mode=SQLITE_CHECKPOINT_PASSIVE; int nLog=0, nCkpt=0; CHECK_USE(NULL); CHECK_CLOSED(self, NULL); if(!PyArg_ParseTupleAndKeywords(args, kwargs, "|esi:wal_checkpoint(dbname=None)", kwlist, STRENCODING, &dbname, &mode)) return NULL; APSW_FAULT_INJECT(WalCheckpointFails, PYSQLITE_CON_CALL(res=sqlite3_wal_checkpoint_v2(self->db, dbname, mode, &nLog, &nCkpt)), res=SQLITE_IOERR); SET_EXC(res, self->db); PyMem_Free(dbname); /* done */ if (res==SQLITE_OK) return Py_BuildValue("ii", nLog, nCkpt); return NULL; } #ifdef EXPERIMENTAL static struct sqlite3_module apsw_vtable_module; static void apswvtabFree(void *context); /** .. method:: createmodule(name, datasource) Registers a virtual table. See :ref:`virtualtables` for details. .. seealso:: * :ref:`Example ` -* sqlite3_create_module_v2 */ static PyObject * Connection_createmodule(Connection *self, PyObject *args) { char *name=NULL; PyObject *datasource=NULL; vtableinfo *vti; int res; CHECK_USE(NULL); CHECK_CLOSED(self, NULL); if(!PyArg_ParseTuple(args, "esO:createmodule(name, datasource)", STRENCODING, &name, &datasource)) return NULL; Py_INCREF(datasource); vti=PyMem_Malloc(sizeof(vtableinfo)); vti->connection=self; vti->datasource=datasource; /* SQLite is really finicky. Note that it calls the destructor on failure */ APSW_FAULT_INJECT(CreateModuleFail, PYSQLITE_CON_CALL((res=sqlite3_create_module_v2(self->db, name, &apsw_vtable_module, vti, apswvtabFree), vti=NULL)), res=SQLITE_IOERR); PyMem_Free(name); SET_EXC(res, self->db); if(res!=SQLITE_OK) { if(vti) apswvtabFree(vti); return NULL; } Py_RETURN_NONE; } /** .. method:: overloadfunction(name, nargs) Registers a placeholder function so that a virtual table can provide an implementation via :meth:`VTTable.FindFunction`.
:param name: Function name :param nargs: How many arguments the function takes Due to :cvstrac:`3507` underlying errors will not be returned. -* sqlite3_overload_function */ static PyObject* Connection_overloadfunction(Connection *self, PyObject *args) { char *name; int nargs, res; CHECK_USE(NULL); CHECK_CLOSED(self, NULL); if(!PyArg_ParseTuple(args, "esi:overloadfunction(name, nargs)", STRENCODING, &name, &nargs)) return NULL; APSW_FAULT_INJECT(OverloadFails, PYSQLITE_CON_CALL(res=sqlite3_overload_function(self->db, name, nargs)), res=SQLITE_NOMEM); PyMem_Free(name); SET_EXC(res, self->db); if(res) return NULL; Py_RETURN_NONE; } #endif /** .. method:: setexectrace(callable) *callable* is called with the cursor, statement and bindings for each :meth:`~Cursor.execute` or :meth:`~Cursor.executemany` on this Connection, unless the :class:`Cursor` installed its own tracer. Your execution tracer can also abort execution of a statement. If *callable* is :const:`None` then any existing execution tracer is removed. .. seealso:: * :ref:`tracing` * :ref:`rowtracer` * :meth:`Cursor.setexectrace` */ static PyObject * Connection_setexectrace(Connection *self, PyObject *func) { CHECK_USE(NULL); CHECK_CLOSED(self, NULL); if(func!=Py_None && !PyCallable_Check(func)) { PyErr_SetString(PyExc_TypeError, "parameter must be callable"); return NULL; } if(func!=Py_None) Py_INCREF(func); Py_XDECREF(self->exectrace); self->exectrace=(func==Py_None)?0:func; Py_RETURN_NONE; } /** .. method:: setrowtrace(callable) *callable* is called with the cursor and row being returned for :class:`cursors ` associated with this Connection, unless the Cursor installed its own tracer. You can change the data that is returned or cause the row to be skipped altogether. If *callable* is :const:`None` then any existing row tracer is removed. .. 
seealso:: * :ref:`tracing` * :ref:`rowtracer` * :meth:`Cursor.setexectrace` */ static PyObject * Connection_setrowtrace(Connection *self, PyObject *func) { CHECK_USE(NULL); CHECK_CLOSED(self, NULL); if(func!=Py_None && !PyCallable_Check(func)) { PyErr_SetString(PyExc_TypeError, "parameter must be callable"); return NULL; } if(func!=Py_None) Py_INCREF(func); Py_XDECREF(self->rowtrace); self->rowtrace=(func==Py_None)?0:func; Py_RETURN_NONE; } /** .. method:: getexectrace() -> callable or None Returns the currently installed (via :meth:`~Connection.setexectrace`) execution tracer. .. seealso:: * :ref:`tracing` */ static PyObject * Connection_getexectrace(Connection *self) { PyObject *ret; CHECK_USE(NULL); CHECK_CLOSED(self, NULL); ret=(self->exectrace)?(self->exectrace):Py_None; Py_INCREF(ret); return ret; } /** .. method:: getrowtrace() -> callable or None Returns the currently installed (via :meth:`~Connection.setrowtrace`) row tracer. .. seealso:: * :ref:`tracing` */ static PyObject * Connection_getrowtrace(Connection *self) { PyObject *ret; CHECK_USE(NULL); CHECK_CLOSED(self, NULL); ret=(self->rowtrace)?(self->rowtrace):Py_None; Py_INCREF(ret); return ret; } /** .. method:: __enter__() -> context You can use the database as a `context manager `_ as defined in :pep:`0343`. When you use *with* a transaction is started. If the block finishes with an exception then the transaction is rolled back, otherwise it is committed. For example:: with connection: connection.cursor().execute("....") with connection: # nested is supported call_function(connection) connection.cursor().execute("...") with connection as db: # You can also use 'as' call_function2(db) db.cursor().execute("...") Behind the scenes the `savepoint `_ functionality introduced in SQLite 3.6.8 is used. 
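The SQL this drives can be sketched in pure Python (an illustrative model of the savepoint stack, derived from the statements shown above; not the actual implementation, and no database is involved):

```python
class SavepointModel:
    """Records the SQL a nested with-block would issue (illustrative only)."""

    def __init__(self):
        self.level = 0
        self.sql = []

    def __enter__(self):
        self.sql.append('SAVEPOINT "_apsw-%d"' % self.level)
        self.level += 1
        return self

    def __exit__(self, etype, evalue, etb):
        self.level -= 1
        if etype is None:
            self.sql.append('RELEASE SAVEPOINT "_apsw-%d"' % self.level)
        else:
            # rolled back, but the savepoint still has to be released
            self.sql.append('ROLLBACK TO SAVEPOINT "_apsw-%d"' % self.level)
            self.sql.append('RELEASE SAVEPOINT "_apsw-%d"' % self.level)
        return False  # exceptions propagate, matching the real behaviour

db = SavepointModel()
with db:
    with db:  # nesting uses a deeper savepoint
        pass
print(db.sql[0])  # prints: SAVEPOINT "_apsw-0"
```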
*/ static PyObject * Connection_enter(Connection *self) { char *sql=0; int res; CHECK_USE(NULL); CHECK_CLOSED(self, NULL); sql=sqlite3_mprintf("SAVEPOINT \"_apsw-%ld\"", self->savepointlevel); if(!sql) return PyErr_NoMemory(); /* exec tracing - we allow it to prevent the transaction */ if(self->exectrace && self->exectrace!=Py_None) { int result; PyObject *retval=PyObject_CallFunction(self->exectrace, "OsO", self, sql, Py_None); if(!retval) goto error; result=PyObject_IsTrue(retval); Py_DECREF(retval); if(result==-1) { assert(PyErr_Occurred()); goto error; } if(result==0) { PyErr_Format(ExcTraceAbort, "Aborted by false/null return value of exec tracer"); goto error; } assert(result==1); } APSW_FAULT_INJECT(ConnectionEnterExecFailed, PYSQLITE_CON_CALL(res=sqlite3_exec(self->db, sql, 0, 0, 0)), res=SQLITE_NOMEM); sqlite3_free(sql); SET_EXC(res, self->db); if(res) return NULL; self->savepointlevel++; Py_INCREF(self); return (PyObject*)self; error: assert(PyErr_Occurred()); if(sql) sqlite3_free(sql); return NULL; } /** .. method:: __exit__() -> False Implements context manager in conjunction with :meth:`~Connection.__enter__`. Any exception that happened in the *with* block is raised after committing or rolling back the savepoint. */ /* A helper function.
Returns -1 on memory error, 0 on failure and 1 on success */ static int connection_trace_and_exec(Connection *self, int release, int sp, int continue_on_trace_error) { char *sql; int res; sql=sqlite3_mprintf(release?"RELEASE SAVEPOINT \"_apsw-%ld\"": "ROLLBACK TO SAVEPOINT \"_apsw-%ld\"", sp); if (!sql) { PyErr_NoMemory(); return -1; } if(self->exectrace && self->exectrace!=Py_None) { PyObject *result; PyObject *etype=NULL, *eval=NULL, *etb=NULL; if(PyErr_Occurred()) PyErr_Fetch(&etype, &eval, &etb); result=PyObject_CallFunction(self->exectrace, "OsO", self, sql, Py_None); Py_XDECREF(result); if(etype || eval || etb) PyErr_Restore(etype, eval, etb); if (!result && !continue_on_trace_error) { sqlite3_free(sql); return 0; } } PYSQLITE_CON_CALL(res=sqlite3_exec(self->db, sql, 0, 0, 0)); SET_EXC(res, self->db); sqlite3_free(sql); assert (res==SQLITE_OK || PyErr_Occurred()); return res==SQLITE_OK; } static PyObject * Connection_exit(Connection *self, PyObject *args) { PyObject *etype, *evalue, *etb; long sp; int res; int return_null=0; CHECK_USE(NULL); CHECK_CLOSED(self, NULL); /* the builtin python __exit__ implementations don't error if you call __exit__ without corresponding enters */ if(self->savepointlevel==0) Py_RETURN_FALSE; /* We always pop a level, irrespective of how this function returns - (ie successful or error) */ if(self->savepointlevel) self->savepointlevel--; sp=self->savepointlevel; if(!PyArg_ParseTuple(args, "OOO", &etype, &evalue, &etb)) return NULL; /* try the commit first because it may fail in which case we'll need to roll it back - see issue 98 */ if(etype==Py_None && evalue==Py_None && etb==Py_None) { res=connection_trace_and_exec(self, 1, sp, 0); if(res==-1) return NULL; if(res==1) Py_RETURN_FALSE; assert(res==0); assert(PyErr_Occurred()); return_null=1; } res=connection_trace_and_exec(self, 0, sp, 1); if(res==-1) return NULL; return_null=return_null || res==0; /* we have rolled back, but still need to release the savepoint */ 
res=connection_trace_and_exec(self, 1, sp, 1); return_null=return_null || res==0; if (return_null) return NULL; Py_RETURN_FALSE; } /** .. method:: config(op[, *args]) :param op: A `configuration operation `__ :param args: Zero or more arguments as appropriate for *op* -* sqlite3_db_config */ static PyObject * Connection_config(Connection *self, PyObject *args) { long opt; int res; CHECK_USE(NULL); CHECK_CLOSED(self, NULL); if(PyTuple_GET_SIZE(args)<1 || !PyIntLong_Check(PyTuple_GET_ITEM(args, 0))) return PyErr_Format(PyExc_TypeError, "There should be at least one argument with the first being a number"); opt=PyIntLong_AsLong(PyTuple_GET_ITEM(args,0)); if(PyErr_Occurred()) return NULL; switch(opt) { case SQLITE_DBCONFIG_ENABLE_FKEY: case SQLITE_DBCONFIG_ENABLE_TRIGGER: { int opdup, val, current; if(!PyArg_ParseTuple(args, "ii", &opdup, &val)) return NULL; APSW_FAULT_INJECT(DBConfigFails, PYSQLITE_CON_CALL(res=sqlite3_db_config(self->db, opdup, val, &current)), res=SQLITE_NOMEM); if(res!=SQLITE_OK) { SET_EXC(res, self->db); return NULL; } return PyInt_FromLong(current); } default: return PyErr_Format(PyExc_ValueError, "Unknown config operation %d", (int)opt); } } /** .. method:: status(op, reset=False) -> (int, int) Returns current and highwater measurements for the database. :param op: A `status parameter `_ :param reset: If *True* then the highwater is set to the current value :returns: A tuple of current value and highwater value .. seealso:: The :func:`status` example which works in exactly the same way.
* :ref:`Status example ` -* sqlite3_db_status */ static PyObject * Connection_status(Connection *self, PyObject *args) { int res, op, current=0, highwater=0, reset=0; CHECK_USE(NULL); CHECK_CLOSED(self, NULL); if(!PyArg_ParseTuple(args, "i|i:status(op, reset=False)", &op, &reset)) return NULL; PYSQLITE_CON_CALL(res=sqlite3_db_status(self->db, op, &current, &highwater, reset)); SET_EXC(res, NULL); if(res!=SQLITE_OK) return NULL; return Py_BuildValue("(ii)", current, highwater); } /** .. method:: readonly(name) -> bool True or False if the named (attached) database was opened readonly or file permissions don't allow writing. The main database is named "main". An exception is raised if the database doesn't exist. -* sqlite3_db_readonly */ static PyObject* Connection_readonly(Connection *self, PyObject *name) { int res=-1; PyObject *utf8name=NULL; CHECK_CLOSED(self, NULL); utf8name=getutf8string(name); if(!utf8name) return NULL; res=sqlite3_db_readonly(self->db, PyBytes_AS_STRING(utf8name)); Py_DECREF(utf8name); if(res==1) Py_RETURN_TRUE; if(res==0) Py_RETURN_FALSE; return PyErr_Format(exc_descriptors[0].cls, "Unknown database name"); } /** .. method:: db_filename(name) -> String Returns the full filename of the named (attached) database. The main database is named "main". -* sqlite3_db_filename */ static PyObject* Connection_db_filename(Connection *self, PyObject *name) { const char *res; PyObject *utf8name=NULL; CHECK_CLOSED(self, NULL); utf8name=getutf8string(name); if(!utf8name) return NULL; res=sqlite3_db_filename(self->db, PyBytes_AS_STRING(utf8name)); Py_DECREF(utf8name); return convertutf8string(res); } /** .. attribute:: filename The filename of the database.
-* sqlite3_db_filename */ static PyObject* Connection_getmainfilename(Connection *self) { CHECK_CLOSED(self, NULL); return convertutf8string(sqlite3_db_filename(self->db, "main")); } static PyGetSetDef Connection_getseters[] = { /* name getter setter doc closure */ {"filename", (getter)Connection_getmainfilename, NULL, "Returns filename of the database", NULL}, /* Sentinel */ {NULL, NULL, NULL, NULL, NULL} }; /** .. attribute:: open_flags The integer flags used to open the database. */ /** .. attribute:: open_vfs The string name of the vfs used to open the database. */ static PyMemberDef Connection_members[] = { /* name type offset flags doc */ {"open_flags", T_OBJECT, offsetof(Connection, open_flags), READONLY, "list of [flagsin, flagsout] used to open connection"}, {"open_vfs", T_OBJECT, offsetof(Connection, open_vfs), READONLY, "VFS name used to open database"}, {0, 0, 0, 0, 0} }; static PyMethodDef Connection_methods[] = { {"cursor", (PyCFunction)Connection_cursor, METH_NOARGS, "Create a new cursor" }, {"close", (PyCFunction)Connection_close, METH_VARARGS, "Closes the connection" }, {"setbusytimeout", (PyCFunction)Connection_setbusytimeout, METH_VARARGS, "Sets the sqlite busy timeout in milliseconds. 
Use zero to disable the timeout"}, {"interrupt", (PyCFunction)Connection_interrupt, METH_NOARGS, "Causes any pending database operations to abort at the earliest opportunity"}, {"createscalarfunction", (PyCFunction)Connection_createscalarfunction, METH_VARARGS|METH_KEYWORDS, "Creates a scalar function"}, {"createaggregatefunction", (PyCFunction)Connection_createaggregatefunction, METH_VARARGS, "Creates an aggregate function"}, {"setbusyhandler", (PyCFunction)Connection_setbusyhandler, METH_O, "Sets the busy handler"}, {"changes", (PyCFunction)Connection_changes, METH_NOARGS, "Returns the number of rows changed by last query"}, {"totalchanges", (PyCFunction)Connection_totalchanges, METH_NOARGS, "Returns the total number of changes to database since it was opened"}, {"getautocommit", (PyCFunction)Connection_getautocommit, METH_NOARGS, "Returns if the database is in auto-commit mode"}, {"createcollation", (PyCFunction)Connection_createcollation, METH_VARARGS, "Creates a collation function"}, {"last_insert_rowid", (PyCFunction)Connection_last_insert_rowid, METH_NOARGS, "Returns rowid for last insert"}, {"collationneeded", (PyCFunction)Connection_collationneeded, METH_O, "Sets collation needed callback"}, {"setauthorizer", (PyCFunction)Connection_setauthorizer, METH_O, "Sets an authorizer function"}, {"setupdatehook", (PyCFunction)Connection_setupdatehook, METH_O, "Sets an update hook"}, {"setrollbackhook", (PyCFunction)Connection_setrollbackhook, METH_O, "Sets a callable invoked before each rollback"}, {"blobopen", (PyCFunction)Connection_blobopen, METH_VARARGS, "Opens a blob for i/o"}, {"setprogresshandler", (PyCFunction)Connection_setprogresshandler, METH_VARARGS, "Sets a callback invoked periodically during long running calls"}, {"setcommithook", (PyCFunction)Connection_setcommithook, METH_O, "Sets a callback invoked on each commit"}, {"setwalhook", (PyCFunction)Connection_setwalhook, METH_O, "Sets the WAL hook"}, {"limit", (PyCFunction)Connection_limit, 
METH_VARARGS, "Gets and sets limits"}, #ifdef EXPERIMENTAL {"setprofile", (PyCFunction)Connection_setprofile, METH_O, "Sets a callable invoked with profile information after each statement"}, #if !defined(SQLITE_OMIT_LOAD_EXTENSION) {"enableloadextension", (PyCFunction)Connection_enableloadextension, METH_O, "Enables loading of SQLite extensions from shared libraries"}, {"loadextension", (PyCFunction)Connection_loadextension, METH_VARARGS, "loads SQLite extension"}, #endif {"createmodule", (PyCFunction)Connection_createmodule, METH_VARARGS, "registers a virtual table"}, {"overloadfunction", (PyCFunction)Connection_overloadfunction, METH_VARARGS, "overloads function for virtual table"}, {"backup", (PyCFunction)Connection_backup, METH_VARARGS, "starts a backup"}, #endif {"filecontrol", (PyCFunction)Connection_filecontrol, METH_VARARGS, "file control"}, {"sqlite3pointer", (PyCFunction)Connection_sqlite3pointer, METH_NOARGS, "gets underlying pointer"}, {"setexectrace", (PyCFunction)Connection_setexectrace, METH_O, "Installs a function called for every statement executed"}, {"setrowtrace", (PyCFunction)Connection_setrowtrace, METH_O, "Installs a function called for every row returned"}, {"getexectrace", (PyCFunction)Connection_getexectrace, METH_NOARGS, "Returns the current exec tracer function"}, {"getrowtrace", (PyCFunction)Connection_getrowtrace, METH_NOARGS, "Returns the current row tracer function"}, {"__enter__", (PyCFunction)Connection_enter, METH_NOARGS, "Context manager entry"}, {"__exit__", (PyCFunction)Connection_exit, METH_VARARGS, "Context manager exit"}, {"wal_autocheckpoint", (PyCFunction)Connection_wal_autocheckpoint, METH_O, "Set wal checkpoint threshold"}, {"wal_checkpoint", (PyCFunction)Connection_wal_checkpoint, METH_VARARGS|METH_KEYWORDS, "Do immediate WAL checkpoint"}, {"config", (PyCFunction)Connection_config, METH_VARARGS, "Configure this connection"}, {"status", (PyCFunction)Connection_status, METH_VARARGS, "Information about this connection"}, 
{"readonly", (PyCFunction)Connection_readonly, METH_O, "Check if database is readonly"}, {"db_filename", (PyCFunction)Connection_db_filename, METH_O, "Return filename of main or attached database"}, {0, 0, 0, 0} /* Sentinel */ }; static PyTypeObject ConnectionType = { APSW_PYTYPE_INIT "apsw.Connection", /*tp_name*/ sizeof(Connection), /*tp_basicsize*/ 0, /*tp_itemsize*/ (destructor)Connection_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_VERSION_TAG, /*tp_flags*/ "Connection object", /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(Connection, weakreflist), /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ Connection_methods, /* tp_methods */ Connection_members, /* tp_members */ Connection_getseters, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ (initproc)Connection_init, /* tp_init */ 0, /* tp_alloc */ Connection_new, /* tp_new */ 0, /* tp_free */ 0, /* tp_is_gc */ 0, /* tp_bases */ 0, /* tp_mro */ 0, /* tp_cache */ 0, /* tp_subclasses */ 0, /* tp_weaklist */ 0 /* tp_del */ APSW_PYTYPE_VERSION }; apsw-3.8.11.1-r1/src/pyutil.c /* Cross Python version compatibility code See the accompanying LICENSE file. */ /* Python 2.5 compatibility when size_t types become 64 bit. SQLite3 is limited to 32 bit sizes even on a 64 bit machine.
*/ #if PY_VERSION_HEX < 0x02050000 typedef int Py_ssize_t; #endif /* Python 2.3 doesn't have these */ #ifndef Py_RETURN_NONE #define Py_RETURN_NONE return Py_INCREF(Py_None), Py_None #endif #ifndef Py_RETURN_TRUE #define Py_RETURN_TRUE return Py_INCREF(Py_True), Py_True #define Py_RETURN_FALSE return Py_INCREF(Py_False), Py_False #endif /* fun with objects - this is defined in Python 3 */ #ifndef Py_TYPE #define Py_TYPE(x) ((x)->ob_type) #endif #ifndef Py_REFCNT #define Py_REFCNT(x) (((PyObject*)x)->ob_refcnt) #endif #ifndef Py_CLEAR #define Py_CLEAR(exp) \ do \ { \ if(exp) \ { \ PyObject *_tmpclear=(PyObject*)(exp); \ exp=0; \ Py_DECREF(_tmpclear); \ } \ } while(0) #endif /* define as zero if not present - introduced in Python 2.6 */ #ifndef Py_TPFLAGS_HAVE_VERSION_TAG #define Py_TPFLAGS_HAVE_VERSION_TAG 0 #endif /* How to make a string from a utf8 constant */ #if PY_MAJOR_VERSION < 3 #define MAKESTR PyString_FromString #else #define MAKESTR PyUnicode_FromString #endif /* Py 2 vs 3 can't decide how to start type initialization */ #if PY_MAJOR_VERSION < 3 /* The zero is ob_size */ #define APSW_PYTYPE_INIT \ PyObject_HEAD_INIT(NULL) 0, #else #define APSW_PYTYPE_INIT PyVarObject_HEAD_INIT(NULL,0) #endif /* version tag? 
*/ #if PY_VERSION_HEX >= 0x02060000 #define APSW_PYTYPE_VERSION ,0 #else #define APSW_PYTYPE_VERSION #endif /* PyUnicode_READY needs to be called - Python 3.3 regression bug - http://bugs.python.org/issue16145 - gave up because other things crashed */ #define APSW_UNICODE_READY(x,y) do {} while(0) #if PY_MAJOR_VERSION < 3 #define PyBytes_Check PyString_Check #define PyBytes_FromStringAndSize PyString_FromStringAndSize #define PyBytes_AsString PyString_AsString #define PyBytes_AS_STRING PyString_AS_STRING #define PyBytes_GET_SIZE PyString_GET_SIZE #define _PyBytes_Resize _PyString_Resize #define PyBytes_CheckExact PyString_CheckExact #define PyBytesObject PyStringObject #define PyIntLong_Check(x) (PyInt_Check((x)) || PyLong_Check((x))) #define PyIntLong_AsLong(x) ( (PyInt_Check((x))) ? ( PyInt_AsLong((x)) ) : ( (PyLong_AsLong((x))))) #define PyBytes_FromFormat PyString_FromFormat #else #define PyIntLong_Check PyLong_Check #define PyIntLong_AsLong PyLong_AsLong #define PyInt_FromLong PyLong_FromLong #define PyObject_Unicode PyObject_Str #endif /* we clear weakref lists when close is called on a blob/cursor as well as when it is deallocated */ #define APSW_CLEAR_WEAKREFS \ do { \ if(self->weakreflist) \ { \ PyObject_ClearWeakRefs((PyObject*)self); \ self->weakreflist=0; \ } \ } while(0) #if PY_VERSION_HEX<0x02040000 /* Introduced in Python 2.4 */ static int PyDict_Contains(PyObject *dict, PyObject *key) { return !!PyDict_GetItem(dict, key); } #endif /* Calls the named method of object with the provided args */ static PyObject* Call_PythonMethod(PyObject *obj, const char *methodname, int mandatory, PyObject *args) { PyObject *method=NULL; PyObject *res=NULL; /* we may be called when there is already an error. eg if you return an error in a cursor method, then SQLite calls vtabClose which calls us. 
We don't want to clear pre-existing errors, but we do want to clear ones when the function doesn't exist but is optional */ PyObject *etype=NULL, *evalue=NULL, *etraceback=NULL; void *pyerralreadyoccurred=PyErr_Occurred(); if(pyerralreadyoccurred) PyErr_Fetch(&etype, &evalue, &etraceback); /* we should only be called with ascii methodnames so no need to do character set conversions etc */ #if PY_VERSION_HEX < 0x02050000 method=PyObject_GetAttrString(obj, (char*)methodname); #else method=PyObject_GetAttrString(obj, methodname); #endif assert(method!=obj); if (!method) { if(!mandatory) { /* pretend method existed and returned None */ PyErr_Clear(); res=Py_None; Py_INCREF(res); } goto finally; } res=PyEval_CallObject(method, args); if(!pyerralreadyoccurred && PyErr_Occurred()) AddTraceBackHere(__FILE__, __LINE__, "Call_PythonMethod", "{s: s, s: i, s: O, s: O}", "methodname", methodname, "mandatory", mandatory, "args", args, "method", method); finally: if(pyerralreadyoccurred) PyErr_Restore(etype, evalue, etraceback); Py_XDECREF(method); return res; } static PyObject * Call_PythonMethodV(PyObject *obj, const char *methodname, int mandatory, const char *format, ...) 
{ PyObject *args=NULL, *result=NULL; va_list list; va_start (list, format); args=Py_VaBuildValue(format, list); va_end(list); if (args) result=Call_PythonMethod(obj, methodname, mandatory, args); Py_XDECREF(args); return result; } /* CONVENIENCE FUNCTIONS */ /* Return a PyBuffer (py2) or PyBytes (py3) */ #if PY_MAJOR_VERSION < 3 static PyObject * converttobytes(const void *ptr, Py_ssize_t size) { PyObject *item; item=PyBuffer_New(size); if(item) { void *buffy=0; Py_ssize_t size2=size; int aswb=PyObject_AsWriteBuffer(item, &buffy, &size2); APSW_FAULT_INJECT(AsWriteBufferFails,,(PyErr_NoMemory(),aswb=-1)); if(aswb==0) memcpy(buffy, ptr, size); else { Py_DECREF(item); item=NULL; } } return item; } #else #define converttobytes PyBytes_FromStringAndSize #endif /* Convert a pointer and size UTF-8 string into a Python object. Pointer must be non-NULL. New behaviour in 3.3.8 - always return Unicode strings */ static PyObject * convertutf8stringsize(const char *str, Py_ssize_t size) { assert(str); assert(size>=0); /* Performance optimization: If str is all ascii then we can just make a unicode object and fill in the chars. PyUnicode_DecodeUTF8 is rather long */ if(size<16384) { int isallascii=1; int i=size; const char *p=str; while(isallascii && i) { isallascii=! (*p & 0x80); i--; p++; } if(i==0 && isallascii) { Py_UNICODE *out; PyObject *res=PyUnicode_FromUnicode(NULL, size); if(!res) return res; APSW_UNICODE_READY(res, return NULL); out=PyUnicode_AS_UNICODE(res); i=size; while(i) { i--; *out=*str; out++; str++; } return res; } } return PyUnicode_DecodeUTF8(str, size, NULL); } /* Convert a NULL terminated UTF-8 string into a Python object. None is returned if NULL is passed in. */ static PyObject * convertutf8string(const char *str) { if(!str) Py_RETURN_NONE; return convertutf8stringsize(str, strlen(str)); } /* Returns a PyBytes/String encoded in UTF8 - new reference. 
Use PyBytes/String_AsString on the return value to get a const char * to utf8 bytes */ static PyObject * getutf8string(PyObject *string) { PyObject *inunicode=NULL; PyObject *utf8string=NULL; if(PyUnicode_CheckExact(string)) { inunicode=string; Py_INCREF(string); } #if PY_MAJOR_VERSION < 3 else if(PyString_CheckExact(string)) { /* A python 2 performance optimisation. If the string consists only of ascii characters then it is already valid utf8. And in py2 pybytes and pystring are the same thing. This avoids doing a conversion to unicode and then a conversion to utf8. We only do this optimisation for strings that aren't ridiculously long. */ if(PyString_GET_SIZE(string)<16384) { int isallascii=1; int i=PyString_GET_SIZE(string); const char *p=PyString_AS_STRING(string); while(isallascii && i) { isallascii=! (*p & 0x80); i--; p++; } if(i==0 && isallascii) { Py_INCREF(string); return string; } } } #endif if(!inunicode) inunicode=PyUnicode_FromObject(string); if(!inunicode) return NULL; assert(!PyErr_Occurred()); utf8string=PyUnicode_AsUTF8String(inunicode); Py_DECREF(inunicode); return utf8string; } /* Virtual table code See the accompanying LICENSE file. */ #ifdef EXPERIMENTAL /** .. _virtualtables: Virtual Tables ************** `Virtual Tables `__ are a feature introduced in SQLite 3.3.7. They let a developer provide an underlying table implementation, while still presenting a normal SQL interface to the user. The person writing SQL doesn't need to know or care that some of the tables come from elsewhere.
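The three layers described here (a module makes tables, a table makes cursors, a cursor scans rows) can be sketched in plain Python. This is an illustrative outline only, not code from APSW: the names ``MyModule``, ``MyTable``, ``MyCursor`` and the sample ``rows`` data are invented, and only the method names follow the virtual table interface documented below.

```python
# Sample backing data for the table: (rowid, name, price) tuples.
rows = [
    (1, "widget", 74.99),
    (2, "gadget", 12.50),
]

class MyCursor:
    """Iterates over the rows of one table scan."""
    def __init__(self, table):
        self.table = table
    def Filter(self, indexnum, indexname, constraintargs):
        self.pos = 0                 # start a fresh full scan
    def Eof(self):
        return self.pos >= len(self.table.data)
    def Rowid(self):
        return self.table.data[self.pos][0]
    def Column(self, col):
        return self.table.data[self.pos][col]
    def Next(self):
        self.pos += 1
    def Close(self):
        pass

class MyTable:
    """Knows the data and hands out cursors."""
    def __init__(self, data):
        self.data = data
    def BestIndex(self, constraints, orderbys):
        return None                  # no indices - SQLite checks every row
    def Open(self):
        return MyCursor(self)
    def Disconnect(self):
        pass
    Destroy = Disconnect

class MyModule:
    """Registered with the connection; creates tables on demand."""
    def Create(self, connection, modulename, databasename, tablename, *args):
        return "create table t(id, name, price)", MyTable(rows)
    Connect = Create
```

With apsw available this would be registered via ``con.createmodule("mymod", MyModule())`` and then used from SQL with ``create virtual table t using mymod()``.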
Some examples of how you might use this: * Translating to/from information stored in other formats (eg a csv/ini format file) * Accessing the data remotely (eg you could make a table that backends into Amazon's API) * Dynamic information (eg currently running processes, files and directories, objects in your program) * Information that needs reformatting (eg if you have complex rules about how to convert strings to/from Unicode in the dataset) * Information that isn't relationally correct (eg if you have data that has ended up with duplicate "unique" keys with code that dynamically corrects it) * There are other examples on the `SQLite page `__ You need to have 3 types of object. A :class:`module `, a :class:`virtual table ` and a :class:`cursor `. These are documented below. You can also read the `SQLite C method documentation `__. At the C level, they are just one set of methods. At the Python/APSW level, they are split over the 3 types of object. The leading **x** is omitted in Python. You can return SQLite error codes (eg :const:`SQLITE_READONLY`) by raising the appropriate exceptions (eg :exc:`ReadOnlyError`). :meth:`exceptionfor` is a useful helper function to do the mapping. */ /** .. class:: VTModule .. note:: There is no actual *VTModule* class - it is just shown this way for documentation convenience. Your module instance should implement all the methods documented here. A module instance is used to create the virtual tables. Once you have a module object, you register it with a connection by calling :meth:`Connection.createmodule`:: # make an instance mymod=MyModuleClass() # register the vtable on connection con con.createmodule("modulename", mymod) # tell SQLite about the table con.cursor().execute("create VIRTUAL table tablename USING modulename('arg1', 2)") The create step is to tell SQLite about the existence of the table. Any number of tables referring to the same module can be made this way. 
Note the (optional) arguments which are passed to the module. */ typedef struct { sqlite3_vtab used_by_sqlite; /* I don't touch this */ PyObject *vtable; /* object implementing vtable */ PyObject *functions; /* functions returned by vtabFindFunction */ } apsw_vtable; static struct { const char *methodname; const char *declarevtabtracebackname; const char *pyexceptionname; } create_or_connect_strings[]= { { "Create", "VirtualTable.xCreate.sqlite3_declare_vtab", "VirtualTable.xCreate" }, { "Connect", "VirtualTable.xConnect.sqlite3_declare_vtab", "VirtualTable.xConnect" } }; static int apswvtabCreateOrConnect(sqlite3 *db, void *pAux, int argc, const char *const *argv, sqlite3_vtab **pVTab, char **errmsg, /* args above are to Create/Connect method */ int stringindex) { PyGILState_STATE gilstate; vtableinfo *vti; PyObject *args=NULL, *pyres=NULL, *schema=NULL, *vtable=NULL; apsw_vtable *avi=NULL; int res=SQLITE_OK; int i; gilstate=PyGILState_Ensure(); vti=(vtableinfo*) pAux; assert(db==vti->connection->db); args=PyTuple_New(1+argc); if(!args) goto pyexception; Py_INCREF((PyObject*)(vti->connection)); PyTuple_SET_ITEM(args, 0, (PyObject*)(vti->connection)); for(i=0;i<argc;i++) { PyObject *str=convertutf8string(argv[i]); if(!str) goto pyexception; PyTuple_SET_ITEM(args, 1+i, str); } pyres=Call_PythonMethod(vti->datasource, create_or_connect_strings[stringindex].methodname, 1, args); if(!pyres) goto pyexception; /* pyres should be a tuple of two values - a string of sql describing the table and an object implementing it */ if(!PySequence_Check(pyres) || PySequence_Size(pyres)!=2) { PyErr_Format(PyExc_TypeError, "Expected two values - a string with the table schema and a vtable object implementing it"); goto pyexception; } vtable=PySequence_GetItem(pyres, 1); if(!vtable) goto pyexception; avi=PyMem_Malloc(sizeof(apsw_vtable)); if(!avi) goto pyexception; assert((void*)avi==(void*)&(avi->used_by_sqlite)); /* detect if weird padding happens */ memset(avi, 0, sizeof(apsw_vtable)); schema=PySequence_GetItem(pyres, 0); if(!schema) goto pyexception; { PyObject *utf8schema=getutf8string(schema); const char *cp_utf8schema;
if(!utf8schema) goto pyexception; cp_utf8schema=PyBytes_AsString(utf8schema); _PYSQLITE_CALL_E(db, res=sqlite3_declare_vtab(db, cp_utf8schema)); Py_DECREF(utf8schema); if(res!=SQLITE_OK) { SET_EXC(res, db); AddTraceBackHere(__FILE__, __LINE__, create_or_connect_strings[stringindex].declarevtabtracebackname, "{s: O}", "schema", schema); goto finally; } } assert(res==SQLITE_OK); *pVTab=(sqlite3_vtab*)avi; avi->vtable=vtable; Py_INCREF(avi->vtable); avi=NULL; goto finally; pyexception: /* we had an exception in python code */ res=MakeSqliteMsgFromPyException(errmsg); AddTraceBackHere(__FILE__, __LINE__, create_or_connect_strings[stringindex].pyexceptionname, "{s: s, s: s, s: s, s: O}", "modulename", argv[0], "database", argv[1], "tablename", argv[2], "schema", schema?schema:Py_None); finally: /* cleanup */ Py_XDECREF(args); Py_XDECREF(pyres); Py_XDECREF(schema); Py_XDECREF(vtable); if(avi) PyMem_Free(avi); PyGILState_Release(gilstate); return res; } /** .. method:: Connect(connection, modulename, databasename, tablename, *args) -> [ sql string, table object ] The parameters and return are identical to :meth:`~VTModule.Create`. This method is called when there are additional references to the table. :meth:`~VTModule.Create` will be called the first time and :meth:`~VTModule.Connect` after that. The advice is to do the heavyweight work of creating caches and generated data in :meth:`~VTModule.Create` calls, and then find and reuse that in the subsequent :meth:`~VTModule.Connect` calls. The corresponding call is :meth:`VTTable.Disconnect`.
If you have a simple virtual table implementation, then just set :meth:`~VTModule.Connect` to be the same as :meth:`~VTModule.Create`::

    class MyModule:

        def Create(self, connection, modulename, databasename, tablename, *args):
            # do lots of hard work

        Connect=Create

*/ static int apswvtabCreate(sqlite3 *db, void *pAux, int argc, const char *const *argv, sqlite3_vtab **pVTab, char **errmsg) { return apswvtabCreateOrConnect(db, pAux, argc, argv, pVTab, errmsg, 0); } /** .. method:: Create(connection, modulename, databasename, tablename, *args) -> [ sql string, table object ] Called when a table is first created on a :class:`connection `. :param connection: An instance of :class:`Connection` :param modulename: The string name under which the module was :meth:`registered ` :param databasename: The name of the database. This will be ``main`` for directly opened files and the name specified in `ATTACH `_ statements. :param tablename: Name of the table the user wants to create. :param args: Any arguments that were specified in the `create virtual table `_ statement. :returns: A list of two items. The first is a SQL `create table `_ statement. The columns are parsed so that SQLite knows what columns and declared types exist for the table. The second item is an object that implements the :class:`table ` methods. The corresponding call is :meth:`VTTable.Destroy`. */ static int apswvtabConnect(sqlite3 *db, void *pAux, int argc, const char *const *argv, sqlite3_vtab **pVTab, char **errmsg) { return apswvtabCreateOrConnect(db, pAux, argc, argv, pVTab, errmsg, 1); } /** .. class:: VTTable .. note:: There is no actual *VTTable* class - it is just shown this way for documentation convenience. Your table instance should implement the methods documented here. The :class:`VTTable` object contains knowledge of the indices, makes cursors and can perform transactions. .. _vtablestructure: A virtual table is structured as a series of rows, each of which has the same columns.
The value in a column must be one of the `5 supported types `_, but the type can be different between rows for the same column. The virtual table routines identify the columns by number, starting at zero. Each row has a **unique** 64 bit integer `rowid `_ with the :class:`Cursor ` routines operating on this number, as well as some of the :class:`Table ` routines such as :meth:`UpdateChangeRow `. */ static void apswvtabFree(void *context) { vtableinfo *vti=(vtableinfo*)context; PyGILState_STATE gilstate; gilstate=PyGILState_Ensure(); Py_XDECREF(vti->datasource); /* connection was a borrowed reference so no decref needed */ PyMem_Free(vti); PyGILState_Release(gilstate); } static struct { const char *methodname; const char *pyexceptionname; } destroy_disconnect_strings[]= { { "Destroy", "VirtualTable.xDestroy" }, { "Disconnect", "VirtualTable.xDisconnect" } }; /* See SQLite ticket 2099 */ static int apswvtabDestroyOrDisconnect(sqlite3_vtab *pVtab, int stringindex) { PyObject *vtable, *res=NULL; PyGILState_STATE gilstate; int sqliteres=SQLITE_OK; gilstate=PyGILState_Ensure(); vtable=((apsw_vtable*)pVtab)->vtable; /* mandatory for Destroy, optional for Disconnect */ res=Call_PythonMethod(vtable, destroy_disconnect_strings[stringindex].methodname, (stringindex==0), NULL); /* sqlite 3.3.8 ignore return code for disconnect so we always free */ if (res || stringindex==1) { /* see SQLite ticket 2127 */ if(pVtab->zErrMsg) sqlite3_free(pVtab->zErrMsg); Py_DECREF(vtable); Py_XDECREF( ((apsw_vtable*)pVtab)->functions ); PyMem_Free(pVtab); goto finally; } if(stringindex==0) { /* ::TODO:: waiting on ticket 2099 to know if the pVtab should also be freed in case of error return with Destroy. 
*/ #if 0 /* see SQLite ticket 2127 */ if(pVtab->zErrMsg) sqlite3_free(pVtab->zErrMsg); Py_DECREF(vtable); PyMem_Free(pVtab); #endif } /* pyexception: we had an exception in python code */ sqliteres=MakeSqliteMsgFromPyException(&(pVtab->zErrMsg)); AddTraceBackHere(__FILE__, __LINE__, destroy_disconnect_strings[stringindex].pyexceptionname, "{s: O}", "self", vtable); finally: Py_XDECREF(res); PyGILState_Release(gilstate); return sqliteres; } /** .. method:: Destroy() The opposite of :meth:`VTModule.Create`. This method is called when the table is no longer used. Note that you must always release resources even if you intend to return an error, as it will not be called again on error. SQLite may also :cvstrac:`leak memory <2099>` if you return an error. */ static int apswvtabDestroy(sqlite3_vtab *pVTab) { return apswvtabDestroyOrDisconnect(pVTab, 0); } /** .. method:: Disconnect() The opposite of :meth:`VTModule.Connect`. This method is called when a reference to a virtual table is no longer used, but :meth:`VTTable.Destroy` will be called when the table is no longer used. */ static int apswvtabDisconnect(sqlite3_vtab *pVTab) { return apswvtabDestroyOrDisconnect(pVTab, 1); } /** .. method:: BestIndex(constraints, orderbys) This is a complex method. To get going initially, just return :const:`None` and you will be fine. Implementing this method reduces the number of rows scanned in your table to satisfy queries, but only if you have an index or index like mechanism available. .. note:: The implementation of this method differs slightly from the `SQLite documentation `__ for the C API. You are not passed "unusable" constraints. The argv/constraintarg positions are not off by one. In the C api, you have to return position 1 to get something passed to :meth:`VTCursor.Filter` in position 0. With the APSW implementation, you return position 0 to get Filter arg 0, position 1 to get Filter arg 1 etc. 
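The argv position mapping in the note above can be made concrete with a sketch of a BestIndex implementation. This is not APSW code: the numeric values assigned to the ``SQLITE_INDEX_CONSTRAINT_*`` names below are assumed stand-ins (real code would use the ``apsw.SQLITE_INDEX_CONSTRAINT_*`` constants), and the index number, index name and cost are invented.

```python
# Assumed constants for illustration - use apsw.SQLITE_INDEX_CONSTRAINT_*
# in real code.
SQLITE_INDEX_CONSTRAINT_EQ = 2
SQLITE_INDEX_CONSTRAINT_GT = 4
SQLITE_INDEX_CONSTRAINT_LE = 8

def BestIndex(constraints, orderbys):
    # Suppose we have an index usable for equality on column 0 (customer)
    # and greater-than on column 2 (price).  Constraints we can handle get
    # the next free Filter argument position; the rest get None so SQLite
    # evaluates them itself.
    used = []
    argv = 0
    for column, op in constraints:
        if (column, op) in ((0, SQLITE_INDEX_CONSTRAINT_EQ),
                            (2, SQLITE_INDEX_CONSTRAINT_GT)):
            used.append(argv)   # value arrives as constraintargs[argv] in Filter
            argv += 1
        else:
            used.append(None)   # SQLite checks this constraint itself
    # constraints used, index number, index string, orderby consumed, cost
    return [used, 27, "idx_cust_price", False, 1000]
```

Position 0 in the returned ``used`` list maps to Filter argument 0, position 1 to Filter argument 1, and so on, matching the APSW convention rather than SQLite's off-by-one C convention.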
The purpose of this method is to ask if you have the ability to determine whether a row meets certain constraints without visiting every row. An example constraint is ``price > 74.99``. In a traditional SQL database, queries with constraints can be sped up `with indices `_. If you return None, then SQLite will visit every row in your table and evaluate the constraint itself. Your index choice returned from BestIndex will also be passed to the :meth:`~VTCursor.Filter` method on your cursor object. Note that SQLite may call this method multiple times trying to find the most efficient way of answering a complex query.

**constraints**

You will be passed the constraints as a sequence of tuples containing two items. The first item is the column number and the second item is the operation.

Example query: ``select * from foo where price > 74.99 and quantity<=10 and customer='Acme Widgets'``. If customer is column 0, price column 2 and quantity column 5 then the constraints will be::

    (2, apsw.SQLITE_INDEX_CONSTRAINT_GT),
    (5, apsw.SQLITE_INDEX_CONSTRAINT_LE),
    (0, apsw.SQLITE_INDEX_CONSTRAINT_EQ)

Note that you do not get the value of the constraint (ie "Acme Widgets", 74.99 and 10 in this example). If you do have any suitable indices then you return a sequence the same length as constraints with the members mapping to the constraints in order. Each can be one of None, an integer or a tuple of an integer and a boolean. Conceptually SQLite is giving you a list of constraints and you are returning a list of the same length describing how you could satisfy each one. Each list item returned corresponding to a constraint is one of:

None
  This means you have no index for that constraint. SQLite will have to iterate over every row for it.

integer
  This is the argument number for the constraintargs being passed into the :meth:`~VTCursor.Filter` function of your :class:`cursor ` (the values "Acme Widgets", 74.99 and 10 in the example).
(integer, boolean)
  By default SQLite will check what you return. For example if you said that you had an index on price, SQLite will still check that each row you returned is greater than 74.99. If you set the boolean to False then SQLite won't do that double checking.

Example query: ``select * from foo where price > 74.99 and quantity<=10 and customer=='Acme Widgets'``. customer is column 0, price column 2 and quantity column 5. You can index on customer equality and price.

+----------------------------------------+--------------------------------+
| Constraints (in)                       | Constraints used (out)         |
+========================================+================================+
| ::                                     | ::                             |
|                                        |                                |
|  (2, apsw.SQLITE_INDEX_CONSTRAINT_GT), |  1,                            |
|  (5, apsw.SQLITE_INDEX_CONSTRAINT_LE), |  None,                         |
|  (0, apsw.SQLITE_INDEX_CONSTRAINT_EQ)  |  0                             |
|                                        |                                |
+----------------------------------------+--------------------------------+

When your :meth:`~VTCursor.Filter` method in the cursor is called, constraintarg[0] will be "Acme Widgets" (customer constraint value) and constraintarg[1] will be 74.99 (price constraint value). You can also return an index number (integer) and index string to use (SQLite attaches no significance to these values - they are passed as is to your :meth:`VTCursor.Filter` method as a way for the BestIndex method to let the :meth:`~VTCursor.Filter` method know which of your indices or similar mechanism to use).

**orderbys**

The second argument to BestIndex is a sequence of orderbys because the query requested the results in a certain order. If your data is already in that order then SQLite can give the results back as is. If not, then SQLite will have to sort the results first.

Example query: ``select * from foo order by price desc, quantity asc``. Price is column 2, quantity column 5 so orderbys will be::

    (2, True),  # True means descending, False is ascending
    (5, False)

**Return**

You should return up to 5 items. Items not present in the return have a default value.
0: constraints used (default None)
  This must either be None or a sequence the same length as constraints passed in. Each item should be as specified above saying if that constraint is used, and if so which constraintarg to make the value be in your :meth:`VTCursor.Filter` function.

1: index number (default zero)
  This value is passed as is to :meth:`VTCursor.Filter`

2: index string (default None)
  This value is passed as is to :meth:`VTCursor.Filter`

3: orderby consumed (default False)
  Return True if your output will be in exactly the same order as the orderbys passed in

4: estimated cost (default a huge number)
  Approximately how many disk operations are needed to provide the results. SQLite uses the cost to optimise queries. For example if the query includes *A or B* and A has 2,000 operations and B has 100 then it is best to evaluate B before A.

**A complete example**

Query is ``select * from foo where price>74.99 and quantity<=10 and customer=="Acme Widgets" order by price desc, quantity asc``. Customer is column 0, price column 2 and quantity column 5. You can index on customer equality and price.
::

    BestIndex(constraints, orderbys)

    constraints= ( (2, apsw.SQLITE_INDEX_CONSTRAINT_GT),
                   (5, apsw.SQLITE_INDEX_CONSTRAINT_LE),
                   (0, apsw.SQLITE_INDEX_CONSTRAINT_EQ) )

    orderbys= ( (2, True), (5, False) )

    # You return

    ( (1, None, 0),   # constraints used
      27,             # index number
      "idx_pr_cust",  # index name
      False,          # results are not in orderbys order
      1000            # about 1000 disk operations to access index
    )

    # Your Cursor.Filter method will be called with:

    27,               # index number you returned
    "idx_pr_cust",    # index name you returned
    "Acme Widgets",   # constraintarg[0] - customer
    74.99             # constraintarg[1] - price

*/ static int apswvtabBestIndex(sqlite3_vtab *pVtab, sqlite3_index_info *indexinfo) { PyGILState_STATE gilstate; PyObject *vtable; PyObject *constraints=NULL, *orderbys=NULL; PyObject *res=NULL, *indices=NULL; int i,j; int nconstraints=0; int sqliteres=SQLITE_OK; gilstate=PyGILState_Ensure(); vtable=((apsw_vtable*)pVtab)->vtable; /* count how many usable constraints there are */ for(i=0;i<indexinfo->nConstraint;i++) if (indexinfo->aConstraint[i].usable) nconstraints++; constraints=PyTuple_New(nconstraints); if(!constraints) goto pyexception; /* fill them in */ for(i=0, j=0;i<indexinfo->nConstraint;i++) { PyObject *constraint=NULL; if(!indexinfo->aConstraint[i].usable) continue; constraint=Py_BuildValue("(iB)", indexinfo->aConstraint[i].iColumn, indexinfo->aConstraint[i].op); if(!constraint) goto pyexception; PyTuple_SET_ITEM(constraints, j, constraint); j++; } /* order bys */ orderbys=PyTuple_New(indexinfo->nOrderBy); if(!orderbys) goto pyexception; /* fill them in */ for(i=0;i<indexinfo->nOrderBy;i++) { PyObject *order=NULL; order=Py_BuildValue("(iN)", indexinfo->aOrderBy[i].iColumn, PyBool_FromLong(indexinfo->aOrderBy[i].desc)); if(!order) goto pyexception; PyTuple_SET_ITEM(orderbys, i, order); } /* actually call the function */ res=Call_PythonMethodV(vtable, "BestIndex", 1, "(OO)", constraints, orderbys); if(!res) goto pyexception; /* do we have useful index information?
*/ if(res==Py_None) goto finally; /* check we have a sequence */ if(!PySequence_Check(res) || PySequence_Size(res)>5) { PyErr_Format(PyExc_TypeError, "Bad result from BestIndex. It should be a sequence of up to 5 items"); AddTraceBackHere(__FILE__, __LINE__, "VirtualTable.xBestIndex.result_check", "{s: O, s: O}", "self", vtable, "result", res); goto pyexception; } /* dig the argv indices out */ if(PySequence_Size(res)==0) goto finally; indices=PySequence_GetItem(res, 0); if(indices!=Py_None) { if(!PySequence_Check(indices) || PySequence_Size(indices)!=nconstraints) { PyErr_Format(PyExc_TypeError, "Bad constraints (item 0 in BestIndex return). It should be a sequence the same length as the constraints passed in (%d) items", nconstraints); AddTraceBackHere(__FILE__, __LINE__, "VirtualTable.xBestIndex.result_indices", "{s: O, s: O, s: O}", "self", vtable, "result", res, "indices", indices); goto pyexception; } /* iterate through the items - i is the SQLite sequence number and j is the apsw one (usable entries) */ for(i=0,j=0;i<indexinfo->nConstraint;i++) { PyObject *constraint=NULL, *argvindex=NULL, *omit=NULL; int omitv; if(!indexinfo->aConstraint[i].usable) continue; constraint=PySequence_GetItem(indices, j); if(PyErr_Occurred() || !constraint) goto pyexception; j++; /* it can be None */ if(constraint==Py_None) { Py_DECREF(constraint); continue; } /* or an integer */ if(PyIntLong_Check(constraint)) { indexinfo->aConstraintUsage[i].argvIndex=PyIntLong_AsLong(constraint)+1; Py_DECREF(constraint); continue; } /* or a sequence two items long */ if(!PySequence_Check(constraint) || PySequence_Size(constraint)!=2) { PyErr_Format(PyExc_TypeError, "Bad constraint (#%d) - it should be one of None, an integer or a tuple of an integer and a boolean", j); AddTraceBackHere(__FILE__, __LINE__, "VirtualTable.xBestIndex.result_constraint", "{s: O, s: O, s: O, s: O}", "self", vtable, "result", res, "indices", indices, "constraint", constraint); Py_DECREF(constraint); goto pyexception; }
argvindex=PySequence_GetItem(constraint, 0); omit=PySequence_GetItem(constraint, 1); if(!argvindex || !omit) goto constraintfail; if(!PyIntLong_Check(argvindex)) { PyErr_Format(PyExc_TypeError, "argvindex for constraint #%d should be an integer", j); AddTraceBackHere(__FILE__, __LINE__, "VirtualTable.xBestIndex.result_constraint_argvindex", "{s: O, s: O, s: O, s: O, s: O}", "self", vtable, "result", res, "indices", indices, "constraint", constraint, "argvindex", argvindex); goto constraintfail; } omitv=PyObject_IsTrue(omit); if(omitv==-1) goto constraintfail; indexinfo->aConstraintUsage[i].argvIndex=PyIntLong_AsLong(argvindex)+1; indexinfo->aConstraintUsage[i].omit=omitv; Py_DECREF(constraint); Py_DECREF(argvindex); Py_DECREF(omit); continue; constraintfail: Py_DECREF(constraint); Py_XDECREF(argvindex); Py_XDECREF(omit); goto pyexception; } } /* item #1 is idxnum */ if(PySequence_Size(res)<2) goto finally; { PyObject *idxnum=PySequence_GetItem(res, 1); if(!idxnum) goto pyexception; if(idxnum!=Py_None) { if(!PyIntLong_Check(idxnum)) { PyErr_Format(PyExc_TypeError, "idxnum must be an integer"); AddTraceBackHere(__FILE__, __LINE__, "VirtualTable.xBestIndex.result_indexnum", "{s: O, s: O, s: O}", "self", vtable, "result", res, "indexnum", idxnum); Py_DECREF(idxnum); goto pyexception; } indexinfo->idxNum=PyIntLong_AsLong(idxnum); } Py_DECREF(idxnum); } /* item #2 is idxStr */ if(PySequence_Size(res)<3) goto finally; { PyObject *utf8str=NULL, *idxstr=NULL; idxstr=PySequence_GetItem(res, 2); if(!idxstr) goto pyexception; if(idxstr!=Py_None) { utf8str=getutf8string(idxstr); if(!utf8str) { Py_DECREF(idxstr); goto pyexception; } indexinfo->idxStr=sqlite3_mprintf("%s", PyBytes_AsString(utf8str)); indexinfo->needToFreeIdxStr=1; } Py_XDECREF(utf8str); Py_DECREF(idxstr); } /* item 3 is orderByConsumed */ if(PySequence_Size(res)<4) goto finally; { PyObject *orderbyconsumed=NULL; int iorderbyconsumed; orderbyconsumed=PySequence_GetItem(res, 3); if(!orderbyconsumed) goto 
pyexception; if(orderbyconsumed!=Py_None) { iorderbyconsumed=PyObject_IsTrue(orderbyconsumed); if(iorderbyconsumed==-1) { Py_DECREF(orderbyconsumed); goto pyexception; } indexinfo->orderByConsumed=iorderbyconsumed; } Py_DECREF(orderbyconsumed); } /* item 4 (final) is estimated cost */ if(PySequence_Size(res)<5) goto finally; assert(PySequence_Size(res)==5); { PyObject *estimatedcost=NULL, *festimatedcost=NULL; estimatedcost=PySequence_GetItem(res,4); if(!estimatedcost) goto pyexception; if(estimatedcost!=Py_None) { festimatedcost=PyNumber_Float(estimatedcost); if(!festimatedcost) { Py_DECREF(estimatedcost); goto pyexception; } indexinfo->estimatedCost=PyFloat_AsDouble(festimatedcost); } Py_XDECREF(festimatedcost); Py_DECREF(estimatedcost); } goto finally; pyexception: /* we had an exception in python code */ assert(PyErr_Occurred()); sqliteres=MakeSqliteMsgFromPyException(&(pVtab->zErrMsg)); AddTraceBackHere(__FILE__, __LINE__, "VirtualTable.xBestIndex", "{s: O, s: O, s: (OO)}", "self", vtable, "result", res?res:Py_None, "args", constraints?constraints:Py_None, orderbys?orderbys:Py_None); finally: Py_XDECREF(indices); Py_XDECREF(res); Py_XDECREF(constraints); Py_XDECREF(orderbys); PyGILState_Release(gilstate); return sqliteres; } /** .. method:: Begin() This function is used as part of transactions. You do not have to provide the method. */ /** .. method:: Sync() This function is used as part of transactions. You do not have to provide the method. */ /** .. method:: Commit() This function is used as part of transactions. You do not have to provide the method. */ /** .. method:: Rollback() This function is used as part of transactions. You do not have to provide the method. 
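Since Begin, Sync, Commit and Rollback are all optional, a table that wants atomic updates can buffer changes between Begin and Commit. The sketch below is illustrative only, not APSW code: the class name and the in-memory ``data``/``pending`` attributes are invented. It shows one way these callbacks could be paired with the Update methods documented later.

```python
class TransactionalTable:
    """Stages row changes during a transaction, applying them on Commit."""

    def __init__(self):
        self.data = {}        # rowid -> fields: the committed state
        self.pending = None   # staged copy while a transaction is open

    def Begin(self):
        # Work on a copy so Rollback can simply discard it.
        self.pending = dict(self.data)

    def UpdateInsertRow(self, rowid, fields):
        target = self.pending if self.pending is not None else self.data
        if rowid is None:
            # Caller asked us to pick the rowid ourselves.
            rowid = max(target, default=0) + 1
        target[rowid] = fields
        return rowid

    def Sync(self):
        pass                  # nothing to flush in this in-memory sketch

    def Commit(self):
        if self.pending is not None:
            self.data = self.pending
            self.pending = None

    def Rollback(self):
        self.pending = None   # discard staged changes
```

A table that writes straight to its backing store can omit all four methods, at the cost of partially applied statements being visible after an error.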
*/ static struct { const char *methodname; const char *pyexceptionname; } transaction_strings[]= { { "Begin", "VirtualTable.Begin" }, { "Sync", "VirtualTable.Sync" }, { "Commit", "VirtualTable.Commit" }, { "Rollback", "VirtualTable.Rollback" }, }; static int apswvtabTransactionMethod(sqlite3_vtab *pVtab, int stringindex) { PyObject *vtable, *res=NULL; PyGILState_STATE gilstate; int sqliteres=SQLITE_OK; gilstate=PyGILState_Ensure(); vtable=((apsw_vtable*)pVtab)->vtable; res=Call_PythonMethod(vtable, transaction_strings[stringindex].methodname, 0, NULL); if(res) goto finally; /* pyexception: we had an exception in python code */ sqliteres=MakeSqliteMsgFromPyException(&(pVtab->zErrMsg)); AddTraceBackHere(__FILE__, __LINE__, transaction_strings[stringindex].pyexceptionname, "{s: O}", "self", vtable); finally: Py_XDECREF(res); PyGILState_Release(gilstate); return sqliteres; } static int apswvtabBegin(sqlite3_vtab *pVtab) { return apswvtabTransactionMethod(pVtab, 0); } static int apswvtabSync(sqlite3_vtab *pVtab) { return apswvtabTransactionMethod(pVtab, 1); } static int apswvtabCommit(sqlite3_vtab *pVtab) { return apswvtabTransactionMethod(pVtab, 2); } static int apswvtabRollback(sqlite3_vtab *pVtab) { return apswvtabTransactionMethod(pVtab, 3); } /** .. method:: Open() Returns a :class:`cursor ` object. 
*/ typedef struct { sqlite3_vtab_cursor used_by_sqlite; /* I don't touch this */ PyObject *cursor; /* Object implementing cursor */ } apsw_vtable_cursor; static int apswvtabOpen(sqlite3_vtab *pVtab, sqlite3_vtab_cursor **ppCursor) { PyObject *vtable=NULL, *res=NULL; PyGILState_STATE gilstate; apsw_vtable_cursor *avc=NULL; int sqliteres=SQLITE_OK; gilstate=PyGILState_Ensure(); vtable=((apsw_vtable*)pVtab)->vtable; res=Call_PythonMethod(vtable, "Open", 1, NULL); if(!res) goto pyexception; avc=PyMem_Malloc(sizeof(apsw_vtable_cursor)); assert((void*)avc==(void*)&(avc->used_by_sqlite)); /* detect if weird padding happens */ memset(avc, 0, sizeof(apsw_vtable_cursor)); avc->cursor=res; res=NULL; *ppCursor=(sqlite3_vtab_cursor*)avc; goto finally; pyexception: /* we had an exception in python code */ assert(PyErr_Occurred()); sqliteres=MakeSqliteMsgFromPyException(&(pVtab->zErrMsg)); AddTraceBackHere(__FILE__, __LINE__, "VirtualTable.xOpen", "{s: O}", "self", vtable); finally: Py_XDECREF(res); PyGILState_Release(gilstate); return sqliteres; } /** .. method:: UpdateDeleteRow(rowid) Delete the row with the specified *rowid*. :param rowid: 64 bit integer */ /** .. method:: UpdateInsertRow(rowid, fields) -> newrowid Insert a row with the specified *rowid*. :param rowid: :const:`None` if you should choose the rowid yourself, else a 64 bit integer :param fields: A tuple of values the same length and order as columns in your table :returns: If *rowid* was :const:`None` then return the id you assigned to the row. If *rowid* was not :const:`None` then the return value is ignored. */ /** .. method:: UpdateChangeRow(row, newrowid, fields) Change an existing row. You may also need to change the rowid - for example if the query was ``UPDATE table SET rowid=rowid+100 WHERE ...`` :param row: The existing 64 bit integer rowid :param newrowid: If not the same as *row* then also change the rowid to this.
:param fields: A tuple of values the same length and order as columns in your table */ static int apswvtabUpdate(sqlite3_vtab *pVtab, int argc, sqlite3_value **argv, sqlite3_int64 *pRowid) { PyObject *vtable, *args=NULL, *res=NULL; PyGILState_STATE gilstate; int sqliteres=SQLITE_OK; int i; const char *methodname="unknown"; assert(argc); /* should always be >0 */ gilstate=PyGILState_Ensure(); vtable=((apsw_vtable*)pVtab)->vtable; /* case 1 - argc=1 means delete row */ if(argc==1) { methodname="UpdateDeleteRow"; args=Py_BuildValue("(O&)", convert_value_to_pyobject, argv[0]); if(!args) goto pyexception; } /* case 2 - insert a row */ else if(sqlite3_value_type(argv[0])==SQLITE_NULL) { PyObject *newrowid; methodname="UpdateInsertRow"; args=PyTuple_New(2); if(!args) goto pyexception; if(sqlite3_value_type(argv[1])==SQLITE_NULL) { newrowid=Py_None; Py_INCREF(newrowid); } else { newrowid=convert_value_to_pyobject(argv[1]); if(!newrowid) goto pyexception; } PyTuple_SET_ITEM(args, 0, newrowid); } /* otherwise changing a row */ else { PyObject *oldrowid=NULL, *newrowid=NULL; methodname="UpdateChangeRow"; args=PyTuple_New(3); oldrowid=convert_value_to_pyobject(argv[0]); APSW_FAULT_INJECT(VtabUpdateChangeRowFail,newrowid=convert_value_to_pyobject(argv[1]), newrowid=PyErr_NoMemory()); if(!args || !oldrowid || !newrowid) { Py_XDECREF(oldrowid); Py_XDECREF(newrowid); goto pyexception; } PyTuple_SET_ITEM(args,0,oldrowid); PyTuple_SET_ITEM(args,1,newrowid); } /* new row values */ if(argc!=1) { PyObject *fields=NULL; fields=PyTuple_New(argc-2); if(!fields) goto pyexception; for(i=0;i+2<argc;i++) { PyObject *field=convert_value_to_pyobject(argv[i+2]); if(!field) { Py_DECREF(fields); goto pyexception; } PyTuple_SET_ITEM(fields, i, field); } PyTuple_SET_ITEM(args, PyTuple_GET_SIZE(args)-1, fields); } res=Call_PythonMethod(vtable, methodname, 1, args); if(!res) goto pyexception; /* an insert with a rowid of None means the method chose the rowid and must return it */ if(argc>1 && sqlite3_value_type(argv[0])==SQLITE_NULL && sqlite3_value_type(argv[1])==SQLITE_NULL) { PyObject *rowid=PyNumber_Long(res); if(!rowid) goto pyexception; *pRowid=PyLong_AsLongLong(rowid); Py_DECREF(rowid); if(PyErr_Occurred()) goto pyexception; } goto finally; pyexception: /* we had an exception in python code */ assert(PyErr_Occurred()); sqliteres=MakeSqliteMsgFromPyException(&pVtab->zErrMsg); AddTraceBackHere(__FILE__, __LINE__, "VirtualTable.xUpdate", "{s: O, s: i, s: s, s: O}", "self", vtable, "argc", argc, "methodname", methodname, "args", args?args:Py_None); finally: Py_XDECREF(args); Py_XDECREF(res); PyGILState_Release(gilstate); return sqliteres; } /** ..
method:: FindFunction(name, nargs) Called to find if the virtual table has its own implementation of a particular scalar function. You should return the function if you have it, else return None. You do not have to provide this method. This method is called while SQLite is `preparing `_ a query. If a query is in the :ref:`statement cache ` then *FindFunction* won't be called again. If you want to return different implementations for the same function over time then you will need to disable the :ref:`statement cache `. :param name: The function name :param nargs: How many arguments the function takes .. seealso:: * :meth:`Connection.overloadfunction` */ /* We have to save everything returned for the lifetime of the table as we don't know when it is no longer used due to `SQLite ticket 2095 `_. This taps into the existing scalar function code in connection.c */ static int apswvtabFindFunction(sqlite3_vtab *pVtab, int nArg, const char *zName, void (**pxFunc)(sqlite3_context*, int, sqlite3_value**), void **ppArg) { PyGILState_STATE gilstate; int sqliteres=0; PyObject *vtable, *res=NULL; FunctionCBInfo *cbinfo=NULL; apsw_vtable *av=(apsw_vtable*)pVtab; gilstate=PyGILState_Ensure(); vtable=av->vtable; res=Call_PythonMethodV(vtable, "FindFunction", 0, "(Ni)", convertutf8string(zName), nArg); if(res!=Py_None) { if(!av->functions) { APSW_FAULT_INJECT(FindFunctionAllocFailed, av->functions=PyList_New(0), av->functions=PyErr_NoMemory()); } if(!av->functions) { assert(PyErr_Occurred()); goto error; } cbinfo=allocfunccbinfo(); if(!cbinfo) goto error; cbinfo->name=apsw_strdup(zName); if(!cbinfo->name) goto error; cbinfo->scalarfunc=res; res=NULL; sqliteres=1; *pxFunc=cbdispatch_func; *ppArg=cbinfo; PyList_Append(av->functions, (PyObject*)cbinfo); } error: Py_XDECREF(res); Py_XDECREF(cbinfo); PyGILState_Release(gilstate); return sqliteres; } /** .. method:: Rename(newname) Notification that the table will be given a new name. 
If you return without raising an exception, then SQLite renames the table (you don't have to do anything). If you raise an exception then the renaming is prevented. You do not have to provide this method. */ static int apswvtabRename(sqlite3_vtab *pVtab, const char *zNew) { PyGILState_STATE gilstate; PyObject *vtable, *res=NULL, *newname=NULL; int sqliteres=SQLITE_OK; gilstate=PyGILState_Ensure(); vtable=((apsw_vtable*)pVtab)->vtable; APSW_FAULT_INJECT(VtabRenameBadName, newname=convertutf8string(zNew), newname=PyErr_NoMemory()); if(!newname) { sqliteres=SQLITE_ERROR; goto finally; } /* Marked as optional since sqlite does the actual renaming */ res=Call_PythonMethodV(vtable, "Rename", 0, "(N)", newname); if(!res) { sqliteres=MakeSqliteMsgFromPyException(NULL); AddTraceBackHere(__FILE__, __LINE__, "VirtualTable.xRename", "{s: O, s: s}", "self", vtable, "newname", zNew); } finally: Py_XDECREF(res); PyGILState_Release(gilstate); return sqliteres; } /** .. class:: VTCursor .. note:: There is no actual *VTCursor* class - it is just shown this way for documentation convenience. Your cursor instance should implement all the methods documented here. The :class:`VTCursor` object is used for iterating over a table. There may be many cursors simultaneously so each one needs to keep track of where it is. .. seealso:: :ref:`Virtual table structure ` */ /** .. method:: Filter(indexnum, indexname, constraintargs) This method is always called first to initialize an iteration to the first row of the table. The arguments come from the :meth:`~VTTable.BestIndex` method in the :class:`table ` object with constraintargs being a tuple of the constraints you requested. If you always return None in BestIndex then indexnum will be zero, indexname will be None and constraintargs will be empty.
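A plain-Python sketch (no apsw required; ``ListCursor`` and ``drive`` are illustrative names) of the calling pattern described above: SQLite calls Filter first, then alternates Eof/Column/Next until Eof reports no more rows:

```python
# Illustrative cursor over an in-memory list of rows, driven the same
# way SQLite drives a VTCursor: Filter, then Eof/Column/Next repeatedly.
class ListCursor:
    def __init__(self, rows):
        self.rows = rows
        self.pos = 0
    def Filter(self, indexnum, indexname, constraintargs):
        self.pos = 0             # restart iteration at the first row
    def Eof(self):
        return self.pos >= len(self.rows)
    def Rowid(self):
        return self.pos
    def Column(self, number):
        if number == -1:
            return self.Rowid()  # -1 means the rowid
        return self.rows[self.pos][number]
    def Next(self):
        self.pos += 1

def drive(cursor):
    """Simulate SQLite reading every row through the cursor."""
    cursor.Filter(0, None, ())
    out = []
    while not cursor.Eof():
        out.append(cursor.Column(0))
        cursor.Next()
    return out

result = drive(ListCursor([("a",), ("b",)]))  # -> ["a", "b"]
```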
*/ static int apswvtabFilter(sqlite3_vtab_cursor *pCursor, int idxNum, const char *idxStr, int argc, sqlite3_value **sqliteargv) { PyObject *cursor, *argv=NULL, *res=NULL; PyGILState_STATE gilstate; int sqliteres=SQLITE_OK; int i; gilstate=PyGILState_Ensure(); cursor=((apsw_vtable_cursor*)pCursor)->cursor; argv=PyTuple_New(argc); if(!argv) goto pyexception; for(i=0;i<argc;i++) { PyObject *value=convert_value_to_pyobject(sqliteargv[i]); if(!value) goto pyexception; PyTuple_SET_ITEM(argv, i, value); } res=Call_PythonMethodV(cursor, "Filter", 1, "(iNO)", idxNum, convertutf8string(idxStr), argv); if(res) goto finally; pyexception: /* we had an exception in python code */ assert(PyErr_Occurred()); sqliteres=MakeSqliteMsgFromPyException(&(pCursor->pVtab->zErrMsg)); /* SQLite flaw: errMsg should be on the cursor not the table! */ AddTraceBackHere(__FILE__, __LINE__, "VirtualTable.xFilter", "{s: O}", "self", cursor); finally: Py_XDECREF(argv); Py_XDECREF(res); PyGILState_Release(gilstate); return sqliteres; } /** .. method:: Eof() -> bool Called to ask if we are at the end of the table. It is called after each call to Filter and Next. :returns: False if the cursor is at a valid row of data, else True .. note:: This method can only return True or False to SQLite. If you have an exception in the method or provide a non-boolean return then True (no more data) will be returned to SQLite. */ static int apswvtabEof(sqlite3_vtab_cursor *pCursor) { PyObject *cursor, *res=NULL; PyGILState_STATE gilstate; int sqliteres=0; /* nb a true/false value not error code */ gilstate=PyGILState_Ensure(); /* is there already an error? */ if(PyErr_Occurred()) goto finally; cursor=((apsw_vtable_cursor*)pCursor)->cursor; res=Call_PythonMethod(cursor, "Eof", 1, NULL); if(!res) goto pyexception; sqliteres=PyObject_IsTrue(res); if(sqliteres==0 || sqliteres==1) goto finally; pyexception: /* we had an exception in python code */ assert(PyErr_Occurred()); sqliteres=MakeSqliteMsgFromPyException(&(pCursor->pVtab->zErrMsg)); /* SQLite flaw: errMsg should be on the cursor not the table! */ AddTraceBackHere(__FILE__, __LINE__, "VirtualTable.xEof", "{s: O}", "self", cursor); finally: Py_XDECREF(res); PyGILState_Release(gilstate); return sqliteres; } /** .. method:: Column(number) Requests the value of the specified column *number* of the current row.
If *number* is -1 then return the rowid. :returns: Must be one of the :ref:`5 supported types ` */ /* forward decln */ static void set_context_result(sqlite3_context *context, PyObject *obj); static int apswvtabColumn(sqlite3_vtab_cursor *pCursor, sqlite3_context *result, int ncolumn) { PyObject *cursor, *res=NULL; PyGILState_STATE gilstate; int sqliteres=SQLITE_OK; gilstate=PyGILState_Ensure(); cursor=((apsw_vtable_cursor*)pCursor)->cursor; res=Call_PythonMethodV(cursor, "Column", 1, "(i)", ncolumn); if(!res) goto pyexception; set_context_result(result, res); if(!PyErr_Occurred()) goto finally; pyexception: /* we had an exception in python code */ assert(PyErr_Occurred()); sqliteres=MakeSqliteMsgFromPyException(&(pCursor->pVtab->zErrMsg)); /* SQLite flaw: errMsg should be on the cursor not the table! */ AddTraceBackHere(__FILE__, __LINE__, "VirtualTable.xColumn", "{s: O, s: O}", "self", cursor, "res", res?res:Py_None); finally: Py_XDECREF(res); PyGILState_Release(gilstate); return sqliteres; } /** .. method:: Next() Move the cursor to the next row. Do not raise an exception if there is no next row. Instead arrange for :meth:`~VTCursor.Eof` to return True when it is subsequently called. If you said you had indices in your :meth:`VTTable.BestIndex` return, and they were selected for use as provided in the parameters to :meth:`~VTCursor.Filter` then you should move to the next appropriate indexed and constrained row. */ static int apswvtabNext(sqlite3_vtab_cursor *pCursor) { PyObject *cursor, *res=NULL; PyGILState_STATE gilstate; int sqliteres=SQLITE_OK; gilstate=PyGILState_Ensure(); cursor=((apsw_vtable_cursor*)pCursor)->cursor; res=Call_PythonMethod(cursor, "Next", 1, NULL); if(res) goto finally; /* pyexception: we had an exception in python code */ assert(PyErr_Occurred()); sqliteres=MakeSqliteMsgFromPyException(&(pCursor->pVtab->zErrMsg)); /* SQLite flaw: errMsg should be on the cursor not the table!
*/ AddTraceBackHere(__FILE__, __LINE__, "VirtualTable.xNext", "{s: O}", "self", cursor); finally: Py_XDECREF(res); PyGILState_Release(gilstate); return sqliteres; } /** .. method:: Close() This is the destructor for the cursor. Note that you must cleanup. The method will not be called again if you raise an exception. */ static int apswvtabClose(sqlite3_vtab_cursor *pCursor) { PyObject *cursor, *res=NULL; PyGILState_STATE gilstate; char **zErrMsgLocation=&(pCursor->pVtab->zErrMsg); /* we free pCursor but still need this field */ int sqliteres=SQLITE_OK; gilstate=PyGILState_Ensure(); cursor=((apsw_vtable_cursor*)pCursor)->cursor; res=Call_PythonMethod(cursor, "Close", 1, NULL); PyMem_Free(pCursor); /* always free */ if(res) goto finally; /* pyexception: we had an exception in python code */ assert(PyErr_Occurred()); sqliteres=MakeSqliteMsgFromPyException(zErrMsgLocation); /* SQLite flaw: errMsg should be on the cursor not the table! */ AddTraceBackHere(__FILE__, __LINE__, "VirtualTable.xClose", "{s: O}", "self", cursor); finally: Py_DECREF(cursor); /* this is where cursor gets freed */ Py_XDECREF(res); PyGILState_Release(gilstate); return sqliteres; } /** .. method:: Rowid() -> 64 bit integer Return the current rowid. */ static int apswvtabRowid(sqlite3_vtab_cursor *pCursor, sqlite3_int64 *pRowid) { PyObject *cursor, *res=NULL, *pyrowid=NULL; PyGILState_STATE gilstate; int sqliteres=SQLITE_OK; gilstate=PyGILState_Ensure(); cursor=((apsw_vtable_cursor*)pCursor)->cursor; res=Call_PythonMethod(cursor, "Rowid", 1, NULL); if(!res) goto pyexception; /* extract result */ pyrowid=PyNumber_Long(res); if(!pyrowid) goto pyexception; *pRowid=PyLong_AsLongLong(pyrowid); if(!PyErr_Occurred()) /* could be bigger than 64 bits */ goto finally; pyexception: /* we had an exception in python code */ assert(PyErr_Occurred()); sqliteres=MakeSqliteMsgFromPyException(&(pCursor->pVtab->zErrMsg)); /* SQLite flaw: errMsg should be on the cursor not the table! 
*/ AddTraceBackHere(__FILE__, __LINE__, "VirtualTable.xRowid", "{s: O}", "self", cursor); finally: Py_XDECREF(pyrowid); Py_XDECREF(res); PyGILState_Release(gilstate); return sqliteres; } /* it would be nice to use C99 style initializers here ... */ static struct sqlite3_module apsw_vtable_module= { 1, /* version */ apswvtabCreate, /* methods */ apswvtabConnect, apswvtabBestIndex, apswvtabDisconnect, apswvtabDestroy, apswvtabOpen, apswvtabClose, apswvtabFilter, apswvtabNext, apswvtabEof, apswvtabColumn, apswvtabRowid, apswvtabUpdate, apswvtabBegin, apswvtabSync, apswvtabCommit, apswvtabRollback, apswvtabFindFunction, apswvtabRename }; /** Troubleshooting virtual tables ============================== Virtual Tables are a relatively recent addition to SQLite and haven't been widely used yet. They do work well if all your routines work perfectly. A big help is using the local variables recipe as described in :ref:`augmented stack traces ` which will give you more details in errors, and shows an example with the complex :meth:`~VTTable.BestIndex` function. You may also find errors compounding. For example if you have an error in the Filter method of a cursor, SQLite then closes the cursor. If you also return an error in the Close method then the first error may mask the second or vice versa. .. note:: SQLite may ignore responses from your methods if they don't make sense. For example in BestIndex, if you set multiple arguments to have the same constraintargs position then your Filter won't receive any constraintargs at all. */ #endif /* EXPERIMENTAL */ /* end of Virtual table code */ apsw-3.8.11.1-r1/src/statementcache.c /* A prepared statement cache for SQLite See the accompanying LICENSE file. */ /* This is used while executing a statement. It may either be residing in the statement cache (doing the prepare to make a sqlite3_stmt is relatively expensive) or standalone if the cache was full.
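The lifecycle just described - reuse a cached prepared statement unless it is currently in use, otherwise prepare afresh - can be sketched in plain Python (illustrative names only; the real implementation below is C and also recycles statement objects):

```python
# Illustrative model of the cache-or-standalone decision.  "Preparing"
# is faked with a counter so that reuse is observable.
prepares = {"count": 0}

class Statement:
    def __init__(self, sql):
        prepares["count"] += 1   # stands in for the expensive prepare
        self.sql = sql
        self.inuse = False

cache = {}

def get_statement(sql):
    stmt = cache.get(sql)
    if stmt is not None and not stmt.inuse:
        stmt.inuse = True        # claim the cached copy
        return stmt
    stmt = Statement(sql)        # cache miss or busy: prepare afresh
    stmt.inuse = True
    cache.setdefault(sql, stmt)  # busy hits stay standalone
    return stmt

a = get_statement("select 1")    # miss: prepared
a.inuse = False                  # finished with it
b = get_statement("select 1")    # hit: same object, no new prepare
```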
A doubly linked list is used to keep track of least/most recent use. */ /* The following keys are considered for a cache entry: - The original text passed in (PyString/PyUnicode) - The utf8 of the original text (APSWBuffer) - The utf8 of the first statement (APSWBuffer) Currently only the first two are implemented. */ /* Some defines */ /* Set to zero to disable statement object recycling. Even a small amount makes a big difference with diminishing returns based on how many the user program goes through without freeing and the interpreter gc intervals. */ #define SC_NRECYCLE 32 /* The maximum length of something in bytes that we would consider putting in the statement cache */ #define SC_MAXSIZE 16384 /* Define to do statement cache statistics */ /* #define SC_STATS */ typedef struct APSWStatement { PyObject_HEAD sqlite3_stmt *vdbestatement; /* the sqlite level vdbe code */ unsigned inuse; /* indicates an element is inuse when in cache preventing simultaneous use */ unsigned incache; /* indicates APSWStatement resides in cache */ PyObject *utf8; /* The text of the statement, also the key in the cache */ PyObject *next; /* If not null, the utf8 text of the remaining statements in multi statement queries.
*/ Py_ssize_t querylen; /* How many bytes of utf8 made up the query (used for exectrace) */ PyObject *origquery; /* The original query object, also a key in the cache pointing to this same statement - could be NULL */ struct APSWStatement *lru_prev; /* previous item in lru list (ie more recently used than this one) */ struct APSWStatement *lru_next; /* next item in lru list (ie less recently used than this one) */ } APSWStatement; static PyTypeObject APSWStatementType; typedef struct StatementCache { sqlite3 *db; /* database connection */ PyObject *cache; /* the actual cache itself */ unsigned numentries; /* how many APSWStatement entries we have in cache */ unsigned maxentries; /* maximum number of entries */ APSWStatement *mru; /* most recently used entry (head of the list) */ APSWStatement *lru; /* least recently used entry (tail of the list) */ #ifdef SC_STATS unsigned st_cachemiss; /* entry was not in cache */ unsigned st_cachehit; /* entry was in cache */ unsigned st_hitinuse; /* was a hit but was inuse */ #endif #if SC_NRECYCLE > 0 APSWStatement* recyclelist[SC_NRECYCLE]; /* recycle these rather than go through repeated malloc/free */ unsigned nrecycle; /* index of last entry in recycle list */ #endif } StatementCache; #ifndef NDEBUG static void statementcache_sanity_check(StatementCache *sc) { unsigned itemcountfwd, itemcountbackwd, i; APSWStatement *last, *item; #if SC_NRECYCLE > 0 /* also check the recycle list */ for(i=0;i<sc->nrecycle;i++) assert(Py_REFCNT(sc->recyclelist[i])==1); assert(sc->nrecycle<=SC_NRECYCLE); #endif /* make sure everything is fine */ if(!sc->mru || !sc->lru) { /* list should be empty */ assert(!sc->mru); assert(!sc->lru); return; } if(sc->mru == sc->lru) { /* should be exactly one item */ assert(!sc->mru->lru_prev); assert(!sc->mru->lru_next); assert(sc->mru->incache); assert(sc->mru->vdbestatement); assert(!sc->mru->inuse); return; } /* Must be two or more items. If there are any loops then this function will execute forever.
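The two-directional walk this sanity check performs can be pictured with a minimal Python doubly linked list (hypothetical ``Node`` type; the real nodes are APSWStatement structs): counting from the MRU end via ``lru_next`` must agree with counting from the LRU end via ``lru_prev``.

```python
# Minimal doubly linked list walk: count nodes from the MRU end and
# from the LRU end; a consistent list yields the same count both ways.
class Node:
    def __init__(self):
        self.lru_prev = None
        self.lru_next = None

def link(nodes):
    for earlier, later in zip(nodes, nodes[1:]):
        earlier.lru_next = later
        later.lru_prev = earlier
    return (nodes[0], nodes[-1]) if nodes else (None, None)

def count(start, attr):
    n, item = 0, start
    while item is not None:
        n += 1
        item = getattr(item, attr)
    return n

mru, lru = link([Node() for _ in range(3)])
forward = count(mru, "lru_next")
backward = count(lru, "lru_prev")   # both walks see 3 nodes
```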
*/ /* check items going forward */ last=NULL; itemcountfwd=0; item=sc->mru; while(item) { /* check item thinks it is in cache */ assert(item->incache==1); /* should not be inuse - inuse items are removed from lru list */ assert(!item->inuse); /* does prev actually go to prev? */ assert(item->lru_prev==last); /* check for loops */ assert(item->lru_prev!=item); assert(item->lru_next!=item); assert(item->lru_prev!=item->lru_next); itemcountfwd++; last=item; item=item->lru_next; } assert(sc->lru==last); /* check items going backwards */ last=NULL; itemcountbackwd=0; item=sc->lru; while(item) { /* does next actually go to next? */ assert(item->lru_next==last); /* check for loops */ assert(item->lru_next!=item); assert(item->lru_prev!=item); assert(item->lru_prev!=item->lru_next); /* statement not null */ assert(item->vdbestatement); itemcountbackwd++; last=item; item=item->lru_prev; } /* count should be same going forwards as going back */ assert(itemcountbackwd==itemcountfwd); } /* verifies a particular value is not in the dictionary */ static void assert_not_in_dict(PyObject *dict, PyObject *check) { PyObject *key, *value; Py_ssize_t pos=0; while(PyDict_Next(dict, &pos, &key, &value)) assert(check!=value); } #else #define statementcache_sanity_check(x) #define assert_not_in_dict(x,y) #endif /* re-prepare for SQLITE_SCHEMA */ static int statementcache_reprepare(StatementCache *sc, APSWStatement *statement) { int res, res2; sqlite3_stmt *newvdbe=0; const char *tail; const char *buffer; Py_ssize_t buflen; int usepreparev2; usepreparev2=sqlite3_bind_parameter_count(statement->vdbestatement); buffer=APSWBuffer_AS_STRING(statement->utf8); buflen=APSWBuffer_GET_SIZE(statement->utf8); /* see statementcache_prepare */ assert(buffer[buflen+1-1]==0); PYSQLITE_SC_CALL(res=usepreparev2? 
sqlite3_prepare_v2(sc->db, buffer, buflen+1, &newvdbe, &tail): /* PYSQLITE_SC_CALL */ sqlite3_prepare(sc->db, buffer, buflen+1, &newvdbe, &tail) /* PYSQLITE_SC_CALL */ ); if(res!=SQLITE_OK) goto error; /* the query size certainly shouldn't have changed! */ assert(statement->querylen==tail-buffer); APSW_FAULT_INJECT(TransferBindingsFail, PYSQLITE_SC_CALL(res=sqlite3_transfer_bindings(statement->vdbestatement, newvdbe)), res=SQLITE_NOMEM); if(res!=SQLITE_OK) goto error; PYSQLITE_SC_CALL(sqlite3_finalize(statement->vdbestatement)); statement->vdbestatement=newvdbe; return SQLITE_OK; error: SET_EXC(res, sc->db); AddTraceBackHere(__FILE__, __LINE__, "sqlite3_prepare", "{s: N}", "sql", convertutf8stringsize(buffer, buflen)); /* we don't want to clobber the errmsg so pretend everything is ok */ res2=res; res=SQLITE_OK; if(newvdbe) PYSQLITE_SC_CALL(sqlite3_finalize(newvdbe)); return res2; } /* Internal prepare routine after doing utf8 conversion. Returns a new reference. Must be reentrant */ static APSWStatement* statementcache_prepare(StatementCache *sc, PyObject *query, int usepreparev2) { APSWStatement *val=NULL; const char *buffer; const char *tail; Py_ssize_t buflen; int res; PyObject *utf8=NULL; if(!APSWBuffer_Check(query)) { /* Check to see if query is already in cache. 
The size checks are to avoid calculating hashes on long strings */ if( sc->cache && sc->numentries && ((PyUnicode_CheckExact(query) && PyUnicode_GET_DATA_SIZE(query) < SC_MAXSIZE) #if PY_MAJOR_VERSION < 3 || (PyString_CheckExact(query) && PyString_GET_SIZE(query) < SC_MAXSIZE) #endif )) { val=(APSWStatement*)PyDict_GetItem(sc->cache, query); if(val) { utf8=val->utf8; Py_INCREF(utf8); goto cachehit; } } utf8=getutf8string(query); if(!utf8) return NULL; { /* Make a buffer of utf8 which then owns underlying bytes */ PyObject *tmp=APSWBuffer_FromObject(utf8, 0, PyBytes_GET_SIZE(utf8)); Py_DECREF(utf8); if(!tmp) return NULL; utf8=tmp; } } else { utf8=query; query=NULL; Py_INCREF(utf8); } assert(APSWBuffer_Check(utf8)); /* if we have cache and utf8 is reasonable size? */ if(sc->cache && sc->numentries && APSWBuffer_GET_SIZE(utf8) < SC_MAXSIZE) { /* then is it in the cache? */ val=(APSWStatement*)PyDict_GetItem(sc->cache, utf8); } /* by this point we have created utf8 or added a reference to it */ cachehit: assert(APSWBuffer_Check(utf8)); #ifdef SC_STATS if(val) { sc->st_cachehit++; if(val->inuse) sc->st_hitinuse++; } else sc->st_cachemiss++; #endif if(val) { if(!val->inuse) { /* yay, one we can use */ assert(val->incache); assert(val->vdbestatement); val->inuse=1; /* unlink from lru tracking */ if(sc->mru==val) sc->mru=val->lru_next; if(sc->lru==val) sc->lru=val->lru_prev; if(val->lru_prev) { assert(val->lru_prev->lru_next==val); val->lru_prev->lru_next=val->lru_next; } if(val->lru_next) { assert(val->lru_next->lru_prev==val); val->lru_next->lru_prev=val->lru_prev; } val->lru_prev=val->lru_next=0; statementcache_sanity_check(sc); _PYSQLITE_CALL_V(sqlite3_clear_bindings(val->vdbestatement)); Py_INCREF( (PyObject*)val); assert(PyObject_RichCompareBool(utf8, val->utf8, Py_EQ)==1); APSWBuffer_XDECREF_unlikely(utf8); return val; } /* someone else is using it so we can't */ val=NULL; } #if SC_NRECYCLE > 0 if(sc->nrecycle) { val=sc->recyclelist[--sc->nrecycle]; 
assert(Py_REFCNT(val)==1); assert(!val->incache); assert(!val->inuse); if(val->vdbestatement) _PYSQLITE_CALL_V(sqlite3_finalize(val->vdbestatement)); APSWBuffer_XDECREF_likely(val->utf8); APSWBuffer_XDECREF_unlikely(val->next); Py_XDECREF(val->origquery); val->lru_prev=val->lru_next=0; statementcache_sanity_check(sc); } #else assert(!val); #endif if(!val) { /* have to make one */ val=PyObject_New(APSWStatement, &APSWStatementType); if(!val) goto error; /* zero it - other fields are set below */ val->incache=0; val->lru_prev=0; val->lru_next=0; } statementcache_sanity_check(sc); val->utf8=utf8; val->next=NULL; val->vdbestatement=NULL; val->inuse=1; Py_XINCREF(query); val->origquery=query; buffer=APSWBuffer_AS_STRING(utf8); buflen=APSWBuffer_GET_SIZE(utf8); /* If buffer[lengthpassedin-1] is not zero then SQLite makes a duplicate copy of the entire string passed in. The buffer we originally got from getutf8string will always have had an extra zero on the end. The assert is just to make sure */ assert(buffer[buflen+1-1]==0); PYSQLITE_SC_CALL(res=(usepreparev2)? sqlite3_prepare_v2(sc->db, buffer, buflen+1, &val->vdbestatement, &tail): /* PYSQLITE_SC_CALL */ sqlite3_prepare(sc->db, buffer, buflen+1, &val->vdbestatement, &tail)); /* PYSQLITE_SC_CALL */ /* Handle error. 
We would have a Python error if vtable.FindFunction had an error */ if(res!=SQLITE_OK || PyErr_Occurred()) { SET_EXC(res, sc->db); AddTraceBackHere(__FILE__, __LINE__, "sqlite3_prepare", "{s: N}", "sql", convertutf8stringsize(buffer, buflen)); goto error; } val->querylen=tail-buffer; /* is there a next statement (ignore semicolons and white space) */ while( (tail-buffer<buflen) && (*tail==' ' || *tail=='\t' || *tail==';' || *tail=='\r' || *tail=='\n') ) tail++; if(tail-buffer<buflen) { /* there are more statements */ val->next=APSWBuffer_FromObject(utf8, tail-buffer, buflen-(tail-buffer)); if(!val->next) goto error; } return val; error: if(val) { val->inuse=0; #if SC_NRECYCLE > 0 if(sc->nrecycle<SC_NRECYCLE) { sc->recyclelist[sc->nrecycle++]=val; } else #endif /* Getting this line to execute is hard as the statement would have come from the recyclelist in the first place so there will be a spot to return it to. The only way to do it would be some violent threading to refill the recyclelist between this statement being taken out and returned */ Py_DECREF(val); } return NULL; } /* Consumes reference on stmt. This routine must be reentrant. If reprepare_on_schema then if SQLITE_SCHEMA is the error, we reprepare the statement and don't finalize. */ static int statementcache_finalize(StatementCache *sc, APSWStatement *stmt, int reprepare_on_schema) { int res; /* PyDict_Contains will end up whining in comparison function if there is an existing exception hanging over our head */ assert(!PyErr_Occurred()); statementcache_sanity_check(sc); assert(stmt->inuse); /* we do not release the lock until the last possible moment, otherwise another thread could enter and reuse what we are in the middle of disposing of */ PYSQLITE_SC_CALL(res=sqlite3_reset(stmt->vdbestatement)); if(res==SQLITE_SCHEMA && reprepare_on_schema) { res=statementcache_reprepare(sc, stmt); if(res==SQLITE_OK) return SQLITE_SCHEMA; } /* is it going to be put in cache?
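If there is room, yes - and putting it back makes it the most recently used entry, with eviction from the least recently used end once maxentries is exceeded. An illustrative Python equivalent using an OrderedDict in place of the explicit doubly linked list the C code maintains:

```python
# Illustrative LRU eviction: returning a statement moves it to the MRU
# end; anything beyond maxentries falls off the LRU end.
from collections import OrderedDict

def finalize(cache, maxentries, sql):
    cache[sql] = sql                 # put back in cache ...
    cache.move_to_end(sql)           # ... as the most recently used
    while len(cache) > maxentries:
        cache.popitem(last=False)    # evict least recently used

cache = OrderedDict()
for sql in ("select 1", "select 2", "select 3"):
    finalize(cache, 2, sql)
# "select 1" was least recently used and has been evicted
```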
*/ if(stmt->incache || (sc->cache && stmt->vdbestatement && APSWBuffer_GET_SIZE(stmt->utf8) < SC_MAXSIZE && !PyDict_Contains(sc->cache, stmt->utf8))) { /* add ourselves to cache */ if(!stmt->incache) { assert(!PyDict_Contains(sc->cache, stmt->utf8)); assert_not_in_dict(sc->cache, (PyObject*)stmt); PyDict_SetItem(sc->cache, stmt->utf8, (PyObject*)stmt); if(stmt->origquery) /* something equal to this query may already be in cache which would cause an eviction of an unrelated item and all sorts of grief */ if (!PyDict_Contains(sc->cache, stmt->origquery)) PyDict_SetItem(sc->cache, stmt->origquery, (PyObject*)stmt); stmt->incache=1; sc->numentries += 1; } assert(PyDict_Contains(sc->cache, stmt->utf8)); /* do we need to do an evict? */ while(sc->numentries > sc->maxentries) { APSWStatement *evictee=sc->lru; statementcache_sanity_check(sc); assert(evictee!=stmt); /* we were inuse and so should not be on evict list */ /* no possibles to evict? */ if(!sc->lru) break; /* only entry? */ if(!evictee->lru_prev) { assert(sc->mru==evictee); /* points to sole entry */ assert(sc->lru==evictee); /* points to sole entry */ assert(!evictee->lru_prev); /* should be anyone before */ assert(!evictee->lru_next); /* or after */ sc->mru=NULL; sc->lru=NULL; goto delevictee; } /* take out lru member */ sc->lru=evictee->lru_prev; assert(sc->lru->lru_next==evictee); sc->lru->lru_next=NULL; delevictee: assert(!evictee->inuse); assert(evictee->incache); statementcache_sanity_check(sc); /* only references should be the dict */ assert(Py_REFCNT(evictee)==1+!!evictee->origquery); #if SC_NRECYCLE > 0 /* we don't gc to run on object */ Py_INCREF(evictee); #endif if(evictee->origquery) { assert(evictee==(APSWStatement*)PyDict_GetItem(sc->cache, evictee->origquery)); PyDict_DelItem(sc->cache, evictee->origquery); Py_DECREF(evictee->origquery); evictee->origquery=NULL; } assert(evictee==(APSWStatement*)PyDict_GetItem(sc->cache, evictee->utf8)); PyDict_DelItem(sc->cache, evictee->utf8); 
assert_not_in_dict(sc->cache, (PyObject*)evictee); assert(!PyErr_Occurred()); #if SC_NRECYCLE > 0 if(sc->nrecycle<SC_NRECYCLE) { sc->recyclelist[sc->nrecycle++]=evictee; evictee->incache=0; } else { Py_DECREF(evictee); } #endif sc->numentries -= 1; statementcache_sanity_check(sc); } statementcache_sanity_check(sc); /* plumb ourselves into head of lru list */ assert(stmt->inuse); stmt->inuse=0; stmt->lru_next=sc->mru; stmt->lru_prev=NULL; if(sc->mru) sc->mru->lru_prev=stmt; sc->mru=stmt; if(!sc->lru) sc->lru=stmt; statementcache_sanity_check(sc); } stmt->inuse=0; #if SC_NRECYCLE > 0 if(!stmt->incache && sc->nrecycle<SC_NRECYCLE) { sc->recyclelist[sc->nrecycle++]=stmt; } else #endif { Py_DECREF(stmt); } return res; } /* returns SQLITE_OK on success. ppstmt will be next statement on success else null on error. reference will be consumed on ppstmt passed in and new reference on one returned */ static int statementcache_next(StatementCache *sc, APSWStatement **ppstmt, int usepreparev2) { PyObject *next=(*ppstmt)->next; int res; assert(next); Py_INCREF(next); res=statementcache_finalize(sc, *ppstmt, 0); /* INUSE_CALL not needed here */ /* defensive coding.
res will never be an error as errors would have been returned from earlier step call */ assert(res==SQLITE_OK); if(res!=SQLITE_OK) goto error; /* statementcache_prepare already sets exception */ *ppstmt=statementcache_prepare(sc, next, usepreparev2); /* INUSE_CALL not needed here */ res=(*ppstmt)?SQLITE_OK:SQLITE_ERROR; error: APSWBuffer_XDECREF_unlikely(next); return res; } static StatementCache* statementcache_init(sqlite3 *db, unsigned nentries) { StatementCache *sc=(StatementCache*)PyMem_Malloc(sizeof(StatementCache)); if(!sc) return NULL; memset(sc, 0, sizeof(StatementCache)); sc->db=db; /* sc->cache is left as null if we aren't caching */ if (nentries) { APSW_FAULT_INJECT(StatementCacheAllocFails, sc->cache=PyDict_New(), sc->cache=PyErr_NoMemory()); if(!sc->cache) { PyMem_Free(sc); return NULL; } } sc->maxentries=nentries; sc->mru=NULL; sc->lru=NULL; #if SC_NRECYCLE > 0 sc->nrecycle=0; #endif return sc; } static void statementcache_free(StatementCache *sc) { #if SC_NRECYCLE>0 while(sc->nrecycle) { PyObject *o=(PyObject*)sc->recyclelist[--sc->nrecycle]; Py_DECREF(o); } #endif Py_XDECREF(sc->cache); PyMem_Free(sc); #ifdef SC_STATS fprintf(stderr, "SC Miss: %u Hit: %u HitButInuse: %u\n", sc->st_cachemiss, sc->st_cachehit, sc->st_hitinuse); #endif } static void APSWStatement_dealloc(APSWStatement *stmt) { if(stmt->vdbestatement) _PYSQLITE_CALL_V(sqlite3_finalize(stmt->vdbestatement)); assert(stmt->inuse==0); APSWBuffer_XDECREF_likely(stmt->utf8); APSWBuffer_XDECREF_likely(stmt->next); Py_XDECREF(stmt->origquery); Py_TYPE(stmt)->tp_free((PyObject*)stmt); } /* Convert a utf8 buffer to PyUnicode */ static PyObject * convertutf8buffertounicode(PyObject *buffer) { assert(APSWBuffer_Check(buffer)); return convertutf8stringsize(APSWBuffer_AS_STRING(buffer), APSWBuffer_GET_SIZE(buffer)); } /* Convert a utf8 buffer and size to PyUnicode */ static PyObject * convertutf8buffersizetounicode(PyObject *buffer, Py_ssize_t len) { assert(APSWBuffer_Check(buffer)); 
assert(len<=APSWBuffer_GET_SIZE(buffer)); return convertutf8stringsize(APSWBuffer_AS_STRING(buffer), len); } static PyTypeObject APSWStatementType = { APSW_PYTYPE_INIT "apsw.APSWStatement", /*tp_name*/ sizeof(APSWStatement), /*tp_basicsize*/ 0, /*tp_itemsize*/ (destructor)APSWStatement_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_VERSION_TAG, /*tp_flags*/ "APSWStatement object", /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ 0, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ 0, /* tp_alloc */ 0, /* tp_new */ 0, /* tp_free */ 0, /* tp_is_gc */ 0, /* tp_bases */ 0, /* tp_mro */ 0, /* tp_cache */ 0, /* tp_subclasses */ 0, /* tp_weaklist */ 0 /* tp_del */ APSW_PYTYPE_VERSION }; apsw-3.8.11.1-r1/src/exceptions.c /* Exception code, data and macros See the accompanying LICENSE file.
*/

/* EXCEPTION TYPES */

static PyObject *APSWException;          /* root exception class */
static PyObject *ExcThreadingViolation;  /* thread misuse */
static PyObject *ExcIncomplete;          /* didn't finish previous query */
static PyObject *ExcBindings;            /* wrong number of bindings */
static PyObject *ExcComplete;            /* query is finished */
static PyObject *ExcTraceAbort;          /* aborted by exectrace */
static PyObject *ExcExtensionLoading;    /* error loading extension */
static PyObject *ExcConnectionNotClosed; /* connection wasn't closed when destructor called */
static PyObject *ExcCursorClosed;        /* cursor object was closed when function called */
static PyObject *ExcConnectionClosed;    /* connection was closed when function called */
static PyObject *ExcVFSNotImplemented;   /* base vfs doesn't implement function */
static PyObject *ExcVFSFileClosed;       /* attempted operation on closed file */
static PyObject *ExcForkingViolation;    /* used object across a fork */

static void make_exception(int res, sqlite3 *db);

/* If res indicates an SQLite error then do all the exception creation
   work.  We don't overwrite earlier exceptions hence the
   PyErr_Occurred check */
#define SET_EXC(res,db)  { if(res != SQLITE_OK && !PyErr_Occurred()) make_exception(res,db); }

/* A dictionary in which we store the last error message from each
   thread.  Previously this used thread local storage.  The key is a
   PyLong of the thread id and the value is a PyBytes. */
static PyObject *tls_errmsg;

/* This method is called with the database mutex held but the GIL
   released.  Previous code used thread local storage which is a bit
   too OS dependent (eg required a DllMain under Windows) but it
   didn't need any Python code.  It is safe to acquire the GIL since
   the db mutex has been acquired first so we are no different than a
   user defined function.
*/ static void apsw_set_errmsg(const char *msg) { PyObject *key=NULL, *value=NULL; PyObject *etype, *eval, *etb; PyGILState_STATE gilstate=PyGILState_Ensure(); /* dictionary operations whine if there is an outstanding error */ PyErr_Fetch(&etype, &eval, &etb); if(!tls_errmsg) { tls_errmsg=PyDict_New(); if(!tls_errmsg) goto finally; } key=PyLong_FromLong(PyThread_get_thread_ident()); if(!key) goto finally; value=PyBytes_FromStringAndSize(msg, strlen(msg)); if(!value) goto finally; PyDict_SetItem(tls_errmsg, key, value); finally: Py_XDECREF(key); Py_XDECREF(value); PyErr_Restore(etype, eval, etb); PyGILState_Release(gilstate); } static const char * apsw_get_errmsg(void) { const char *retval=NULL; PyObject *key=NULL, *value; /* set should always have been called first */ assert(tls_errmsg); key=PyLong_FromLong(PyThread_get_thread_ident()); if(!key) goto finally; value=PyDict_GetItem(tls_errmsg, key); if(value) retval=PyBytes_AsString(value); finally: Py_XDECREF(key); /* value is borrowed */ return retval; } static struct { int code; const char *name; PyObject *cls;} exc_descriptors[]= { /* Generic Errors */ {SQLITE_ERROR, "SQL", NULL}, {SQLITE_MISMATCH, "Mismatch", NULL}, {SQLITE_NOTFOUND, "NotFound", NULL}, /* Internal Errors */ {SQLITE_INTERNAL, "Internal", NULL}, /* NOT USED */ {SQLITE_PROTOCOL, "Protocol", NULL}, {SQLITE_MISUSE, "Misuse", NULL}, {SQLITE_RANGE, "Range", NULL}, /* permissions etc */ {SQLITE_PERM, "Permissions", NULL}, {SQLITE_READONLY, "ReadOnly", NULL}, {SQLITE_CANTOPEN, "CantOpen", NULL}, {SQLITE_AUTH, "Auth", NULL}, /* abort/busy/etc */ {SQLITE_ABORT, "Abort", NULL}, {SQLITE_BUSY, "Busy", NULL}, {SQLITE_LOCKED, "Locked", NULL}, {SQLITE_INTERRUPT,"Interrupt", NULL}, {SQLITE_SCHEMA, "SchemaChange", NULL}, {SQLITE_CONSTRAINT, "Constraint", NULL}, /* memory/disk/corrupt etc */ {SQLITE_NOMEM, "NoMem", NULL}, {SQLITE_IOERR, "IO", NULL}, {SQLITE_CORRUPT, "Corrupt", NULL}, {SQLITE_FULL, "Full", NULL}, {SQLITE_TOOBIG, "TooBig", NULL}, {SQLITE_NOLFS, 
"NoLFS", NULL}, {SQLITE_EMPTY, "Empty", NULL}, {SQLITE_FORMAT, "Format", NULL}, {SQLITE_NOTADB, "NotADB", NULL}, {-1, 0, 0} }; /* EXCEPTION CODE */ /* MS compiler is stupid and requires this pulled out */ typedef struct {PyObject **var; const char *name; } APSWExceptionMapping; static int init_exceptions(PyObject *m) { char buffy[100]; /* more than enough for anyone :-) */ unsigned int i; PyObject *obj; APSWExceptionMapping apswexceptions[]={ {&ExcThreadingViolation, "ThreadingViolationError"}, {&ExcIncomplete, "IncompleteExecutionError"}, {&ExcBindings, "BindingsError"}, {&ExcComplete, "ExecutionCompleteError"}, {&ExcTraceAbort, "ExecTraceAbort"}, {&ExcExtensionLoading, "ExtensionLoadingError"}, {&ExcConnectionNotClosed, "ConnectionNotClosedError"}, {&ExcConnectionClosed, "ConnectionClosedError"}, {&ExcCursorClosed, "CursorClosedError"}, {&ExcVFSNotImplemented, "VFSNotImplementedError"}, {&ExcVFSFileClosed, "VFSFileClosedError"}, {&ExcForkingViolation, "ForkingViolationError"} }; /* PyModule_AddObject uses borrowed reference so we incref whatever we give to it, so we still have a copy to use */ /* custom ones first */ APSWException=PyErr_NewException("apsw.Error", NULL, NULL); if(!APSWException) return -1; Py_INCREF(APSWException); if(PyModule_AddObject(m, "Error", (PyObject *)APSWException)) return -1; for(i=0; i`_ happen. Instead you should use bindings:: sql="insert into example values(?, ?)" cursor.execute(sql, ("string", 8390823904)) # You can also use dictionaries sql="insert into example values(:title, :isbn)" cursor.execute(sql, {"title": "string", "isbn": 8390823904}) # You can use local variables as the dictionary title="..." isbn="...." cursor.execute(sql, locals()) Cursors are cheap. Use as many as you need. It is safe to use them across threads, such as calling :meth:`~Cursor.execute` in one thread, passing the cursor to another thread that then calls :meth:`Cursor.next`. 
The only thing you can't do is call methods at exactly the same time on the same cursor in two different threads - eg trying to call :meth:`~Cursor.execute` in both at the same time, or :meth:`~Cursor.execute` in one and :meth:`Cursor.next` in another. (If you do attempt this, it will be detected and :exc:`ThreadingViolationError` will be raised.) Behind the scenes a :class:`Cursor` maps to a `SQLite statement `_. APSW maintains a :ref:`cache ` so that the mapping is very fast, and the SQLite objects are reused when possible. A unique feature of APSW is that your query can be multiple semi-colon separated statements. For example:: cursor.execute("select ... ; insert into ... ; update ... ; select ...") .. note:: SQLite fetches data as it is needed. If table *example* had 10 million rows it would only get the next row as requested (the for loop effectively calls :meth:`~Cursor.next` to get each row). This code would not work as expected:: for row in cursor.execute("select * from example"): cursor.execute("insert .....") The nested :meth:`~Cursor.execute` would start a new query abandoning any remaining results from the ``SELECT`` cursor. There are two ways to work around this. Use a different cursor:: for row in cursor1.execute("select * from example"): cursor2.execute("insert ...") You can also get all the rows immediately by filling in a list:: rows=list( cursor.execute("select * from example") ) for row in rows: cursor.execute("insert ...") This last approach is recommended since you don't have to worry about the database changing while doing the ``select``. You should also understand transactions and where to put the transaction boundaries. .. note:: Cursors on the same :ref:`Connection ` are not isolated from each other. Anything done on one cursor is immediately visible to all other Cursors on the same connection. This still applies if you start transactions. 
Connections are isolated from each other with cursors on other connections not seeing changes until they are committed. .. seealso:: * `SQLite transactions `_ * `Atomic commit `_ * `Example of changing the database while running a query problem `_ * :ref:`Benchmarking` */ /** .. class:: Cursor You obtain cursors by calling :meth:`Connection.cursor`. */ /* CURSOR TYPE */ struct APSWCursor { PyObject_HEAD Connection *connection; /* pointer to parent connection */ unsigned inuse; /* track if we are in use preventing concurrent thread mangling */ struct APSWStatement *statement; /* statement we are currently using */ /* what state we are in */ enum { C_BEGIN, C_ROW, C_DONE } status; /* bindings for query */ PyObject *bindings; /* dict or sequence */ Py_ssize_t bindingsoffset; /* for sequence tracks how far along we are when dealing with multiple statements */ /* iterator for executemany, original query string */ PyObject *emiter; PyObject *emoriginalquery; /* tracing functions */ PyObject *exectrace; PyObject *rowtrace; /* weak reference support */ PyObject *weakreflist; PyObject *description_cache[2]; }; typedef struct APSWCursor APSWCursor; static PyTypeObject APSWCursorType; /* CURSOR CODE */ /* Macro for getting a tracer. If our tracer is NULL or None then return 0 else return connection tracer */ #define ROWTRACE ( (self->rowtrace && self->rowtrace!=Py_None) ? self->rowtrace : ( (self->rowtrace==Py_None) ? 0 : self->connection->rowtrace ) ) #define EXECTRACE ( (self->exectrace && self->exectrace!=Py_None) ? self->exectrace : ( (self->exectrace==Py_None) ? 0 : self->connection->exectrace ) ) /* Do finalization and free resources. Returns the SQLITE error code. 
If force is 2 then don't raise any exceptions */ static int resetcursor(APSWCursor *self, int force) { int res=SQLITE_OK; PyObject *nextquery=self->statement?self->statement->next:NULL; PyObject *etype, *eval, *etb; Py_CLEAR(self->description_cache[0]); Py_CLEAR(self->description_cache[1]); if(force) PyErr_Fetch(&etype, &eval, &etb); Py_XINCREF(nextquery); if(self->statement) { INUSE_CALL(res=statementcache_finalize(self->connection->stmtcache, self->statement, !force)); if(!force) /* we don't care about errors when forcing */ { if(res==SQLITE_SCHEMA) { Py_XDECREF(nextquery); return res; } SET_EXC(res, self->connection->db); } self->statement=0; } Py_CLEAR(self->bindings); self->bindingsoffset= -1; if(!force && self->status!=C_DONE && nextquery) { if (res==SQLITE_OK) { /* We still have more, so this is actually an abort. */ res=SQLITE_ERROR; if(!PyErr_Occurred()) { PyErr_Format(ExcIncomplete, "Error: there are still remaining sql statements to execute"); AddTraceBackHere(__FILE__, __LINE__, "resetcursor", "{s: N}", "remaining", convertutf8buffertounicode(nextquery)); } } } Py_XDECREF(nextquery); if(!force && self->status!=C_DONE && self->emiter) { PyObject *next; INUSE_CALL(next=PyIter_Next(self->emiter)); if(next) { Py_DECREF(next); res=SQLITE_ERROR; assert(PyErr_Occurred()); } } Py_CLEAR(self->emiter); Py_CLEAR(self->emoriginalquery); self->status=C_DONE; if (PyErr_Occurred()) { assert(res); AddTraceBackHere(__FILE__, __LINE__, "resetcursor", "{s: i}", "res", res); } if(force) PyErr_Restore(etype, eval, etb); return res; } static int APSWCursor_close_internal(APSWCursor *self, int force) { PyObject *err_type, *err_value, *err_traceback; int res; if(force==2) PyErr_Fetch(&err_type, &err_value, &err_traceback); res=resetcursor(self, force); if(force==2) PyErr_Restore(err_type, err_value, err_traceback); else { if(res) { assert(PyErr_Occurred()); return 1; } assert(!PyErr_Occurred()); } /* Remove from connection dependents list. 
Has to be done before we decref self->connection otherwise connection could dealloc and we'd still be in list */ if(self->connection) Connection_remove_dependent(self->connection, (PyObject*)self); /* executemany iterator */ Py_CLEAR(self->emiter); /* no need for tracing */ Py_CLEAR(self->exectrace); Py_CLEAR(self->rowtrace); /* we no longer need connection */ Py_CLEAR(self->connection); Py_CLEAR(self->description_cache[0]); Py_CLEAR(self->description_cache[1]); return 0; } static void APSWCursor_dealloc(APSWCursor * self) { APSW_CLEAR_WEAKREFS; APSWCursor_close_internal(self, 2); Py_TYPE(self)->tp_free((PyObject*)self); } static void APSWCursor_init(APSWCursor *self, Connection *connection) { self->connection=connection; self->statement=0; self->status=C_DONE; self->bindings=0; self->bindingsoffset=0; self->emiter=0; self->emoriginalquery=0; self->exectrace=0; self->rowtrace=0; self->inuse=0; self->weakreflist=NULL; self->description_cache[0]=0; self->description_cache[1]=0; } static const char *description_formats[]={ "(O&O&)", "(O&O&OOOOO)" }; static PyObject * APSWCursor_internal_getdescription(APSWCursor *self, int fmtnum) { int ncols,i; PyObject *result=NULL; PyObject *column=NULL; assert(sizeof(description_formats)==sizeof(self->description_cache)); CHECK_USE(NULL); CHECK_CURSOR_CLOSED(NULL); if(!self->statement) { assert(self->description_cache[0]==0); assert(self->description_cache[1]==0); return PyErr_Format(ExcComplete, "Can't get description for statements that have completed execution"); } if(self->description_cache[fmtnum]) { Py_INCREF(self->description_cache[fmtnum]); return self->description_cache[fmtnum]; } ncols=sqlite3_column_count(self->statement->vdbestatement); result=PyTuple_New(ncols); if(!result) goto error; for(i=0;istatement->vdbestatement, i), coldesc=sqlite3_column_decltype(self->statement->vdbestatement, i)) ); APSW_FAULT_INJECT(GetDescriptionFail, column=Py_BuildValue(description_formats[fmtnum], convertutf8string, colname, 
convertutf8string, coldesc, Py_None, Py_None, Py_None, Py_None, Py_None), column=PyErr_NoMemory() ); if(!column) goto error; PyTuple_SET_ITEM(result, i, column); /* owned by result now */ column=0; } Py_INCREF(result); self->description_cache[fmtnum]=result; return result; error: Py_XDECREF(result); Py_XDECREF(column); return NULL; } /** .. method:: getdescription() -> tuple Returns a tuple describing each column in the result row. The return is identical for every row of the results. You can only call this method once you have started executing a statement and before you have finished:: # This will error cursor.getdescription() for row in cursor.execute("select ....."): # this works print cursor.getdescription() print row The information about each column is a tuple of ``(column_name, declared_column_type)``. The type is what was declared in the ``CREATE TABLE`` statement - the value returned in the row will be whatever type you put in for that row and column. (This is known as `manifest typing `_ which is also the way that Python works. The variable ``a`` could contain an integer, and then you could put a string in it. Other static languages such as C or other SQL databases only let you put one type in - eg ``a`` could only contain an integer or a string, but never both.) 
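Manifest typing is easy to observe directly. The snippet below uses the standard library ``sqlite3`` module purely so it runs self-contained; the underlying SQLite behaviour is identical to what APSW sees:

```python
import sqlite3

# The declared column types ("string", "number") do not constrain what
# a row actually stores: SQLite keeps whatever value was inserted.
con = sqlite3.connect(":memory:")
con.execute("create table books(title string, isbn number)")
con.execute("insert into books values(?, ?)", (97, "not-a-number"))
row = con.execute("select title, isbn from books").fetchone()
print(row)  # (97, 'not-a-number') - the int and str come back as inserted
```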
Example:: cursor.execute("create table books(title string, isbn number, wibbly wobbly zebra)") cursor.execute("insert into books values(?,?,?)", (97, "fjfjfj", 3.7)) cursor.execute("insert into books values(?,?,?)", ("fjfjfj", 3.7, 97)) for row in cursor.execute("select * from books"): print cursor.getdescription() print row Output:: # row 0 - description (('title', 'string'), ('isbn', 'number'), ('wibbly', 'wobbly zebra')) # row 0 - values (97, 'fjfjfj', 3.7) # row 1 - description (('title', 'string'), ('isbn', 'number'), ('wibbly', 'wobbly zebra')) # row 1 - values ('fjfjfj', 3.7, 97) -* sqlite3_column_name sqlite3_column_decltype */ static PyObject* APSWCursor_getdescription(APSWCursor *self) { return APSWCursor_internal_getdescription(self, 0); } /** .. attribute:: description Based on the `DB-API cursor property `__, this returns the same as :meth:`getdescription` but with 5 Nones appended. See also :issue:`131`. */ static PyObject *APSWCursor_getdescription_dbapi(APSWCursor *self) { return APSWCursor_internal_getdescription(self, 1); } /* internal function - returns SQLite error code (ie SQLITE_OK if all is well) */ static int APSWCursor_dobinding(APSWCursor *self, int arg, PyObject *obj) { /* DUPLICATE(ish) code: this is substantially similar to the code in set_context_result. If you fix anything here then do it there as well. */ int res=SQLITE_OK; assert(!PyErr_Occurred()); if(obj==Py_None) PYSQLITE_CUR_CALL(res=sqlite3_bind_null(self->statement->vdbestatement, arg)); /* Python uses a 'long' for storage of PyInt. This could be a 32bit or 64bit quantity depending on the platform. 
*/ #if PY_MAJOR_VERSION < 3 else if(PyInt_Check(obj)) { long v=PyInt_AS_LONG(obj); PYSQLITE_CUR_CALL(res=sqlite3_bind_int64(self->statement->vdbestatement, arg, v)); } #endif else if (PyLong_Check(obj)) { /* nb: PyLong_AsLongLong can cause Python level error */ long long v=PyLong_AsLongLong(obj); PYSQLITE_CUR_CALL(res=sqlite3_bind_int64(self->statement->vdbestatement, arg, v)); } else if (PyFloat_Check(obj)) { double v=PyFloat_AS_DOUBLE(obj); PYSQLITE_CUR_CALL(res=sqlite3_bind_double(self->statement->vdbestatement, arg, v)); } else if (PyUnicode_Check(obj)) { const void *badptr=NULL; UNIDATABEGIN(obj) APSW_FAULT_INJECT(DoBindingUnicodeConversionFails,,strdata=(char*)PyErr_NoMemory()); badptr=strdata; #ifdef APSW_TEST_LARGE_OBJECTS APSW_FAULT_INJECT(DoBindingLargeUnicode,,strbytes=0x001234567890L); #endif if(strdata) { if(strbytes>APSW_INT32_MAX) { SET_EXC(SQLITE_TOOBIG, NULL); } else PYSQLITE_CUR_CALL(res=USE16(sqlite3_bind_text)(self->statement->vdbestatement, arg, strdata, strbytes, SQLITE_TRANSIENT)); } UNIDATAEND(obj); if(!badptr) { assert(PyErr_Occurred()); return -1; } } #if PY_MAJOR_VERSION < 3 else if (PyString_Check(obj)) { const char *val=PyString_AS_STRING(obj); const size_t lenval=PyString_GET_SIZE(obj); const char *chk=val; if(lenval<10000) for(;chkAPSW_INT32_MAX) { SET_EXC(SQLITE_TOOBIG, NULL); res=SQLITE_TOOBIG; } else PYSQLITE_CUR_CALL(res=USE16(sqlite3_bind_text)(self->statement->vdbestatement, arg, strdata, strbytes, SQLITE_TRANSIENT)); } UNIDATAEND(str2); Py_DECREF(str2); if(!badptr) { assert(PyErr_Occurred()); return -1; } } else { assert(lenvalstatement->vdbestatement, arg, val, lenval, SQLITE_TRANSIENT)); } } #endif else if (PyObject_CheckReadBuffer(obj)) { const void *buffer; Py_ssize_t buflen; int asrb; APSW_FAULT_INJECT(DoBindingAsReadBufferFails,asrb=PyObject_AsReadBuffer(obj, &buffer, &buflen), (PyErr_NoMemory(), asrb=-1)); if(asrb!=0) return -1; if (buflen>APSW_INT32_MAX) { SET_EXC(SQLITE_TOOBIG, NULL); return -1; } 
PYSQLITE_CUR_CALL(res=sqlite3_bind_blob(self->statement->vdbestatement, arg, buffer, buflen, SQLITE_TRANSIENT)); } else if(PyObject_TypeCheck(obj, &ZeroBlobBindType)==1) { PYSQLITE_CUR_CALL(res=sqlite3_bind_zeroblob(self->statement->vdbestatement, arg, ((ZeroBlobBind*)obj)->blobsize)); } else { PyErr_Format(PyExc_TypeError, "Bad binding argument type supplied - argument #%d: type %s", (int)(arg+self->bindingsoffset), Py_TYPE(obj)->tp_name); return -1; } if(res!=SQLITE_OK) { SET_EXC(res, self->connection->db); return -1; } if(PyErr_Occurred()) return -1; return 0; } /* internal function */ static int APSWCursor_dobindings(APSWCursor *self) { int nargs, arg, res=-1, sz=0; PyObject *obj; assert(!PyErr_Occurred()); assert(self->bindingsoffset>=0); nargs=sqlite3_bind_parameter_count(self->statement->vdbestatement); if(nargs==0 && !self->bindings) return 0; /* common case, no bindings needed or supplied */ if (nargs>0 && !self->bindings) { PyErr_Format(ExcBindings, "Statement has %d bindings but you didn't supply any!", nargs); return -1; } /* a dictionary? */ if (self->bindings && PyDict_Check(self->bindings)) { for(arg=1;arg<=nargs;arg++) { PyObject *keyo=NULL; const char *key; PYSQLITE_CUR_CALL(key=sqlite3_bind_parameter_name(self->statement->vdbestatement, arg)); if(!key) { PyErr_Format(ExcBindings, "Binding %d has no name, but you supplied a dict (which only has names).", arg-1); return -1; } assert(*key==':' || *key=='$'); key++; /* first char is a colon or dollar which we skip */ keyo=PyUnicode_DecodeUTF8(key, strlen(key), NULL); if(!keyo) return -1; obj=PyDict_GetItem(self->bindings, keyo); Py_DECREF(keyo); if(!obj) /* this is where we could error on missing keys */ continue; if(APSWCursor_dobinding(self,arg,obj)!=SQLITE_OK) { assert(PyErr_Occurred()); return -1; } } return 0; } /* it must be a fast sequence */ /* verify the number of args supplied */ if (self->bindings) sz=PySequence_Fast_GET_SIZE(self->bindings); /* there is another statement after this one ... 
*/ if(self->statement->next && sz-self->bindingsoffsetbindings)?sz:0, (int)(self->bindingsoffset)); return -1; } /* no more statements */ if(!self->statement->next && sz-self->bindingsoffset!=nargs) { PyErr_Format(ExcBindings, "Incorrect number of bindings supplied. The current statement uses %d and there are %d supplied. Current offset is %d", nargs, (self->bindings)?sz:0, (int)(self->bindingsoffset)); return -1; } res=SQLITE_OK; /* nb sqlite starts bind args at one not zero */ for(arg=1;arg<=nargs;arg++) { obj=PySequence_Fast_GET_ITEM(self->bindings, arg-1+self->bindingsoffset); if(APSWCursor_dobinding(self, arg, obj)) { assert(PyErr_Occurred()); return -1; } } self->bindingsoffset+=nargs; assert(res==0); return 0; } static int APSWCursor_doexectrace(APSWCursor *self, Py_ssize_t savedbindingsoffset) { PyObject *retval=NULL; PyObject *sqlcmd=NULL; PyObject *bindings=NULL; PyObject *exectrace; int result; exectrace=EXECTRACE; assert(exectrace); assert(self->statement); /* make a string of the command */ sqlcmd=convertutf8buffersizetounicode(self->statement->utf8, self->statement->querylen); if(!sqlcmd) return -1; /* now deal with the bindings */ if(self->bindings) { if(PyDict_Check(self->bindings)) { bindings=self->bindings; Py_INCREF(self->bindings); } else { APSW_FAULT_INJECT(DoExecTraceBadSlice, bindings=PySequence_GetSlice(self->bindings, savedbindingsoffset, self->bindingsoffset), bindings=PyErr_NoMemory()); if(!bindings) { Py_DECREF(sqlcmd); return -1; } } } else { bindings=Py_None; Py_INCREF(bindings); } retval=PyObject_CallFunction(exectrace, "ONN", self, sqlcmd, bindings); if(!retval) { assert(PyErr_Occurred()); return -1; } result=PyObject_IsTrue(retval); Py_DECREF(retval); assert (result==-1 || result==0 || result ==1); if(result==-1) { assert(PyErr_Occurred()); return -1; } if(result) return 0; /* callback didn't want us to continue */ PyErr_Format(ExcTraceAbort, "Aborted by false/null return value of exec tracer"); return -1; } static PyObject* 
APSWCursor_dorowtrace(APSWCursor *self, PyObject *retval) { PyObject *rowtrace=ROWTRACE; assert(rowtrace); return PyObject_CallFunction(rowtrace, "OO", self, retval); } /* Returns a borrowed reference to self if all is ok, else NULL on error */ static PyObject * APSWCursor_step(APSWCursor *self) { int res; int savedbindingsoffset=0; /* initialised to stop stupid compiler from whining */ for(;;) { assert(!PyErr_Occurred()); PYSQLITE_CUR_CALL(res=(self->statement->vdbestatement)?(sqlite3_step(self->statement->vdbestatement)):(SQLITE_DONE)); switch(res&0xff) { case SQLITE_ROW: self->status=C_ROW; return (PyErr_Occurred())?(NULL):((PyObject*)self); case SQLITE_DONE: if (PyErr_Occurred()) { self->status=C_DONE; return NULL; } break; default: /* FALLTHRU */ case SQLITE_ERROR: /* SQLITE_BUSY is handled here as well */ /* there was an error - we need to get actual error code from sqlite3_finalize */ self->status=C_DONE; if(PyErr_Occurred()) /* we don't care about further errors from the sql */ resetcursor(self, 1); else { res=resetcursor(self, 0); /* this will get the error code for us */ assert(res!=SQLITE_OK); } if(res==SQLITE_SCHEMA && !PyErr_Occurred()) { self->status=C_BEGIN; continue; } return NULL; } assert(res==SQLITE_DONE); /* done with that statement, are there any more? 
*/ self->status=C_DONE; if(!self->statement->next) { PyObject *next; /* in executemany mode ?*/ if(!self->emiter) { /* no more so we finalize */ res=resetcursor(self, 0); assert(res==SQLITE_OK); return (PyObject*)self; } /* we are in executemany mode */ INUSE_CALL(next=PyIter_Next(self->emiter)); if(PyErr_Occurred()) { assert(!next); return NULL; } if(!next) { res=resetcursor(self, 0); assert(res==SQLITE_OK); return (PyObject*)self; } /* we need to clear just completed and restart original executemany statement */ INUSE_CALL(statementcache_finalize(self->connection->stmtcache, self->statement, 0)); self->statement=NULL; /* don't need bindings from last round if emiter.next() */ Py_CLEAR(self->bindings); self->bindingsoffset=0; /* verify type of next before putting in bindings */ if(PyDict_Check(next)) self->bindings=next; else { self->bindings=PySequence_Fast(next, "You must supply a dict or a sequence"); /* we no longer need next irrespective of what happens in line above */ Py_DECREF(next); if(!self->bindings) return NULL; } assert(self->bindings); } /* finalise and go again */ if(!self->statement) { /* we are going again in executemany mode */ assert(self->emiter); INUSE_CALL(self->statement=statementcache_prepare(self->connection->stmtcache, self->emoriginalquery, 1)); res=(self->statement)?SQLITE_OK:SQLITE_ERROR; } else { /* next sql statement */ INUSE_CALL(res=statementcache_next(self->connection->stmtcache, &self->statement, !!self->bindings)); SET_EXC(res, self->connection->db); } if (res!=SQLITE_OK) { assert((res&0xff)!=SQLITE_BUSY); /* finalize shouldn't be returning busy, only step */ assert(!self->statement); return NULL; } assert(self->statement); savedbindingsoffset=self->bindingsoffset; assert(!PyErr_Occurred()); Py_CLEAR(self->description_cache[0]); Py_CLEAR(self->description_cache[1]); if(APSWCursor_dobindings(self)) { assert(PyErr_Occurred()); return NULL; } if(EXECTRACE) { if(APSWCursor_doexectrace(self, savedbindingsoffset)) { 
assert(self->status==C_DONE); assert(PyErr_Occurred()); return NULL; } } assert(self->status==C_DONE); self->status=C_BEGIN; } /* you can't actually get here */ assert(0); return NULL; } /** .. method:: execute(statements[, bindings]) -> iterator Executes the statements using the supplied bindings. Execution returns when the first row is available or all statements have completed. :param statements: One or more SQL statements such as ``select * from books`` or ``begin; insert into books ...; select last_insert_rowid(); end``. :param bindings: If supplied should either be a sequence or a dictionary. Each item must be one of the :ref:`supported types ` If you use numbered bindings in the query then supply a sequence. Any sequence will work including lists and iterators. For example:: cursor.execute("insert into books values(?,?)", ("title", "number")) .. note:: A common gotcha is wanting to insert a single string but not putting it in a tuple:: cursor.execute("insert into books values(?)", "a title") The string is a sequence of 8 characters and so it will look like you are supplying 8 bindings when only one is needed. Use a one item tuple with a trailing comma like this:: cursor.execute("insert into books values(?)", ("a title",) ) If you used names in the statement then supply a dictionary as the binding. It is ok to be missing entries from the dictionary - None/null will be used. For example:: cursor.execute("insert into books values(:title, :isbn, :rating)", {"title": "book title", "isbn": 908908908}) The return is the cursor object itself which is also an iterator. 
This allows you to write:: for row in cursor.execute("select * from books"): print row :raises TypeError: The bindings supplied were neither a dict nor a sequence :raises BindingsError: You supplied too many or too few bindings for the statements :raises IncompleteExecutionError: There are remaining unexecuted queries from your last execute -* sqlite3_prepare_v2 sqlite3_step sqlite3_bind_int64 sqlite3_bind_null sqlite3_bind_text sqlite3_bind_double sqlite3_bind_blob sqlite3_bind_zeroblob .. seealso:: * :ref:`executionmodel` * :ref:`Example ` */ static PyObject * APSWCursor_execute(APSWCursor *self, PyObject *args) { int res; int savedbindingsoffset=-1; PyObject *retval=NULL; PyObject *query; CHECK_USE(NULL); CHECK_CURSOR_CLOSED(NULL); res=resetcursor(self, /* force= */ 0); if(res!=SQLITE_OK) { assert(PyErr_Occurred()); return NULL; } assert(!self->bindings); assert(PyTuple_Check(args)); if(PyTuple_GET_SIZE(args)<1 || PyTuple_GET_SIZE(args)>2) return PyErr_Format(PyExc_TypeError, "Incorrect number of arguments. 
execute(statements [,bindings])"); query=PyTuple_GET_ITEM(args, 0); if (PyTuple_GET_SIZE(args)==2) if (PyTuple_GET_ITEM(args, 1)!=Py_None) self->bindings=PyTuple_GET_ITEM(args, 1); if(self->bindings) { if(PyDict_Check(self->bindings)) Py_INCREF(self->bindings); else { self->bindings=PySequence_Fast(self->bindings, "You must supply a dict or a sequence"); if(!self->bindings) return NULL; } } assert(!self->statement); assert(!PyErr_Occurred()); INUSE_CALL(self->statement=statementcache_prepare(self->connection->stmtcache, query, !!self->bindings)); if (!self->statement) { AddTraceBackHere(__FILE__, __LINE__, "APSWCursor_execute.sqlite3_prepare", "{s: O, s: O}", "Connection", self->connection, "statement", query); return NULL; } assert(!PyErr_Occurred()); self->bindingsoffset=0; savedbindingsoffset=0; if(APSWCursor_dobindings(self)) { assert(PyErr_Occurred()); return NULL; } if(EXECTRACE) { if(APSWCursor_doexectrace(self, savedbindingsoffset)) { assert(PyErr_Occurred()); return NULL; } } self->status=C_BEGIN; retval=APSWCursor_step(self); if (!retval) { assert(PyErr_Occurred()); return NULL; } Py_INCREF(retval); return retval; } /** .. method:: executemany(statements, sequenceofbindings) -> iterator This method is for when you want to execute the same statements over a sequence of bindings. Conceptually it does this:: for binding in sequenceofbindings: cursor.execute(statements, binding) Example:: rows=( (1, 7), (2, 23), (4, 92), (12, 12) ) cursor.executemany("insert into nums values(?,?)", rows) The return is the cursor itself which acts as an iterator. Your statements can return data. See :meth:`~Cursor.execute` for more information. 
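The conceptual loop can be tried with the standard library ``sqlite3`` module, whose ``executemany`` has the same shape (used here only so the snippet is self-contained; APSW's version behaves the same for this case, and additionally allows multiple semicolon-separated statements):

```python
import sqlite3

# Run the same insert once per binding tuple.
con = sqlite3.connect(":memory:")
con.execute("create table nums(a, b)")
rows = ((1, 7), (2, 23), (4, 92), (12, 12))
con.executemany("insert into nums values(?, ?)", rows)
count = con.execute("select count(*) from nums").fetchone()[0]
print(count)  # 4
```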
*/
static PyObject *
APSWCursor_executemany(APSWCursor *self, PyObject *args)
{
  int res;
  PyObject *retval=NULL;
  PyObject *theiterable=NULL;
  PyObject *next=NULL;
  PyObject *query=NULL;
  int savedbindingsoffset=-1;

  CHECK_USE(NULL);
  CHECK_CURSOR_CLOSED(NULL);

  res=resetcursor(self, /* force= */ 0);
  if(res!=SQLITE_OK)
    {
      assert(PyErr_Occurred());
      return NULL;
    }

  assert(!self->bindings);
  assert(!self->emiter);
  assert(!self->emoriginalquery);
  assert(self->status==C_DONE);

  if(!PyArg_ParseTuple(args, "OO:executemany(statements, sequenceofbindings)", &query, &theiterable))
    return NULL;

  self->emiter=PyObject_GetIter(theiterable);
  if (!self->emiter)
    return PyErr_Format(PyExc_TypeError, "2nd parameter must be iterable");

  INUSE_CALL(next=PyIter_Next(self->emiter));
  if(!next && PyErr_Occurred())
    return NULL;
  if(!next)
    {
      /* empty list */
      Py_INCREF(self);
      return (PyObject*)self;
    }

  if(PyDict_Check(next))
    self->bindings=next;
  else
    {
      self->bindings=PySequence_Fast(next, "You must supply a dict or a sequence");
      Py_DECREF(next); /* _Fast makes new reference */
      if(!self->bindings)
        return NULL;
    }

  assert(!self->statement);
  assert(!PyErr_Occurred());
  INUSE_CALL(self->statement=statementcache_prepare(self->connection->stmtcache, query, 1));
  if (!self->statement)
    {
      AddTraceBackHere(__FILE__, __LINE__, "APSWCursor_executemany.sqlite3_prepare", "{s: O, s: O}",
                       "Connection", self->connection,
                       "statement", query);
      return NULL;
    }
  assert(!PyErr_Occurred());

  self->emoriginalquery=self->statement->utf8;
  Py_INCREF(self->emoriginalquery);

  self->bindingsoffset=0;
  savedbindingsoffset=0;

  if(APSWCursor_dobindings(self))
    {
      assert(PyErr_Occurred());
      return NULL;
    }

  if(EXECTRACE)
    {
      if(APSWCursor_doexectrace(self, savedbindingsoffset))
        {
          assert(PyErr_Occurred());
          return NULL;
        }
    }

  self->status=C_BEGIN;

  retval=APSWCursor_step(self);
  if (!retval)
    {
      assert(PyErr_Occurred());
      return NULL;
    }
  Py_INCREF(retval);
  return retval;
}

/** ..
method:: close(force=False)

    It is very unlikely you will need to call this method.  It exists
    because older versions of SQLite required all Connection/Cursor
    activity to be confined to the same thread.  That is no longer the
    case.  Cursors are automatically garbage collected and when there
    are none left will allow the connection to be garbage collected if
    it has no other references.

    A cursor is open if there are remaining statements to execute (if
    your query included multiple statements), or if you called
    :meth:`~Cursor.executemany` and not all of the
    *sequenceofbindings* have been used yet.

    :param force: If False then you will get exceptions if there is
      remaining work to be done in the Cursor such as more statements
      to execute, more data from the executemany binding sequence etc.
      If force is True then all remaining work and state information
      will be silently discarded.

*/
static PyObject *
APSWCursor_close(APSWCursor *self, PyObject *args)
{
  int force=0;

  CHECK_USE(NULL);
  if(!self->connection)
    Py_RETURN_NONE;

  if(!PyArg_ParseTuple(args, "|i:close(force=False)", &force))
    return NULL;

  APSWCursor_close_internal(self, !!force);
  if(PyErr_Occurred())
    return NULL;

  Py_RETURN_NONE;
}

static PyObject *
APSWCursor_next(APSWCursor *self)
{
  PyObject *retval;
  PyObject *item;
  int numcols=-1;
  int i;

  CHECK_USE(NULL);
  CHECK_CURSOR_CLOSED(NULL);

 again:
  if(self->status==C_BEGIN)
    if(!APSWCursor_step(self))
      {
        assert(PyErr_Occurred());
        return NULL;
      }
  if(self->status==C_DONE)
    return NULL;

  assert(self->status==C_ROW);
  self->status=C_BEGIN;

  /* return the row of data */
  numcols=sqlite3_data_count(self->statement->vdbestatement);
  retval=PyTuple_New(numcols);
  if(!retval)
    goto error;

  for(i=0;i<numcols;i++)
    {
      INUSE_CALL(item=convert_column_to_pyobject(self->statement->vdbestatement, i));
      if(!item)
        goto error;
      PyTuple_SET_ITEM(retval, i, item);
    }

  if(ROWTRACE)
    {
      PyObject *r2=APSWCursor_dorowtrace(self, retval);
      Py_DECREF(retval);
      if(!r2)
        return NULL;
      if (r2==Py_None)
        {
          Py_DECREF(r2);
          goto again;
        }
      return r2;
    }

  return retval;

 error:
  Py_XDECREF(retval);
  return NULL;
}
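The row-tracer handling in ``APSWCursor_next`` (call the tracer on each fetched row, skip the row when it returns None, otherwise substitute its return value) can be sketched in pure Python. This is a simplification: the real tracer also receives the cursor as its first argument.

```python
# Generator applying a row tracer the way the cursor's next() does:
# None from the tracer skips the row, any other value replaces it.
def traced(rows, rowtrace=None):
    for row in rows:
        if rowtrace is None:
            yield row
            continue
        replacement = rowtrace(row)
        if replacement is None:
            continue  # tracer asked to skip this row
        yield replacement

def only_even_doubled(row):
    # keep even first columns, doubled; drop the rest
    return (row[0] * 2,) if row[0] % 2 == 0 else None

print(list(traced([(1,), (2,), (3,), (4,)], only_even_doubled)))  # [(4,), (8,)]
```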
static PyObject * APSWCursor_iter(APSWCursor *self) { CHECK_USE(NULL); CHECK_CURSOR_CLOSED(NULL); Py_INCREF(self); return (PyObject*)self; } /** .. method:: setexectrace(callable) *callable* is called with the cursor, statement and bindings for each :meth:`~Cursor.execute` or :meth:`~Cursor.executemany` on this cursor. If *callable* is :const:`None` then any existing execution tracer is removed. .. seealso:: * :ref:`tracing` * :ref:`executiontracer` * :meth:`Connection.setexectrace` */ static PyObject * APSWCursor_setexectrace(APSWCursor *self, PyObject *func) { CHECK_USE(NULL); CHECK_CURSOR_CLOSED(NULL); if(func!=Py_None && !PyCallable_Check(func)) { PyErr_SetString(PyExc_TypeError, "parameter must be callable or None"); return NULL; } Py_INCREF(func); Py_XDECREF(self->exectrace); self->exectrace=func; Py_RETURN_NONE; } /** .. method:: setrowtrace(callable) *callable* is called with cursor and row being returned. You can change the data that is returned or cause the row to be skipped altogether. If *callable* is :const:`None` then any existing row tracer is removed. .. seealso:: * :ref:`tracing` * :ref:`rowtracer` * :meth:`Connection.setrowtrace` */ static PyObject * APSWCursor_setrowtrace(APSWCursor *self, PyObject *func) { CHECK_USE(NULL); CHECK_CURSOR_CLOSED(NULL); if(func!=Py_None && !PyCallable_Check(func)) { PyErr_SetString(PyExc_TypeError, "parameter must be callable or None"); return NULL; } Py_INCREF(func); Py_XDECREF(self->rowtrace); self->rowtrace=func; Py_RETURN_NONE; } /** .. method:: getexectrace() -> callable or None Returns the currently installed (via :meth:`~Cursor.setexectrace`) execution tracer. .. seealso:: * :ref:`tracing` */ static PyObject * APSWCursor_getexectrace(APSWCursor *self) { PyObject *ret; CHECK_USE(NULL); CHECK_CURSOR_CLOSED(NULL); ret=(self->exectrace)?(self->exectrace):Py_None; Py_INCREF(ret); return ret; } /** ..
method:: getrowtrace() -> callable or None Returns the currently installed (via :meth:`~Cursor.setrowtrace`) row tracer. .. seealso:: * :ref:`tracing` */ static PyObject * APSWCursor_getrowtrace(APSWCursor *self) { PyObject *ret; CHECK_USE(NULL); CHECK_CURSOR_CLOSED(NULL); ret =(self->rowtrace)?(self->rowtrace):Py_None; Py_INCREF(ret); return ret; } /** .. method:: getconnection() -> Connection Returns the :class:`Connection` this cursor belongs to. An example usage is to get another cursor:: def func(cursor): # I don't want to alter existing cursor, so make a new one mycursor=cursor.getconnection().cursor() mycursor.execute("....") */ static PyObject * APSWCursor_getconnection(APSWCursor *self) { CHECK_USE(NULL); CHECK_CURSOR_CLOSED(NULL); Py_INCREF(self->connection); return (PyObject*)self->connection; } /** .. method:: fetchall() -> list Returns all remaining result rows as a list. This method is defined in DBAPI. It is a longer way of doing ``list(cursor)``. */ static PyObject * APSWCursor_fetchall(APSWCursor *self) { CHECK_USE(NULL); CHECK_CURSOR_CLOSED(NULL); return PySequence_List((PyObject*)self); } /** .. method:: fetchone() -> row or None Returns the next row of data or None if there are no more rows. 
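Minimal tracer callables matching the signatures documented above might look like this (the policy they implement is purely illustrative):

```python
def exectrace(cursor, statement, bindings):
    # Veto anything that is not a plain SELECT.  Returning False makes
    # APSW abort the statement (apsw.ExecTraceAbort is raised).
    return statement.lstrip().lower().startswith("select")

def rowtracer(cursor, row):
    # Upper-case string values; returning None skips the row entirely.
    if row and row[0] is None:
        return None
    return tuple(v.upper() if isinstance(v, str) else v for v in row)

# installed with cursor.setexectrace(exectrace) / cursor.setrowtrace(rowtracer)
```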
*/ static PyObject * APSWCursor_fetchone(APSWCursor *self) { PyObject *res; CHECK_USE(NULL); CHECK_CURSOR_CLOSED(NULL); res=APSWCursor_next(self); if(res==NULL && !PyErr_Occurred()) Py_RETURN_NONE; return res; } static PyMethodDef APSWCursor_methods[] = { {"execute", (PyCFunction)APSWCursor_execute, METH_VARARGS, "Executes one or more statements" }, {"executemany", (PyCFunction)APSWCursor_executemany, METH_VARARGS, "Repeatedly executes statements on sequence" }, {"setexectrace", (PyCFunction)APSWCursor_setexectrace, METH_O, "Installs a function called for every statement executed"}, {"setrowtrace", (PyCFunction)APSWCursor_setrowtrace, METH_O, "Installs a function called for every row returned"}, {"getexectrace", (PyCFunction)APSWCursor_getexectrace, METH_NOARGS, "Returns the current exec tracer function"}, {"getrowtrace", (PyCFunction)APSWCursor_getrowtrace, METH_NOARGS, "Returns the current row tracer function"}, {"getconnection", (PyCFunction)APSWCursor_getconnection, METH_NOARGS, "Returns the connection object for this cursor"}, {"getdescription", (PyCFunction)APSWCursor_getdescription, METH_NOARGS, "Returns the description for the current row"}, {"close", (PyCFunction)APSWCursor_close, METH_VARARGS, "Closes the cursor" }, {"fetchall", (PyCFunction)APSWCursor_fetchall, METH_NOARGS, "Fetches all result rows" }, {"fetchone", (PyCFunction)APSWCursor_fetchone, METH_NOARGS, "Fetches next result row" }, {0, 0, 0, 0} /* Sentinel */ }; static PyGetSetDef APSWCursor_getset[] = { {"description", (getter)APSWCursor_getdescription_dbapi, NULL, "Subset of DB-API description attribute", NULL}, {NULL, NULL, NULL, NULL, NULL} }; static PyTypeObject APSWCursorType = { APSW_PYTYPE_INIT "apsw.Cursor", /*tp_name*/ sizeof(APSWCursor), /*tp_basicsize*/ 0, /*tp_itemsize*/ (destructor)APSWCursor_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash */ 0, 
/*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_VERSION_TAG #if PY_MAJOR_VERSION < 3 | Py_TPFLAGS_HAVE_ITER #endif , /*tp_flags*/ "Cursor object", /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(APSWCursor, weakreflist), /* tp_weaklistoffset */ (getiterfunc)APSWCursor_iter, /* tp_iter */ (iternextfunc)APSWCursor_next, /* tp_iternext */ APSWCursor_methods, /* tp_methods */ 0, /* tp_members */ APSWCursor_getset, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ 0, /* tp_alloc */ 0, /* tp_new */ 0, /* tp_free */ 0, /* tp_is_gc */ 0, /* tp_bases */ 0, /* tp_mro */ 0, /* tp_cache */ 0, /* tp_subclasses */ 0, /* tp_weaklist */ 0 /* tp_del */ APSW_PYTYPE_VERSION };
apsw-3.8.11.1-r1/src/vfs.c
/* VFS code See the accompanying LICENSE file. */ /** .. _vfs: Virtual File System (VFS) ************************* SQLite 3.6 has new `VFS functionality <https://sqlite.org/c3ref/vfs.html>`_ which defines the interface between the SQLite core and the underlying operating system. The majority of the functionality deals with files. APSW exposes this functionality letting you provide your own routines. You can also *inherit* from an existing vfs making it easy to augment or override specific routines. For example you could obfuscate your database by XORing the data implemented by augmenting the read and write methods. The method names are exactly the same as SQLite uses making it easier to read the SQLite documentation, trouble tickets, web searches or mailing lists. The SQLite convention results in names like xAccess, xCurrentTime and xWrite. You specify which VFS to use as a parameter to the :class:`Connection` constructor. ..
code-block:: python db=apsw.Connection("file", vfs="myvfs") The easiest way to get started is to make a :class:`VFS` derived class that inherits from the default vfs. Then override methods you want to change behaviour of. If you want to just change how file operations are done then you have to override :meth:`VFS.xOpen` to return a file instance that has your overridden :class:`VFSFile` methods. The :ref:`example ` demonstrates obfuscating the database file contents. .. note:: All strings supplied and returned to :class:`VFS`/:class:`VFSFile` routines are treated as Unicode. Exceptions and errors ===================== To return an error from any routine you should raise an exception. The exception will be translated into the appropriate SQLite error code for SQLite. To return a specific SQLite error code use :meth:`exceptionfor`. If the exception does not map to any specific error code then :const:`SQLITE_ERROR` which corresponds to :exc:`SQLError` is returned to SQLite. The SQLite code that deals with VFS errors behaves in varying ways. Some routines have no way to return an error (eg `xDlOpen `_ just returns zero/NULL on being unable to load a library, `xSleep `_ has no error return parameter), others are unified (eg almost any error in xWrite will be returned to the user as disk full error). Sometimes errors are ignored as they are harmless such as when a journal can't be deleted after a commit (the journal is marked as obsolete before being deleted). Simple operations such as opening a database can result in many different VFS function calls such as hot journals being detected, locking, and read/writes for playback/rollback. To avoid confusion with exceptions being raised in the VFS and exceptions from normal code to open Connections or execute SQL queries, VFS exceptions are not raised in the normal way. (If they were, only one could be raised and it would obscure whatever exceptions the :class:`Connection` open or SQL query execute wanted to raise.) 
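The XOR obfuscation mentioned above reduces to a single helper, since XOR with a fixed key is its own inverse (the helper name and key value here are illustrative, not part of APSW):

```python
def xor_obfuscate(data, key=0x9C):
    # XORing twice with the same key restores the original bytes, so one
    # routine serves both an augmented xRead (deobfuscate after reading)
    # and xWrite (obfuscate before writing) in a VFSFile subclass.
    return bytes(b ^ key for b in data)
```

In an inherited :class:`VFSFile` you would apply this to the buffer in your overridden read and write methods before delegating to the base class.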
Instead the :meth:`VFS.excepthook` or :meth:`VFSFile.excepthook` method is called with a tuple of exception type, exception value and exception traceback. The default implementation of ``excepthook`` calls ``sys.excepthook()`` which under Python 2 shows the stack trace and under Python 3 merely prints the exception value. (If ``sys.excepthook`` fails then ``PyErr_Display()`` is called.) In normal VFS usage there will be no exceptions raised, or specific expected ones which APSW clears after noting them and returning the appropriate value back to SQLite. The exception hooking behaviour helps you find issues in your code or unexpected behaviour of the external environment. Remember that :ref:`augmented stack traces ` are available which significantly increase detail about the exceptions. As an example, lets say you have a divide by zero error in your xWrite routine. The table below shows what happens with time going down and across. +----------------------------------------------+--------------------------------+---------------------------------------------+ | Python Query Code | SQLite and APSW C code | Python VFS code | +==============================================+================================+=============================================+ | ``cursor.execute("update table set foo=3")`` | | | +----------------------------------------------+--------------------------------+---------------------------------------------+ | | SQLite starts executing query | | +----------------------------------------------+--------------------------------+---------------------------------------------+ | | | Your VFS routines are called | +----------------------------------------------+--------------------------------+---------------------------------------------+ | | | Your xWrite divides by zero | +----------------------------------------------+--------------------------------+---------------------------------------------+ | | | :meth:`VFSFile.excepthook` is called with | | | | 
ZeroDivision exception | +----------------------------------------------+--------------------------------+---------------------------------------------+ | | :const:`SQLITE_ERROR` (closest | | | | matching SQLite error code) is | | | | returned to SQLite by APSW | | +----------------------------------------------+--------------------------------+---------------------------------------------+ | | SQLite error handling and | More VFS routines are called. Any | | | recovery operates which calls | exceptions in these routines will result in | | | more VFS routines. | :meth:`VFSFile.excepthook` being called with| | | | them. | +----------------------------------------------+--------------------------------+---------------------------------------------+ | | SQLite returns | | | | :const:`SQLITE_FULL` to APSW | | +----------------------------------------------+--------------------------------+---------------------------------------------+ | APSW returns :class:`apsw.FullError` | | | +----------------------------------------------+--------------------------------+---------------------------------------------+ */ /* Naming convention prefixes. Since sqlite3.c is #included alongside this file we have to ensure there is no clash with its names. There are two objects - the VFS itself and a VFSFile as returned from xOpen. For each there are both C and Python methods. The C methods are what SQLite calls and effectively turns a C call into a Python call. The Python methods turn a Python call into the C call of the (SQLite C) object we are inheriting from and wouldn't be necessary if we didn't implement the inheritance feature. 
Methods: apswvfs_ sqlite3_vfs* functions https://sqlite.org/c3ref/vfs.html apswvfspy_ Python implementations of those same functions apswvfsfile_ io methods https://sqlite.org/c3ref/io_methods.html apswvfsfilepy_ Python implementations of those same functions Structures: APSWVFS Python object for vfs (sqlite3_vfs * is used for sqlite object) APSWVFSType Type object for above APSWVFSFile Python object for vfs file APSWVFSFileType Type object for above APSWSQLite3File sqlite object for vfs file ("subclass" of sqlite3_file) */ /* what error code do we do for not implemented? */ #define VFSNOTIMPLEMENTED(x, v) \ if(!self->basevfs || self->basevfs->iVersion<v || !self->basevfs->x) \ { return PyErr_Format(ExcVFSNotImplemented, "VFSNotImplementedError: Method " #x " is not implemented"); } #define VFSFILENOTIMPLEMENTED(x,v) \ if(!self->base || self->base->pMethods->iVersion<v || !self->base->pMethods->x) \ { return PyErr_Format(ExcVFSNotImplemented, "VFSNotImplementedError: File method " #x " is not implemented"); } /* various checks */ #define CHECKVFS \ assert(vfs->pAppData); #define CHECKVFSPY \ assert(self->containingvfs->pAppData==self) #define CHECKVFSFILE \ assert(apswfile->file); #define CHECKVFSFILEPY \ if(!self->base) { return PyErr_Format(ExcVFSFileClosed, "VFSFileClosed: Attempting operation on closed file"); } #define VFSPREAMBLE \ PyObject *etype, *eval, *etb; \ PyGILState_STATE gilstate; \ gilstate=PyGILState_Ensure(); \ PyErr_Fetch(&etype, &eval, &etb); \ CHECKVFS; #define VFSPOSTAMBLE \ if(PyErr_Occurred()) \ apsw_write_unraiseable((PyObject*)(vfs->pAppData)); \ PyErr_Restore(etype, eval, etb); \ PyGILState_Release(gilstate); #define FILEPREAMBLE \ APSWSQLite3File *apswfile=(APSWSQLite3File*)(void*)file; \ PyObject *etype, *eval, *etb; \ PyGILState_STATE gilstate; \ gilstate=PyGILState_Ensure(); \ PyErr_Fetch(&etype, &eval, &etb); \ CHECKVFSFILE; #define FILEPOSTAMBLE \ if(PyErr_Occurred()) \ apsw_write_unraiseable(apswfile->file); \ PyErr_Restore(etype, eval, etb); \
PyGILState_Release(gilstate); typedef struct { PyObject_HEAD sqlite3_vfs *basevfs; /* who we inherit from (might be null) */ sqlite3_vfs *containingvfs; /* pointer given to sqlite for this instance */ int registered; /* are we currently registered? */ } APSWVFS; static PyTypeObject APSWVFSType; typedef struct /* inherits */ { const struct sqlite3_io_methods *pMethods; /* structure sqlite needs */ PyObject *file; } APSWSQLite3File; /* this is only used if there is inheritance */ typedef struct { PyObject_HEAD struct sqlite3_file *base; char *filename; /* obtained from fullpathname - has to be around for lifetime of base */ int filenamefree; /* filename should be freed on close */ /* If you add any new members then also initialize them in apswvfspy_xOpen() as that function does not call init because it has values already */ } APSWVFSFile; static PyTypeObject APSWVFSFileType; static PyTypeObject APSWURIFilenameType; static const struct sqlite3_io_methods apsw_io_methods_v1; static const struct sqlite3_io_methods apsw_io_methods_v2; typedef struct { PyObject_HEAD char *filename; } APSWURIFilename; /** .. class:: VFS Provides operating system access. You can get an overview in the `SQLite documentation `_. To create a VFS your Python class must inherit from :class:`VFS`. */ /** .. method:: excepthook(etype, evalue, etraceback) Called when there has been an exception in a :class:`VFS` routine. The default implementation calls ``sys.excepthook`` and if that fails then ``PyErr_Display``. The three arguments correspond to what ``sys.exc_info()`` would return. :param etype: The exception type :param evalue: The exception value :param etraceback: The exception traceback. Note this includes all frames all the way up to the thread being started. */ /* This function only needs to call sys.excepthook. 
If things mess up then whoever called us will fallback on PyErr_Display etc */ static PyObject* apswvfs_excepthook(APSW_ARGUNUSED PyObject *donotuseself, PyObject *args) { /* NOTE: do not use the self argument as this function is used for both apswvfs and apswvfsfile. If you need to use self then make two versions of the function. */ PyObject *excepthook; excepthook=PySys_GetObject("excepthook"); /* NB borrowed reference */ if(!excepthook) return NULL; return PyEval_CallObject(excepthook, args); } static int apswvfs_xDelete(sqlite3_vfs *vfs, const char *zName, int syncDir) { PyObject *pyresult=NULL; int result=SQLITE_OK; VFSPREAMBLE; pyresult=Call_PythonMethodV((PyObject*)(vfs->pAppData), "xDelete", 1, "(Ni)", convertutf8string(zName), syncDir); if(!pyresult) { result=MakeSqliteMsgFromPyException(NULL); if(result==SQLITE_IOERR_DELETE_NOENT) PyErr_Clear(); else AddTraceBackHere(__FILE__, __LINE__, "vfs.xDelete", "{s: s, s: i}", "zName", zName, "syncDir", syncDir); } VFSPOSTAMBLE; return result; } /** .. method:: xDelete(filename, syncdir) Delete the named file. .. note:: SQLite has 3 different behaviours depending on version for how to handle missing files. +----------------------------------------+-------------------------------------------------+ | SQLite < 3.7.8 |Raise an :exc:`IOError` if the file does not | | |exist. | +----------------------------------------+-------------------------------------------------+ | SQLite >= 3.7.8 and SQLite < 3.7.15 |Do not raise an exception | +----------------------------------------+-------------------------------------------------+ | SQLite >= 3.7.15 |Raise an :exc:`IOError` exception with | | |extendedresult :const:`SQLITE_IOERR_DELETE_NOENT`| +----------------------------------------+-------------------------------------------------+ :param filename: File to delete :param syncdir: If True then the directory should be synced ensuring that the file deletion has been recorded on the disk platters. 
ie if there was an immediate power failure after this call returns, on a reboot the file would still be deleted. */ static PyObject * apswvfspy_xDelete(APSWVFS *self, PyObject *args) { char *zName=NULL; int syncDir, res; CHECKVFSPY; VFSNOTIMPLEMENTED(xDelete, 1); if(!PyArg_ParseTuple(args, "esi", STRENCODING, &zName, &syncDir)) return NULL; res=self->basevfs->xDelete(self->basevfs, zName, syncDir); PyMem_Free(zName); if(res==SQLITE_OK) Py_RETURN_NONE; SET_EXC(res, NULL); return NULL; } static int apswvfs_xAccess(sqlite3_vfs *vfs, const char *zName, int flags, int *pResOut) { PyObject *pyresult=NULL; int result=SQLITE_OK; VFSPREAMBLE; pyresult=Call_PythonMethodV((PyObject*)(vfs->pAppData), "xAccess", 1, "(Ni)", convertutf8string(zName), flags); if(!pyresult) goto finally; if(PyIntLong_Check(pyresult)) *pResOut=!!PyIntLong_AsLong(pyresult); else PyErr_Format(PyExc_TypeError, "xAccess should return a number"); finally: if(PyErr_Occurred()) { *pResOut=0; result=MakeSqliteMsgFromPyException(NULL); AddTraceBackHere(__FILE__, __LINE__, "vfs.xAccess", "{s: s, s: i}", "zName", zName, "flags", flags); } VFSPOSTAMBLE; return result; } /** .. method:: xAccess(pathname, flags) -> bool SQLite wants to check access permissions. Return True or False accordingly. 
:param pathname: File or directory to check :param flags: One of the `access flags `_ */ static PyObject * apswvfspy_xAccess(APSWVFS *self, PyObject *args) { char *zName=NULL; int res, flags, resout=0; CHECKVFSPY; VFSNOTIMPLEMENTED(xAccess, 1); if(!PyArg_ParseTuple(args, "esi", STRENCODING, &zName, &flags)) return NULL; res=self->basevfs->xAccess(self->basevfs, zName, flags, &resout); PyMem_Free(zName); if(res==SQLITE_OK) { if(resout) Py_RETURN_TRUE; Py_RETURN_FALSE; } SET_EXC(res, NULL); return NULL; } static int apswvfs_xFullPathname(sqlite3_vfs *vfs, const char *zName, int nOut, char *zOut) { PyObject *pyresult=NULL, *utf8=NULL; int result=SQLITE_OK; VFSPREAMBLE; pyresult=Call_PythonMethodV((PyObject*)(vfs->pAppData), "xFullPathname", 1, "(N)", convertutf8string(zName)); if(!pyresult) { result=MakeSqliteMsgFromPyException(NULL); AddTraceBackHere(__FILE__, __LINE__, "vfs.xFullPathname", "{s: s, s: i}", "zName", zName, "nOut", nOut); } else { utf8=getutf8string(pyresult); if(!utf8) { result=SQLITE_ERROR; AddTraceBackHere(__FILE__, __LINE__, "vfs.xFullPathname", "{s: s, s: O}", "zName", zName, "result_from_python", pyresult); goto finally; } /* nOut includes null terminator space (ie is mxPathname+1) */ if(PyBytes_GET_SIZE(utf8)+1>nOut) { result=SQLITE_TOOBIG; SET_EXC(result, NULL); AddTraceBackHere(__FILE__, __LINE__, "vfs.xFullPathname", "{s: s, s: O, s: i}", "zName", zName, "result_from_python", utf8, "nOut", nOut); goto finally; } memcpy(zOut, PyBytes_AS_STRING(utf8), PyBytes_GET_SIZE(utf8)+1); /* Python always null terminates hence +1 */ } finally: Py_XDECREF(utf8); Py_XDECREF(pyresult); VFSPOSTAMBLE; return result; } /** .. method:: xFullPathname(name) -> string Return the absolute pathname for name. You can use ``os.path.abspath`` to do this. 
*/ static PyObject * apswvfspy_xFullPathname(APSWVFS *self, PyObject *name) { char *resbuf=NULL; PyObject *result=NULL, *utf8=NULL; int res=SQLITE_NOMEM; CHECKVFSPY; VFSNOTIMPLEMENTED(xFullPathname, 1); utf8=getutf8string(name); if(!utf8) { AddTraceBackHere(__FILE__, __LINE__, "vfspy.xFullPathname", "{s: O}", "name", name); goto finally; } resbuf=PyMem_Malloc(self->basevfs->mxPathname+1); if(resbuf) { memset(resbuf, 0, self->basevfs->mxPathname+1); /* make sure it is null terminated */ res=self->basevfs->xFullPathname(self->basevfs, PyBytes_AsString(utf8), self->basevfs->mxPathname+1, resbuf); } if(res==SQLITE_OK) APSW_FAULT_INJECT(xFullPathnameConversion,result=convertutf8string(resbuf),result=PyErr_NoMemory()); if(!result) res=SQLITE_CANTOPEN; if(res!=SQLITE_OK) { SET_EXC(res, NULL); AddTraceBackHere(__FILE__, __LINE__, "vfspy.xFullPathname", "{s: O, s: i, s: O}", "name", name, "res", res, "result", result?result:Py_None); } finally: Py_XDECREF(utf8); if(resbuf) PyMem_Free(resbuf); return result; } static int apswvfs_xOpen(sqlite3_vfs *vfs, const char *zName, sqlite3_file *file, int inflags, int *pOutFlags) { int result=SQLITE_CANTOPEN; PyObject *flags=NULL; PyObject *pyresult=NULL; APSWSQLite3File *apswfile=(APSWSQLite3File*)(void*)file; /* how we pass the name */ PyObject *nameobject; VFSPREAMBLE; flags=PyList_New(2); if(!flags) goto finally; PyList_SET_ITEM(flags, 0, PyInt_FromLong(inflags)); PyList_SET_ITEM(flags, 1, PyInt_FromLong(pOutFlags?*pOutFlags:0)); if(PyErr_Occurred()) goto finally; if(inflags & (SQLITE_OPEN_URI|SQLITE_OPEN_MAIN_DB)) { nameobject=PyObject_New(PyObject, &APSWURIFilenameType); if(nameobject) ((APSWURIFilename*)nameobject)->filename=(char*)zName; } else nameobject=convertutf8string(zName); pyresult=Call_PythonMethodV((PyObject*)(vfs->pAppData), "xOpen", 1, "(NO)", nameobject, flags); if(!pyresult) { result=MakeSqliteMsgFromPyException(NULL); goto finally; } if(!PyList_Check(flags) || PyList_GET_SIZE(flags)!=2 ||
!PyIntLong_Check(PyList_GET_ITEM(flags, 1))) { PyErr_Format(PyExc_TypeError, "Flags should be two item list with item zero being integer input and item one being integer output"); AddTraceBackHere(__FILE__, __LINE__, "vfs.xOpen", "{s: s, s: i, s: i}", "zName", zName, "inflags", inflags, "flags", flags); goto finally; } if(pOutFlags) *pOutFlags=(int)PyIntLong_AsLong(PyList_GET_ITEM(flags, 1)); if(PyErr_Occurred()) goto finally; /* If we are inheriting from another file object, and that file object supports version 2 io_methods (Shm* family of functions) then we need to allocate an io_methods dupe of our own and fill in their shm methods. */ if(Py_TYPE(pyresult)==&APSWVFSFileType) { APSWVFSFile *f=(APSWVFSFile*)pyresult; if(!f->base || !f->base->pMethods || f->base->pMethods->iVersion==1 || !f->base->pMethods->xShmMap) goto version1; apswfile->pMethods=&apsw_io_methods_v2; } else { version1: apswfile->pMethods=&apsw_io_methods_v1; } apswfile->file=pyresult; pyresult=NULL; result=SQLITE_OK; finally: assert(PyErr_Occurred()?result!=SQLITE_OK:1); Py_XDECREF(pyresult); Py_XDECREF(flags); VFSPOSTAMBLE; return result; } /** .. method:: xOpen(name, flags) -> VFSFile or similar object This method should return a new file object based on name. You can return a :class:`VFSFile` from a completely different VFS. :param name: File to open. Note that *name* may be :const:`None` in which case you should open a temporary file with a name of your choosing. May be an instance of :class:`URIFilename`. :param flags: A list of two integers ``[inputflags, outputflags]``. Each integer is one or more of the `open flags <https://sqlite.org/c3ref/c_open_autoproxy.html>`_ binary orred together. The ``inputflags`` tells you what SQLite wants. For example :const:`SQLITE_OPEN_DELETEONCLOSE` means the file should be automatically deleted when closed. The ``outputflags`` describes how you actually did open the file. For example if you opened it read only then :const:`SQLITE_OPEN_READONLY` should be set.
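The flags list is mutated in place. As an illustration of the convention, here is a helper that downgrades a read/write request to read-only (the flag values are from ``sqlite3.h``; a real ``xOpen`` would pass the adjusted input flags to the inherited open and let it fill in the output slot):

```python
SQLITE_OPEN_READONLY = 0x1    # values from sqlite3.h
SQLITE_OPEN_READWRITE = 0x2
SQLITE_OPEN_CREATE = 0x4

def force_readonly(flags):
    # flags is the two item [inputflags, outputflags] list given to xOpen
    inflags = flags[0]
    if inflags & SQLITE_OPEN_READWRITE:
        inflags = (inflags & ~(SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE)) \
                  | SQLITE_OPEN_READONLY
    flags[0] = inflags
    flags[1] = inflags    # record how the file was actually opened
    return flags
```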
*/ static PyObject * apswvfspy_xOpen(APSWVFS *self, PyObject *args) { sqlite3_file *file=NULL; int flagsout=0; int flagsin=0; int res; PyObject *result=NULL, *flags; PyObject *pyname=NULL, *utf8name=NULL; APSWVFSFile *apswfile=NULL; char *filename=NULL; CHECKVFSPY; VFSNOTIMPLEMENTED(xOpen, 1); if(!PyArg_ParseTuple(args, "OO", &pyname, &flags)) return NULL; if(pyname==Py_None) { filename=NULL; } else if(pyname->ob_type==&APSWURIFilenameType) { filename=((APSWURIFilename*)pyname)->filename; } else { size_t len; utf8name=getutf8string(pyname); if(!utf8name) goto finally; len=strlen(PyBytes_AS_STRING(utf8name)); APSW_FAULT_INJECT(vfspyopen_fullpathnamemallocfailed, filename=PyMem_Malloc(len+3), filename=(char*)PyErr_NoMemory()); if(!filename) goto finally; strcpy(filename, PyBytes_AS_STRING(utf8name)); /* ensure extra null padding for URI params */ filename[len]=filename[len+1]=filename[len+2]=0; } if(!PyList_Check(flags) || PyList_GET_SIZE(flags)!=2 || !PyIntLong_Check(PyList_GET_ITEM(flags, 0)) || !PyIntLong_Check(PyList_GET_ITEM(flags, 1))) { PyErr_Format(PyExc_TypeError, "Flags argument needs to be a list of two integers"); goto finally; } flagsout=PyIntLong_AsLong(PyList_GET_ITEM(flags, 1)); flagsin=PyIntLong_AsLong(PyList_GET_ITEM(flags, 0)); /* check for overflow */ if(flagsout!=PyIntLong_AsLong(PyList_GET_ITEM(flags, 1)) || flagsin!=PyIntLong_AsLong(PyList_GET_ITEM(flags, 0))) PyErr_Format(PyExc_OverflowError, "Flags arguments need to fit in 32 bits"); if(PyErr_Occurred()) goto finally; file=PyMem_Malloc(self->basevfs->szOsFile); if(!file) goto finally; res=self->basevfs->xOpen(self->basevfs, filename, file, flagsin, &flagsout); if(PyErr_Occurred()) goto finally; if(res!=SQLITE_OK) { SET_EXC(res, NULL); goto finally; } PyList_SetItem(flags, 1, PyInt_FromLong(flagsout)); if(PyErr_Occurred()) goto finally; apswfile=PyObject_New(APSWVFSFile, &APSWVFSFileType); if(!apswfile) goto finally; apswfile->base=file; apswfile->filename=filename; 
apswfile->filenamefree=!!utf8name; filename=NULL; file=NULL; result=(PyObject*)(void*)apswfile; finally: if(file) PyMem_Free(file); if(utf8name && filename) PyMem_Free(filename); Py_XDECREF(utf8name); return result; } static void* apswvfs_xDlOpen(sqlite3_vfs *vfs, const char *zName) { PyObject *pyresult=NULL; void *result=NULL; VFSPREAMBLE; pyresult=Call_PythonMethodV((PyObject*)(vfs->pAppData), "xDlOpen", 1, "(N)", convertutf8string(zName)); if(pyresult) { if(PyIntLong_Check(pyresult)) result=PyLong_AsVoidPtr(pyresult); else PyErr_Format(PyExc_TypeError, "Pointer returned must be int/long"); } if(PyErr_Occurred()) { result=NULL; AddTraceBackHere(__FILE__, __LINE__, "vfs.xDlOpen", "{s: s, s: O}", "zName", zName, "result", pyresult?pyresult:Py_None); } Py_XDECREF(pyresult); VFSPOSTAMBLE; return result; } /** .. method:: xDlOpen(filename) -> number Load the shared library. You should return a number which will be treated as a void pointer at the C level. On error you should return 0 (NULL). The number is passed as is to :meth:`~VFS.xDlSym`/:meth:`~VFS.xDlClose` so it can represent anything that is convenient for you (eg an index into an array). 
You can use ctypes to load a library:: def xDlOpen(name): return ctypes.cdll.LoadLibrary(name)._handle */ static PyObject * apswvfspy_xDlOpen(APSWVFS *self, PyObject *args) { char *zName=NULL; void *res; CHECKVFSPY; VFSNOTIMPLEMENTED(xDlOpen, 1); if(!PyArg_ParseTuple(args, "es", STRENCODING, &zName)) return NULL; res=self->basevfs->xDlOpen(self->basevfs, zName); PyMem_Free(zName); return PyLong_FromVoidPtr(res); } static void (*apswvfs_xDlSym(sqlite3_vfs *vfs, void *handle, const char *zName))(void) { PyObject *pyresult=NULL; void *result=NULL; VFSPREAMBLE; pyresult=Call_PythonMethodV((PyObject*)(vfs->pAppData), "xDlSym", 1, "(NN)", PyLong_FromVoidPtr(handle), convertutf8string(zName)); if(pyresult) { if(PyIntLong_Check(pyresult)) result=PyLong_AsVoidPtr(pyresult); else PyErr_Format(PyExc_TypeError, "Pointer returned must be int/long"); } if(PyErr_Occurred()) { result=NULL; AddTraceBackHere(__FILE__, __LINE__, "vfs.xDlSym", "{s: s, s: O}", "zName", zName, "result", pyresult?pyresult:Py_None); } Py_XDECREF(pyresult); VFSPOSTAMBLE; return result; } /** .. method:: xDlSym(handle, symbol) -> address Returns the address of the named symbol which will be called by SQLite. On error you should return 0 (NULL). 
You can use ctypes:: def xDlSym(ptr, name): return _ctypes.dlsym (ptr, name) # Linux/Unix/Mac etc (note leading underscore) return ctypes.win32.kernel32.GetProcAddress (ptr, name) # Windows :param handle: The value returned from an earlier :meth:`~VFS.xDlOpen` call :param symbol: A string :rtype: An int/long with the symbol address */ static PyObject * apswvfspy_xDlSym(APSWVFS *self, PyObject *args) { char *zName=NULL; void *res=NULL; PyObject *pyptr; void *ptr=NULL; CHECKVFSPY; VFSNOTIMPLEMENTED(xDlSym, 1); if(!PyArg_ParseTuple(args, "Oes", &pyptr, STRENCODING, &zName)) return NULL; if(PyIntLong_Check(pyptr)) ptr=PyLong_AsVoidPtr(pyptr); else PyErr_Format(PyExc_TypeError, "Pointer must be int/long"); if(PyErr_Occurred()) goto finally; res=self->basevfs->xDlSym(self->basevfs, ptr, zName); finally: PyMem_Free(zName); if(PyErr_Occurred()) { AddTraceBackHere(__FILE__, __LINE__, "vfspy.xDlSym", "{s: O}", "args", args); return NULL; } return PyLong_FromVoidPtr(res); } static void apswvfs_xDlClose(sqlite3_vfs *vfs, void *handle) { PyObject *pyresult=NULL; VFSPREAMBLE; pyresult=Call_PythonMethodV((PyObject*)(vfs->pAppData), "xDlClose", 1, "(N)", PyLong_FromVoidPtr(handle)); if(PyErr_Occurred()) AddTraceBackHere(__FILE__, __LINE__, "vfs.xDlClose", "{s: N}", "ptr", PyLong_FromVoidPtr(handle)); Py_XDECREF(pyresult); VFSPOSTAMBLE; } /** .. method:: xDlClose(handle) Close and unload the library corresponding to the handle you returned from :meth:`~VFS.xDlOpen`. 
You can use ctypes to do this:: def xDlClose(handle): # Note leading underscore in _ctypes _ctypes.dlclose(handle) # Linux/Mac/Unix _ctypes.FreeLibrary(handle) # Windows */ static PyObject * apswvfspy_xDlClose(APSWVFS *self, PyObject *pyptr) { void *ptr=NULL; CHECKVFSPY; VFSNOTIMPLEMENTED(xDlClose, 1); if(PyIntLong_Check(pyptr)) ptr=PyLong_AsVoidPtr(pyptr); else PyErr_Format(PyExc_TypeError, "Argument is not number (pointer)"); if(PyErr_Occurred()) goto finally; self->basevfs->xDlClose(self->basevfs, ptr); finally: if(PyErr_Occurred()) { AddTraceBackHere(__FILE__, __LINE__, "vfspy.xDlClose", "{s: O}", "ptr", pyptr); return NULL; } Py_RETURN_NONE; } static void apswvfs_xDlError(sqlite3_vfs *vfs, int nByte, char *zErrMsg) { PyObject *pyresult=NULL, *utf8=NULL; VFSPREAMBLE; pyresult=Call_PythonMethodV((PyObject*)(vfs->pAppData), "xDlError", 0, "()"); if(pyresult && pyresult!=Py_None) { utf8=getutf8string(pyresult); if(utf8) { /* Get size includes trailing null */ size_t len=PyBytes_GET_SIZE(utf8); if(len>(size_t)nByte) len=(size_t)nByte; memcpy(zErrMsg, PyBytes_AS_STRING(utf8), len); } } if(PyErr_Occurred()) AddTraceBackHere(__FILE__, __LINE__, "vfs.xDlError", NULL); Py_XDECREF(pyresult); Py_XDECREF(utf8); VFSPOSTAMBLE; } /** .. method:: xDlError() -> string Return an error string describing the last error of :meth:`~VFS.xDlOpen` or :meth:`~VFS.xDlSym` (ie they returned zero/NULL). If you do not supply this routine then SQLite provides a generic message. To implement this method, catch exceptions in :meth:`~VFS.xDlOpen` or :meth:`~VFS.xDlSym`, turn them into strings, save them, and return them in this routine. If you have an error in this routine or return None then SQLite's generic message will be used. 
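Putting the advice above together, here is a sketch that catches load failures in ``xDlOpen`` and reports them via ``xDlError`` (``LibraryLoader`` is an illustrative class; on Windows you would use the ctypes Windows calls shown earlier):

```python
import ctypes

class LibraryLoader:
    def __init__(self):
        self.lasterror = None

    def xDlOpen(self, name):
        try:
            return ctypes.CDLL(name)._handle
        except OSError as exc:
            self.lasterror = str(exc)   # saved for xDlError
            return 0                    # zero/NULL signals failure

    def xDlError(self):
        # returning None lets SQLite use its generic message
        return self.lasterror
```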
*/ static PyObject * apswvfspy_xDlError(APSWVFS *self) { PyObject *res=NULL; PyObject *unicode=NULL; CHECKVFSPY; VFSNOTIMPLEMENTED(xDlError, 1); APSW_FAULT_INJECT(xDlErrorAllocFail, res=PyBytes_FromStringAndSize(NULL, 512+self->basevfs->mxPathname), res=PyErr_NoMemory()); if(res) { memset(PyBytes_AS_STRING(res), 0, PyBytes_GET_SIZE(res)); self->basevfs->xDlError(self->basevfs, PyBytes_GET_SIZE(res), PyBytes_AS_STRING(res)); } if(PyErr_Occurred()) { AddTraceBackHere(__FILE__, __LINE__, "vfspy.xDlError", NULL); Py_XDECREF(res); return NULL; } /* did they make a message? */ if(strlen(PyBytes_AS_STRING(res))==0) { Py_DECREF(res); Py_RETURN_NONE; } /* turn into unicode */ APSW_FAULT_INJECT(xDlErrorUnicodeFail, unicode=convertutf8string(PyBytes_AS_STRING(res)), unicode=PyErr_NoMemory()); if(unicode) { Py_DECREF(res); return unicode; } AddTraceBackHere(__FILE__, __LINE__, "vfspy.xDlError", "{s: O, s: N}", "self", self, "res", PyBytes_FromStringAndSize(PyBytes_AS_STRING(res), strlen(PyBytes_AS_STRING(res)))); Py_DECREF(res); return NULL; } static int apswvfs_xRandomness(sqlite3_vfs *vfs, int nByte, char *zOut) { PyObject *pyresult=NULL; int result=0; VFSPREAMBLE; pyresult=Call_PythonMethodV((PyObject*)(vfs->pAppData), "xRandomness", 1, "(i)", nByte); if(pyresult && PyUnicode_Check(pyresult)) PyErr_Format(PyExc_TypeError, "Randomness object must be data/bytes not unicode"); else if(pyresult && pyresult!=Py_None) { const void *buffer; Py_ssize_t buflen; int asrb=PyObject_AsReadBuffer(pyresult, &buffer, &buflen); if(asrb==0) { if(buflen>nByte) buflen=nByte; memcpy(zOut, buffer, buflen); result=buflen; } else assert(PyErr_Occurred()); } if(PyErr_Occurred()) AddTraceBackHere(__FILE__, __LINE__, "vfs.xRandomness", "{s: i, s: O}", "nByte", nByte, "result", pyresult?pyresult:Py_None); Py_XDECREF(pyresult); VFSPOSTAMBLE; return result; } /** .. method:: xRandomness(numbytes) -> bytes This method is called once when SQLite needs to seed the random number generator. 
It is called on the default VFS only. It is not called again, even across :meth:`apsw.shutdown` calls. You can return fewer bytes than requested, including None. If you return more then the surplus is ignored. :rtype: (Python 2) string, buffer (Python 3) bytes, buffer */ static PyObject * apswvfspy_xRandomness(APSWVFS *self, PyObject *args) { PyObject *res=NULL; int nbyte=0; CHECKVFSPY; VFSNOTIMPLEMENTED(xRandomness, 1); if(!PyArg_ParseTuple(args, "i", &nbyte)) return NULL; if(nbyte<0) return PyErr_Format(PyExc_ValueError, "You can't have negative amounts of randomness!"); APSW_FAULT_INJECT(xRandomnessAllocFail, res=PyBytes_FromStringAndSize(NULL, nbyte), res=PyErr_NoMemory()); if(res) { int amt=self->basevfs->xRandomness(self->basevfs, PyBytes_GET_SIZE(res), PyBytes_AS_STRING(res)); if(amt<nbyte) _PyBytes_Resize(&res, amt); } if(PyErr_Occurred()) { AddTraceBackHere(__FILE__, __LINE__, "vfspy.xRandomness", NULL); Py_XDECREF(res); return NULL; } return res; } static int apswvfs_xSleep(sqlite3_vfs *vfs, int microseconds) { PyObject *pyresult=NULL; int result=0; VFSPREAMBLE; pyresult=Call_PythonMethodV((PyObject*)(vfs->pAppData), "xSleep", 1, "(i)", microseconds); if(pyresult) { if(PyIntLong_Check(pyresult)) { long actual=PyIntLong_AsLong(pyresult); if(actual!=(int)actual) PyErr_Format(PyExc_OverflowError, "Result is too big for integer"); result=actual; } else PyErr_Format(PyExc_TypeError, "You should return a number from sleep"); } if(PyErr_Occurred()) AddTraceBackHere(__FILE__, __LINE__, "vfs.xSleep", "{s: i, s: O}", "microseconds", microseconds, "result", pyresult?pyresult:Py_None); Py_XDECREF(pyresult); VFSPOSTAMBLE; return result; } /** .. method:: xSleep(microseconds) -> integer Pause execution of the thread for at least the specified number of microseconds (millionths of a second). This routine is typically called from the busy handler. :returns: How many microseconds you actually requested the operating system to sleep for. For example, if your operating system's sleep call only works in whole seconds then you would round the microseconds up to the nearest second and should return that rounded-up value.
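Minimal sketches of the two methods documented above, assuming a platform where ``os.urandom`` and ``time.sleep`` are available; these illustrate the contracts, not APSW's own implementations:

```python
import os
import time

def xRandomness(numbytes):
    """Hand SQLite entropy from the OS. Returning fewer bytes than
    asked for (or None) is acceptable; surplus bytes are ignored."""
    return os.urandom(numbytes)

def xSleep(microseconds):
    """time.sleep() takes float seconds, so no rounding up is needed;
    return the number of microseconds actually requested."""
    time.sleep(microseconds / 1000000.0)
    return microseconds
```

If your sleep primitive had only whole-second resolution you would round up and return the rounded value instead.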
*/ static PyObject * apswvfspy_xSleep(APSWVFS *self, PyObject *args) { int microseconds=0; CHECKVFSPY; VFSNOTIMPLEMENTED(xSleep, 1); if(!PyArg_ParseTuple(args, "i", &microseconds)) return NULL; return PyLong_FromLong(self->basevfs->xSleep(self->basevfs, microseconds)); } static int apswvfs_xCurrentTime(sqlite3_vfs *vfs, double *julian) { PyObject *pyresult=NULL; int result=0; VFSPREAMBLE; pyresult=Call_PythonMethodV((PyObject*)(vfs->pAppData), "xCurrentTime", 1, "()"); if(pyresult) *julian=PyFloat_AsDouble(pyresult); if(PyErr_Occurred()) { AddTraceBackHere(__FILE__, __LINE__, "vfs.xCurrentTime", "{s: O}", "result", pyresult?pyresult:Py_None); result=1; } Py_XDECREF(pyresult); VFSPOSTAMBLE; return result; } /** .. method:: xCurrentTime() -> float Return the Julian Day Number as a floating point number where the integer portion is the day and the fractional part is the time. Do not adjust for timezone (ie use UTC). */ static PyObject * apswvfspy_xCurrentTime(APSWVFS *self) { int res; double julian=0; CHECKVFSPY; VFSNOTIMPLEMENTED(xCurrentTime, 1); res=self->basevfs->xCurrentTime(self->basevfs, &julian); APSW_FAULT_INJECT(xCurrentTimeFail, ,res=1); if(res!=0) { SET_EXC(SQLITE_ERROR, NULL); /* general sqlite error code */ AddTraceBackHere(__FILE__, __LINE__, "vfspy.xCurrentTime", NULL); return NULL; } return PyFloat_FromDouble(julian); } static int apswvfs_xGetLastError(sqlite3_vfs *vfs, int nByte, char *zErrMsg) { PyObject *pyresult=NULL, *utf8=NULL; int buffertoosmall=0; VFSPREAMBLE; pyresult=Call_PythonMethodV((PyObject*)(vfs->pAppData), "xGetLastError", 0, "()"); if(pyresult && pyresult!=Py_None) { utf8=getutf8string(pyresult); if(utf8) { /* Get size includes trailing null */ size_t len=PyBytes_GET_SIZE(utf8); if(len>(size_t)nByte) { len=(size_t)nByte; buffertoosmall=1; } memcpy(zErrMsg, PyBytes_AS_STRING(utf8), len); } } if(PyErr_Occurred()) AddTraceBackHere(__FILE__, __LINE__, "vfs.xGetLastError", NULL); Py_XDECREF(pyresult); Py_XDECREF(utf8); VFSPOSTAMBLE;
return buffertoosmall; } /** .. method:: xGetLastError() -> string This method is to return text describing the last error that happened in this thread. If not implemented SQLite's more generic message is used. However the method is :cvstrac:`never called <3337>` by SQLite. */ static PyObject * apswvfspy_xGetLastError(APSWVFS *self) { PyObject *res=NULL; int toobig=1; Py_ssize_t size=256; /* start small */ CHECKVFSPY; VFSNOTIMPLEMENTED(xGetLastError, 1); res=PyBytes_FromStringAndSize(NULL, size); if(!res) goto error; while(toobig) { int resizeresult; memset(PyBytes_AS_STRING(res), 0, PyBytes_GET_SIZE(res)); toobig=self->basevfs->xGetLastError(self->basevfs, PyBytes_GET_SIZE(res), PyBytes_AS_STRING(res)); if(!toobig) break; size*=2; /* double size and try again */ APSW_FAULT_INJECT(xGetLastErrorAllocFail, resizeresult=_PyBytes_Resize(&res, size), resizeresult=(PyErr_NoMemory(), -1)); if(resizeresult!=0) goto error; } /* did they make a message? */ if(strlen(PyBytes_AS_STRING(res))==0) { Py_XDECREF(res); Py_RETURN_NONE; } _PyBytes_Resize(&res, strlen(PyBytes_AS_STRING(res))); return res; error: assert(PyErr_Occurred()); AddTraceBackHere(__FILE__, __LINE__, "vfspy.xGetLastError", "{s: O, s: i}", "self", self, "size", (int)size); Py_XDECREF(res); return NULL; } static int apswvfs_xSetSystemCall(sqlite3_vfs *vfs, const char *zName, sqlite3_syscall_ptr call) { int res=SQLITE_OK; PyObject *pyresult=NULL; VFSPREAMBLE; pyresult=Call_PythonMethodV((PyObject*)(vfs->pAppData), "xSetSystemCall", 1, "(NN)", convertutf8string(zName), PyLong_FromVoidPtr(call)); if(!pyresult) res=MakeSqliteMsgFromPyException(NULL); if(res==SQLITE_NOTFOUND) PyErr_Clear(); if(PyErr_Occurred()) AddTraceBackHere(__FILE__, __LINE__, "vfs.xSetSystemCall", "{s: O}", "pyresult", pyresult); Py_XDECREF(pyresult); VFSPOSTAMBLE; return res; } /** .. method:: xSetSystemCall(name, pointer) -> bool Change a system call used by the VFS. This is useful for testing and some other scenarios such as sandboxing. 
:param name: The string name of the system call :param pointer: A pointer provided as an int/long. There is no reference counting or other memory tracking of the pointer. If you provide one you need to ensure it is around for the lifetime of this and any other related VFS. Raise an exception to return an error. If the system call does not exist then raise :exc:`NotFoundError`. :returns: True if the system call was set. False if the system call is not known. */ static PyObject * apswvfspy_xSetSystemCall(APSWVFS *self, PyObject *args) { const char *name=0; PyObject *pyptr; void *ptr=NULL; int res=-7; /* initialization to stop compiler whining */ CHECKVFSPY; VFSNOTIMPLEMENTED(xSetSystemCall, 3); if(!PyArg_ParseTuple(args, "zO", &name, &pyptr)) return NULL; if(PyIntLong_Check(pyptr)) ptr=PyLong_AsVoidPtr(pyptr); else PyErr_Format(PyExc_TypeError, "Pointer must be int/long"); if(PyErr_Occurred()) goto finally; res=self->basevfs->xSetSystemCall(self->basevfs, name, ptr); if(res!=SQLITE_OK && res!=SQLITE_NOTFOUND) SET_EXC(res, NULL); finally: if(PyErr_Occurred()) { AddTraceBackHere(__FILE__, __LINE__, "vfspy.xSetSystemCall", "{s: O, s: i}", "args", args, "res", res); return NULL; } assert(res==SQLITE_OK || res==SQLITE_NOTFOUND); if(res==SQLITE_OK) Py_RETURN_TRUE; Py_RETURN_FALSE; } static sqlite3_syscall_ptr apswvfs_xGetSystemCall(sqlite3_vfs *vfs, const char *zName) { sqlite3_syscall_ptr ptr=NULL; PyObject *pyresult=NULL; VFSPREAMBLE; pyresult=Call_PythonMethodV((PyObject*)(vfs->pAppData), "xGetSystemCall", 1, "(N)", convertutf8string(zName)); if(!pyresult) goto finally; if(PyIntLong_Check(pyresult)) ptr=PyLong_AsVoidPtr(pyresult); else PyErr_Format(PyExc_TypeError, "Pointer must be int/long"); if(PyErr_Occurred()) AddTraceBackHere(__FILE__, __LINE__, "vfs.xGetSystemCall", "{s:O}", "pyresult", pyresult); finally: Py_XDECREF(pyresult); VFSPOSTAMBLE; return ptr; } /** .. 
method:: xGetSystemCall(name) -> int Returns a pointer for the current method implementing the named system call. Return None if the call does not exist. */ static PyObject* apswvfspy_xGetSystemCall(APSWVFS *self, PyObject *args) { const char *name; sqlite3_syscall_ptr ptr; CHECKVFSPY; VFSNOTIMPLEMENTED(xGetSystemCall, 3); if(!PyArg_ParseTuple(args, "es", STRENCODING, &name)) return NULL; ptr=self->basevfs->xGetSystemCall(self->basevfs, name); PyMem_Free((void*)name); if (ptr) return PyLong_FromVoidPtr(ptr); Py_RETURN_NONE; } static const char* apswvfs_xNextSystemCall(sqlite3_vfs *vfs, const char *zName) { PyObject *pyresult=NULL; PyObject *utf8=NULL; const char *res=NULL; VFSPREAMBLE; pyresult=Call_PythonMethodV((PyObject*)(vfs->pAppData), "xNextSystemCall", 1, "(N)", zName?convertutf8string(zName):(Py_INCREF(Py_None),Py_None)); if(pyresult && pyresult!=Py_None) { if(PyUnicode_CheckExact(pyresult) #if PY_MAJOR_VERSION<3 || PyString_CheckExact(pyresult) #endif ) { utf8=getutf8string(pyresult); if(utf8) /* note this deliberately leaks memory due to SQLite semantics */ res=sqlite3_mprintf("%s", PyBytes_AsString(utf8)); else { assert (PyErr_Occurred()) ; } } else PyErr_Format(PyExc_TypeError, "You must return a string or None"); } if(PyErr_Occurred()) AddTraceBackHere(__FILE__, __LINE__, "vfs.xNextSystemCall", "{s:O}", "pyresult", pyresult); Py_XDECREF(pyresult); Py_XDECREF(utf8); VFSPOSTAMBLE; return res; } /** .. method:: xNextSystemCall(name) -> String or None This method is repeatedly called to iterate over all of the system calls in the vfs. When called with None you should return the name of the first system call. In subsequent calls return the name after the one passed in. If name is the last system call then return None. .. note:: Because of internal SQLite implementation semantics memory will be leaked on each call to this function. Consequently you should build up the list of call names once rather than repeatedly doing it. 
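Because of the leak noted above, a caller should walk the system-call names once and cache the result. The iteration protocol can be sketched as below; ``FakeVFS`` is a hypothetical stand-in used only to demonstrate the protocol:

```python
def system_call_names(vfs):
    """Collect every system-call name by repeatedly calling
    xNextSystemCall, starting from None and stopping at None."""
    names = []
    name = vfs.xNextSystemCall(None)
    while name is not None:
        names.append(name)
        name = vfs.xNextSystemCall(name)
    return names

class FakeVFS:
    """Hypothetical object implementing only the iteration protocol."""
    _calls = ["open", "read", "write"]

    def xNextSystemCall(self, name):
        if name is None:
            return self._calls[0]           # first call name
        i = self._calls.index(name)
        # name after the one passed in, or None at the end
        return self._calls[i + 1] if i + 1 < len(self._calls) else None
```

With a real APSW VFS object you would call ``system_call_names(vfs)`` once and keep the list.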
*/ static PyObject * apswvfspy_xNextSystemCall(APSWVFS *self, PyObject *name) { const char *zName=NULL; PyObject *res=NULL; PyObject *utf8=NULL; CHECKVFSPY; VFSNOTIMPLEMENTED(xNextSystemCall, 3); if(name!=Py_None) { if(PyUnicode_CheckExact(name) #if PY_MAJOR_VERSION<3 || PyString_CheckExact(name) #endif ) { utf8=getutf8string(name); } else PyErr_Format(PyExc_TypeError, "You must provide a string or None"); } if(PyErr_Occurred()) goto finally; zName=self->basevfs->xNextSystemCall(self->basevfs, utf8?PyBytes_AsString(utf8):NULL); if(zName) res=convertutf8string(zName); else { Py_INCREF(Py_None); res=Py_None; } finally: if(PyErr_Occurred()) AddTraceBackHere(__FILE__, __LINE__, "vfspy.xNextSystemCall", "{s:O}", "name", name); Py_XDECREF(utf8); return res; } /** .. method:: unregister() Unregisters the VFS making it unavailable to future database opens. You do not need to call this as the VFS is automatically unregistered when it has no more references or open databases using it. It is however useful to call if you have made your VFS the default and wish to immediately make it unavailable. It is safe to call this routine multiple times. -* sqlite3_vfs_unregister */ static PyObject * apswvfspy_unregister(APSWVFS *self) { int res; CHECKVFSPY; if(self->registered) { /* although it is undocumented by sqlite, we assume that an unregister failure always results in an unregister and so continue freeing the data structures. we memset everything to zero so there will be a coredump should this behaviour change. as of 3.6.3 the sqlite code doesn't return anything except ok anyway.
*/ res=sqlite3_vfs_unregister(self->containingvfs); self->registered=0; APSW_FAULT_INJECT(APSWVFSDeallocFail, ,res=SQLITE_IOERR); SET_EXC(res, NULL); if(res!=SQLITE_OK) return NULL; } Py_RETURN_NONE; } static void APSWVFS_dealloc(APSWVFS *self) { if(self->basevfs && self->basevfs->xAccess==apswvfs_xAccess) { Py_DECREF((PyObject*)self->basevfs->pAppData); } if(self->containingvfs) { PyObject *xx; /* not allowed to clobber existing exception */ PyObject *etype=NULL, *evalue=NULL, *etraceback=NULL; PyErr_Fetch(&etype, &evalue, &etraceback); xx=apswvfspy_unregister(self); Py_XDECREF(xx); if(PyErr_Occurred()) apsw_write_unraiseable(NULL); PyErr_Restore(etype, evalue, etraceback); /* some cleanups */ self->containingvfs->pAppData=NULL; PyMem_Free((void*)(self->containingvfs->zName)); /* zero it out so any attempt to use results in core dump */ memset(self->containingvfs, 0, sizeof(sqlite3_vfs)); PyMem_Free(self->containingvfs); self->containingvfs=NULL; } self->basevfs=self->containingvfs=NULL; Py_TYPE(self)->tp_free((PyObject*)self); } static PyObject * APSWVFS_new(PyTypeObject *type, APSW_ARGUNUSED PyObject *args, APSW_ARGUNUSED PyObject *kwds) { APSWVFS *self; self= (APSWVFS*)type->tp_alloc(type, 0); if(self) { self->basevfs=NULL; self->containingvfs=NULL; self->registered=0; } return (PyObject*)self; } /** .. method:: __init__(name[, base=None, makedefault=False, maxpathname=1024]) :param name: The name to register this vfs under. If the name already exists then this vfs will replace the prior one of the same name. Use :meth:`apsw.vfsnames` to get a list of registered vfs names. :param base: If you would like to inherit behaviour from an already registered vfs then give its name. To inherit from the default vfs, use a zero length string ``""`` as the name. :param makedefault: If true then this vfs will be registered as the default, and will be used by any opens that don't specify a vfs.
:param maxpathname: The maximum length of database name in bytes when represented in UTF-8. If a pathname is passed in longer than this value then SQLite will not be able to open it. :raises ValueError: If *base* is not :const:`None` and the named vfs is not currently registered. -* sqlite3_vfs_register sqlite3_vfs_find */ static int APSWVFS_init(APSWVFS *self, PyObject *args, PyObject *kwds) { static char *kwlist[]={"name", "base", "makedefault", "maxpathname", NULL}; char *base=NULL, *name=NULL; int makedefault=0, maxpathname=0, res; if(!PyArg_ParseTupleAndKeywords(args, kwds, "es|esii:init(name, base=None, makedefault=False, maxpathname=1024)", kwlist, STRENCODING, &name, STRENCODING, &base, &makedefault, &maxpathname)) return -1; if(base) { int baseversion; if(!strlen(base)) { PyMem_Free(base); base=NULL; } self->basevfs=sqlite3_vfs_find(base); if(!self->basevfs) { PyErr_Format(PyExc_ValueError, "Base vfs named \"%s\" not found", base?base:""); goto error; } baseversion=self->basevfs->iVersion; APSW_FAULT_INJECT(APSWVFSBadVersion, , baseversion=-789426); if(baseversion<1 || baseversion>3) { PyErr_Format(PyExc_ValueError, "Base vfs implements version %d of vfs spec, but apsw only supports versions 1, 2 and 3", baseversion); goto error; } if(base) PyMem_Free(base); } self->containingvfs=(sqlite3_vfs *)PyMem_Malloc(sizeof(sqlite3_vfs)); if(!self->containingvfs) return -1; memset(self->containingvfs, 0, sizeof(sqlite3_vfs)); self->containingvfs->iVersion=3; self->containingvfs->szOsFile=sizeof(APSWSQLite3File); if(self->basevfs && !maxpathname) self->containingvfs->mxPathname=self->basevfs->mxPathname; else self->containingvfs->mxPathname=maxpathname?maxpathname:1024; self->containingvfs->zName=name; name=NULL; self->containingvfs->pAppData=self; #define METHOD(meth) \ self->containingvfs->x##meth=apswvfs_x##meth; METHOD(Delete); METHOD(FullPathname); METHOD(Open); METHOD(Access); METHOD(DlOpen); METHOD(DlSym); METHOD(DlClose); METHOD(DlError);
METHOD(Randomness); METHOD(Sleep); METHOD(CurrentTime); METHOD(GetLastError); /* The VFS2 method is not particularly useful */ METHOD(SetSystemCall); METHOD(GetSystemCall); METHOD(NextSystemCall); #undef METHOD APSW_FAULT_INJECT(APSWVFSRegistrationFails, res=sqlite3_vfs_register(self->containingvfs, makedefault), res=SQLITE_NOMEM); if(res==SQLITE_OK) { self->registered=1; if(self->basevfs && self->basevfs->xAccess==apswvfs_xAccess) { Py_INCREF((PyObject*)self->basevfs->pAppData); } return 0; } SET_EXC(res, NULL); error: if(name) PyMem_Free(name); if(base) PyMem_Free(base); if(self->containingvfs && self->containingvfs->zName) PyMem_Free((void*)(self->containingvfs->zName)); if(self->containingvfs) PyMem_Free(self->containingvfs); self->containingvfs=NULL; return -1; } static PyMethodDef APSWVFS_methods[]={ {"xDelete", (PyCFunction)apswvfspy_xDelete, METH_VARARGS, "xDelete"}, {"xFullPathname", (PyCFunction)apswvfspy_xFullPathname, METH_O, "xFullPathname"}, {"xOpen", (PyCFunction)apswvfspy_xOpen, METH_VARARGS, "xOpen"}, {"xAccess", (PyCFunction)apswvfspy_xAccess, METH_VARARGS, "xAccess"}, {"xDlOpen", (PyCFunction)apswvfspy_xDlOpen, METH_VARARGS, "xDlOpen"}, {"xDlSym", (PyCFunction)apswvfspy_xDlSym, METH_VARARGS, "xDlSym"}, {"xDlClose", (PyCFunction)apswvfspy_xDlClose, METH_O, "xDlClose"}, {"xDlError", (PyCFunction)apswvfspy_xDlError, METH_NOARGS, "xDlError"}, {"xRandomness", (PyCFunction)apswvfspy_xRandomness, METH_VARARGS, "xRandomness"}, {"xSleep", (PyCFunction)apswvfspy_xSleep, METH_VARARGS, "xSleep"}, {"xCurrentTime", (PyCFunction)apswvfspy_xCurrentTime, METH_NOARGS, "xCurrentTime"}, {"xGetLastError", (PyCFunction)apswvfspy_xGetLastError, METH_NOARGS, "xGetLastError"}, {"xSetSystemCall", (PyCFunction)apswvfspy_xSetSystemCall, METH_VARARGS, "xSetSystemCall"}, {"xGetSystemCall", (PyCFunction)apswvfspy_xGetSystemCall, METH_VARARGS, "xGetSystemCall"}, {"xNextSystemCall", (PyCFunction)apswvfspy_xNextSystemCall, METH_O, "xNextSystemCall"}, {"unregister", 
(PyCFunction)apswvfspy_unregister, METH_NOARGS, "Unregisters the vfs"}, {"excepthook", (PyCFunction)apswvfs_excepthook, METH_VARARGS, "Exception hook"}, /* Sentinel */ {0, 0, 0, 0} }; static PyTypeObject APSWVFSType = { APSW_PYTYPE_INIT "apsw.VFS", /*tp_name*/ sizeof(APSWVFS), /*tp_basicsize*/ 0, /*tp_itemsize*/ (destructor)APSWVFS_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_VERSION_TAG, /*tp_flags*/ "VFS object", /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ APSWVFS_methods, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ (initproc)APSWVFS_init, /* tp_init */ 0, /* tp_alloc */ APSWVFS_new, /* tp_new */ 0, /* tp_free */ 0, /* tp_is_gc */ 0, /* tp_bases */ 0, /* tp_mro */ 0, /* tp_cache */ 0, /* tp_subclasses */ 0, /* tp_weaklist */ 0 /* tp_del */ APSW_PYTYPE_VERSION }; /** .. class:: VFSFile Wraps access to a file. You only need to derive from this class if you want the file object returned from :meth:`VFS.xOpen` to inherit from an existing VFS implementation. .. note:: All file sizes and offsets are 64 bit quantities even on 32 bit operating systems. */ /** .. method:: excepthook(etype, evalue, etraceback) Called when there has been an exception in a :class:`VFSFile` routine. The default implementation calls ``sys.excepthook`` and if that fails then ``PyErr_Display``. The three arguments correspond to what ``sys.exc_info()`` would return. :param etype: The exception type :param evalue: The exception value :param etraceback: The exception traceback. 
Note this includes all frames all the way up to the thread being started. */ static PyObject *apswvfsfilepy_xClose(APSWVFSFile *self); static void APSWVFSFile_dealloc(APSWVFSFile *self) { PyObject *a,*b,*c; PyErr_Fetch(&a, &b, &c); if(self->base) { /* close it */ PyObject *x=apswvfsfilepy_xClose(self); Py_XDECREF(x); } if(self->filenamefree) PyMem_Free(self->filename); if(PyErr_Occurred()) { AddTraceBackHere(__FILE__, __LINE__, "APSWVFS File destructor", NULL); apsw_write_unraiseable(NULL); } Py_TYPE(self)->tp_free((PyObject*)self); PyErr_Restore(a,b,c); } /*ARGSUSED*/ static PyObject * APSWVFSFile_new(PyTypeObject *type, APSW_ARGUNUSED PyObject *args, APSW_ARGUNUSED PyObject *kwds) { APSWVFSFile *self; self= (APSWVFSFile*)type->tp_alloc(type, 0); if(self) { self->base=NULL; self->filename=NULL; } return (PyObject*)self; } /** .. method:: __init__(vfs, name, flags) :param vfs: The vfs you want to inherit behaviour from. You can use an empty string ``""`` to inherit from the default vfs. :param name: The name of the file being opened. May be an instance of :class:`URIFilename`. :param flags: A list of two integers ``[inflags, outflags]`` as detailed in :meth:`VFS.xOpen`. :raises ValueError: If the named VFS is not registered. .. note:: If the VFS that you inherit from supports write ahead logging then your :class:`VFSFile` will also support the xShm methods necessary to implement wal. ..
seealso:: :meth:`VFS.xOpen` */ static int APSWVFSFile_init(APSWVFSFile *self, PyObject *args, PyObject *kwds) { static char *kwlist[]={"vfs", "name", "flags", NULL}; char *vfs=NULL; PyObject *flags=NULL, *pyname=NULL, *utf8name=NULL; int xopenresult; int flagsout=0; long flagsin; int res=-1; /* error */ PyObject *itemzero=NULL, *itemone=NULL, *zero=NULL, *pyflagsout=NULL; sqlite3_vfs *vfstouse=NULL; sqlite3_file *file=NULL; if(!PyArg_ParseTupleAndKeywords(args, kwds, "esOO:init(vfs, name, flags)", kwlist, STRENCODING, &vfs, &pyname, &flags)) return -1; self->filenamefree=0; if(pyname==Py_None) { self->filename=NULL; } else if(pyname->ob_type==&APSWURIFilenameType) { self->filename=((APSWURIFilename*)pyname)->filename; } else { size_t len; utf8name=getutf8string(pyname); if(!utf8name) goto finally; len=strlen(PyBytes_AS_STRING(utf8name)); APSW_FAULT_INJECT(vfspyopen_fullpathnamemallocfailed, self->filename=PyMem_Malloc(len+3), self->filename=(char*)PyErr_NoMemory()); if(!self->filename) goto finally; strcpy(self->filename, PyBytes_AS_STRING(utf8name)); /* ensure extra null padding for URI params */ self->filename[len]=self->filename[len+1]=self->filename[len+2]=0; self->filenamefree=1; } /* type checking */ if(strlen(vfs)==0) { /* sqlite uses null for default vfs - we use empty string */ PyMem_Free(vfs); vfs=NULL; } /* flags need to be a list of two integers */ if(!PySequence_Check(flags) || PySequence_Size(flags)!=2) { PyErr_Format(PyExc_TypeError, "Flags should be a sequence of two integers"); goto finally; } itemzero=PySequence_GetItem(flags, 0); itemone=PySequence_GetItem(flags, 1); if(!itemzero || !itemone || !PyIntLong_Check(itemzero) || !PyIntLong_Check(itemone)) { PyErr_Format(PyExc_TypeError, "Flags should contain two integers"); goto finally; } /* check we can change item 1 */ zero=PyInt_FromLong(0); if(!zero) goto finally; if(-1==PySequence_SetItem(flags, 1, zero)) goto finally; flagsin=PyIntLong_AsLong(itemzero); if(flagsin!=(int)flagsin) { 
PyErr_Format(PyExc_OverflowError, "flags[0] is too big!"); AddTraceBackHere(__FILE__, __LINE__, "VFSFile.__init__", "{s: O}", "flags", flags); } if(PyErr_Occurred()) goto finally; vfstouse=sqlite3_vfs_find(vfs); if(!vfstouse) { PyErr_Format(PyExc_ValueError, "Unknown vfs \"%s\"", vfs); goto finally; } file=PyMem_Malloc(vfstouse->szOsFile); if(!file) goto finally; xopenresult=vfstouse->xOpen(vfstouse, self->filename, file, (int)flagsin, &flagsout); SET_EXC(xopenresult, NULL); if(PyErr_Occurred()) { /* just in case the result was ok, but there was a python level exception ... */ if(xopenresult==SQLITE_OK) file->pMethods->xClose(file); goto finally; } pyflagsout=PyInt_FromLong(flagsout); if(-1==PySequence_SetItem(flags, 1, pyflagsout)) { file->pMethods->xClose(file); goto finally; } if(PyErr_Occurred()) goto finally; self->base=(sqlite3_file*)(void*)file; res=0; finally: assert(res==0 || PyErr_Occurred()); if(PyErr_Occurred()) AddTraceBackHere(__FILE__, __LINE__, "vfsfile.init", "{s: O, s: O}", "args", args, "kwargs", kwds); Py_XDECREF(pyflagsout); Py_XDECREF(itemzero); Py_XDECREF(itemone); Py_XDECREF(zero); Py_XDECREF(utf8name); if(res!=0 && file) PyMem_Free(file); if(vfs) PyMem_Free(vfs); return res; } static int apswvfsfile_xRead(sqlite3_file *file, void *bufout, int amount, sqlite3_int64 offset) { int result=SQLITE_ERROR; PyObject *pybuf=NULL; int asrb; Py_ssize_t size; const void *buffer; FILEPREAMBLE; pybuf=Call_PythonMethodV(apswfile->file, "xRead", 1, "(iL)", amount, offset); if(!pybuf) { assert(PyErr_Occurred()); result=MakeSqliteMsgFromPyException(NULL); goto finally; } if(PyUnicode_Check(pybuf) || !PyObject_CheckReadBuffer(pybuf)) { PyErr_Format(PyExc_TypeError, "Object returned from xRead should be bytes/buffer/string"); goto finally; } asrb=PyObject_AsReadBuffer(pybuf, &buffer, &size); APSW_FAULT_INJECT(xReadReadBufferFail,,(PyErr_NoMemory(),asrb=-1)); if(asrb!=0) { PyErr_Format(PyExc_TypeError, "Object returned from xRead doesn't do read buffer"); goto 
finally; } if(size<amount) { result=SQLITE_IOERR_SHORT_READ; memset(bufout, 0, amount); memcpy(bufout, buffer, size); } else { result=SQLITE_OK; memcpy(bufout, buffer, amount); } finally: Py_XDECREF(pybuf); FILEPOSTAMBLE; return result; } /** .. method:: xRead(amount, offset) -> bytes Read the specified *amount* of data starting at *offset*. You should make every effort to read all the data requested, or return an error. If you have the file open for non-blocking I/O or if signals happen then it is possible for the underlying operating system to do a partial read. You will need to request the remaining data. Except for empty files SQLite considers short reads to be a fatal error. :param amount: Number of bytes to read :param offset: Where to start reading. This number may be 64 bit once the database is larger than 2GB. :rtype: (Python 2) string, buffer. (Python 3) bytes, buffer */ static PyObject * apswvfsfilepy_xRead(APSWVFSFile *self, PyObject *args) { int amount; sqlite3_int64 offset; int res; PyObject *buffy=NULL; CHECKVFSFILEPY; VFSFILENOTIMPLEMENTED(xRead, 1); if(!PyArg_ParseTuple(args, "iL", &amount, &offset)) { assert(PyErr_Occurred()); return NULL; } buffy=PyBytes_FromStringAndSize(NULL, amount); if(!buffy) return NULL; res=self->base->pMethods->xRead(self->base, PyBytes_AS_STRING(buffy), amount, offset); if(res==SQLITE_OK) return buffy; if(res==SQLITE_IOERR_SHORT_READ) { /* We don't know how short the read was, so look for first non-trailing null byte. See https://sqlite.org/cvstrac/chngview?cn=5867 */ while(amount && PyBytes_AS_STRING(buffy)[amount-1]==0) amount--; _PyBytes_Resize(&buffy, amount); return buffy; } Py_DECREF(buffy); SET_EXC(res, NULL); return NULL; } static int apswvfsfile_xWrite(sqlite3_file *file, const void *buffer, int amount, sqlite3_int64 offset) { PyObject *pyresult=NULL, *pybuf=NULL; int result=SQLITE_OK; FILEPREAMBLE; /* I could instead use PyBuffer_New here which avoids duplicating the memory. But if the developer keeps a reference on it then the underlying memory goes away on return of this function and all hell would break loose on next access. It is very unlikely someone would hang on to them but I'd rather there not be any possibility of problems.
In any event the data sizes are usually very small - typically the SQLite default page size of 1kb */ pybuf=PyBytes_FromStringAndSize(buffer, amount); if(!pybuf) goto finally; pyresult=Call_PythonMethodV(apswfile->file, "xWrite", 1, "(OL)", pybuf, offset); finally: if(PyErr_Occurred()) { result=MakeSqliteMsgFromPyException(NULL); AddTraceBackHere(__FILE__, __LINE__, "apswvfsfile_xWrite", "{s: i, s: L, s: O}", "amount", amount, "offset", offset, "data", pybuf?pybuf:Py_None); } Py_XDECREF(pybuf); Py_XDECREF(pyresult); FILEPOSTAMBLE; return result; } /** .. method:: xWrite(data, offset) Write the *data* starting at absolute *offset*. You must write all the data requested, or return an error. If you have the file open for non-blocking I/O or if signals happen then it is possible for the underlying operating system to do a partial write. You will need to write the remaining data. :param offset: Where to start writing. This number may be 64 bit once the database is larger than 2GB. :param data: (Python 2) string, (Python 3) bytes */ static PyObject * apswvfsfilepy_xWrite(APSWVFSFile *self, PyObject *args) { sqlite3_int64 offset; int res; PyObject *buffy=NULL; const void *buffer; Py_ssize_t size; int asrb; CHECKVFSFILEPY; VFSFILENOTIMPLEMENTED(xWrite, 1); if(!PyArg_ParseTuple(args, "OL", &buffy, &offset)) { assert(PyErr_Occurred()); return NULL; } asrb=PyObject_AsReadBuffer(buffy, &buffer, &size); if(asrb!=0 || PyUnicode_Check(buffy)) { PyErr_Format(PyExc_TypeError, "Object passed to xWrite doesn't do read buffer"); AddTraceBackHere(__FILE__, __LINE__, "apswvfsfile_xWrite", "{s: L, s: O}", "offset", offset, "buffer", buffy); return NULL; } res=self->base->pMethods->xWrite(self->base, buffer, size, offset); if(res==SQLITE_OK) Py_RETURN_NONE; SET_EXC(res, NULL); return NULL; } static int apswvfsfile_xUnlock(sqlite3_file *file, int flag) { int result=SQLITE_ERROR; PyObject *pyresult=NULL; FILEPREAMBLE; pyresult=Call_PythonMethodV(apswfile->file, "xUnlock", 1, "(i)", flag); 
if(!pyresult) result=MakeSqliteMsgFromPyException(NULL); else result=SQLITE_OK; if(PyErr_Occurred()) AddTraceBackHere(__FILE__, __LINE__, "apswvfsfile.xUnlock", "{s: i}", "flag", flag); Py_XDECREF(pyresult); FILEPOSTAMBLE; return result; } /** .. method:: xUnlock(level) Decrease the lock to the level specified which is one of the SQLITE_LOCK family of constants. */ static PyObject * apswvfsfilepy_xUnlock(APSWVFSFile *self, PyObject *args) { int flag, res; CHECKVFSFILEPY; VFSFILENOTIMPLEMENTED(xUnlock, 1); if(!PyArg_ParseTuple(args, "i", &flag)) return NULL; res=self->base->pMethods->xUnlock(self->base, flag); APSW_FAULT_INJECT(xUnlockFails,,res=SQLITE_IOERR); if(res==SQLITE_OK) Py_RETURN_NONE; SET_EXC(res, NULL); return NULL; } static int apswvfsfile_xLock(sqlite3_file *file, int flag) { int result=SQLITE_ERROR; PyObject *pyresult=NULL; FILEPREAMBLE; pyresult=Call_PythonMethodV(apswfile->file, "xLock", 1, "(i)", flag); if(!pyresult) { result=MakeSqliteMsgFromPyException(NULL); /* a busy exception is normal so we clear it */ if(SQLITE_BUSY==(result&0xff)) PyErr_Clear(); } else result=SQLITE_OK; Py_XDECREF(pyresult); if(PyErr_Occurred()) AddTraceBackHere(__FILE__, __LINE__, "apswvfsfile.xLock", "{s: i}", "level", flag); FILEPOSTAMBLE; return result; } /** .. method:: xLock(level) Increase the lock to the level specified which is one of the SQLITE_LOCK family of constants. If you can't increase the lock level because someone else has locked it, then raise :exc:`BusyError`.
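The lock levels form a ladder: xLock only ever moves up it and xUnlock only ever moves down. The constant values below match SQLITE_LOCK_* in sqlite3.h; ``LockTracker`` is a hypothetical sketch that only sanity-checks the direction of each transition (a real implementation would also raise :exc:`BusyError` on contention):

```python
# Lock level constants, as defined in sqlite3.h
SQLITE_LOCK_NONE = 0
SQLITE_LOCK_SHARED = 1
SQLITE_LOCK_RESERVED = 2
SQLITE_LOCK_PENDING = 3
SQLITE_LOCK_EXCLUSIVE = 4

class LockTracker:
    """Hypothetical sketch of the xLock/xUnlock contract."""

    def __init__(self):
        self.level = SQLITE_LOCK_NONE

    def xLock(self, level):
        if level < self.level:
            raise ValueError("xLock never lowers the lock level")
        self.level = level

    def xUnlock(self, level):
        if level > self.level:
            raise ValueError("xUnlock never raises the lock level")
        self.level = level
```

A derived VFSFile would forward to the inherited implementation after (or instead of) this bookkeeping.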
*/ static PyObject * apswvfsfilepy_xLock(APSWVFSFile *self, PyObject *args) { int flag, res; CHECKVFSFILEPY; VFSFILENOTIMPLEMENTED(xLock, 1); if(!PyArg_ParseTuple(args, "i", &flag)) return NULL; res=self->base->pMethods->xLock(self->base, flag); if(res==SQLITE_OK) Py_RETURN_NONE; SET_EXC(res, NULL); return NULL; } static int apswvfsfile_xTruncate(sqlite3_file *file, sqlite3_int64 size) { int result=SQLITE_ERROR; PyObject *pyresult=NULL; FILEPREAMBLE; pyresult=Call_PythonMethodV(apswfile->file, "xTruncate", 1, "(L)", size); if(!pyresult) result=MakeSqliteMsgFromPyException(NULL); else result=SQLITE_OK; Py_XDECREF(pyresult); if(PyErr_Occurred()) AddTraceBackHere(__FILE__, __LINE__, "apswvfsfile.xTruncate", "{s: L}", "size", size); FILEPOSTAMBLE; return result; } /** .. method:: xTruncate(newsize) Set the file length to *newsize* (which may be more or less than the current length). */ static PyObject * apswvfsfilepy_xTruncate(APSWVFSFile *self, PyObject *args) { int res; sqlite3_int64 size; CHECKVFSFILEPY; VFSFILENOTIMPLEMENTED(xTruncate, 1); if(!PyArg_ParseTuple(args, "L", &size)) return NULL; res=self->base->pMethods->xTruncate(self->base, size); if(res==SQLITE_OK) Py_RETURN_NONE; SET_EXC(res, NULL); return NULL; } static int apswvfsfile_xSync(sqlite3_file *file, int flags) { int result=SQLITE_ERROR; PyObject *pyresult=NULL; FILEPREAMBLE; pyresult=Call_PythonMethodV(apswfile->file, "xSync", 1, "(i)", flags); if(!pyresult) result=MakeSqliteMsgFromPyException(NULL); else result=SQLITE_OK; Py_XDECREF(pyresult); if (PyErr_Occurred()) AddTraceBackHere(__FILE__, __LINE__, "apswvfsfile.xSync", "{s: i}", "flags", flags); FILEPOSTAMBLE; return result; } /** .. method:: xSync(flags) Ensure data is on the disk platters (ie could survive a power failure immediately after the call returns) with the `sync flags `_ detailing what needs to be synced. You can sync more than what is requested. 
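A plausible xSync override for a file-backed VFSFile (a sketch, not the APSW implementation) flushes Python's buffers and then asks the operating system to push the data to stable storage; ignoring the flags argument is permitted because you may sync more than requested:

```python
import os
import tempfile

def sync_file(fileobj, flags=None):
    # flags would be SQLITE_SYNC_NORMAL or SQLITE_SYNC_FULL; syncing
    # everything regardless is allowed (you may sync more than asked).
    fileobj.flush()
    os.fsync(fileobj.fileno())

fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"hello")
    sync_file(f)
with open(path, "rb") as f:
    on_disk = f.read()
os.remove(path)
```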
*/ static PyObject * apswvfsfilepy_xSync(APSWVFSFile *self, PyObject *args) { int flags, res; CHECKVFSFILEPY; VFSFILENOTIMPLEMENTED(xSync, 1); if(!PyArg_ParseTuple(args, "i", &flags)) return NULL; res=self->base->pMethods->xSync(self->base, flags); APSW_FAULT_INJECT(xSyncFails, ,res=SQLITE_IOERR); if(res==SQLITE_OK) Py_RETURN_NONE; SET_EXC(res, NULL); return NULL; } static int apswvfsfile_xSectorSize(sqlite3_file *file) { int result=4096; PyObject *pyresult=NULL; FILEPREAMBLE; pyresult=Call_PythonMethodV(apswfile->file, "xSectorSize", 0, "()"); if(!pyresult) result=MakeSqliteMsgFromPyException(NULL); else if(pyresult!=Py_None) { if(PyIntLong_Check(pyresult)) result=PyIntLong_AsLong(pyresult); /* returns -1 on error/overflow */ else PyErr_Format(PyExc_TypeError, "xSectorSize should return a number"); } /* We can't return errors so use unraiseable */ if(PyErr_Occurred()) { AddTraceBackHere(__FILE__, __LINE__, "apswvfsfile_xSectorSize", NULL); result=4096; /* could be -1 as stated above */ } Py_XDECREF(pyresult); FILEPOSTAMBLE; return result; } /** .. method:: xSectorSize() -> int Return the native underlying sector size. SQLite uses the value returned in determining the default database page size. If you do not implement the function or have an error then 4096 (the SQLite default) is returned. 
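One plausible way (an illustrative sketch, not what APSW itself does) for an override to report a real value on POSIX systems is the filesystem block size from ``statvfs``, falling back to SQLite's default of 4096 where that is unavailable:

```python
import os

def sector_size(path="."):
    # f_bsize is the preferred filesystem I/O block size on POSIX;
    # fall back to SQLite's default when statvfs is missing (e.g. Windows).
    try:
        return os.statvfs(path).f_bsize or 4096
    except (AttributeError, OSError):
        return 4096

size = sector_size()
```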
*/ static PyObject * apswvfsfilepy_xSectorSize(APSWVFSFile *self) { int res=4096; CHECKVFSFILEPY; VFSFILENOTIMPLEMENTED(xSectorSize, 1); res=self->base->pMethods->xSectorSize(self->base); return PyInt_FromLong(res); } static int apswvfsfile_xDeviceCharacteristics(sqlite3_file *file) { int result=0; PyObject *pyresult=NULL; FILEPREAMBLE; pyresult=Call_PythonMethodV(apswfile->file, "xDeviceCharacteristics", 0, "()"); if(!pyresult) result=MakeSqliteMsgFromPyException(NULL); else if(pyresult!=Py_None) { if(PyIntLong_Check(pyresult)) result=PyIntLong_AsLong(pyresult); /* sets to -1 on error */ else PyErr_Format(PyExc_TypeError, "xDeviceCharacteristics should return a number"); } /* We can't return errors so use unraiseable */ if(PyErr_Occurred()) { AddTraceBackHere(__FILE__, __LINE__, "apswvfsfile_xDeviceCharacteristics", "{s: O}", "result", pyresult?pyresult:Py_None); result=0; /* harmless value for error cases */ } Py_XDECREF(pyresult); FILEPOSTAMBLE; return result; } /** .. method:: xDeviceCharacteristics() -> int Return `I/O capabilities `_ (bitwise or of appropriate values). If you do not implement the function or have an error then 0 (the SQLite default) is returned. 
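The return value is a bitwise OR of capability flags. A minimal sketch (flag values copied from sqlite3.h; APSW exposes the same constants as ``apsw.SQLITE_IOCAP_*``):

```python
# Values from sqlite3.h (a small subset of the SQLITE_IOCAP_* family)
SQLITE_IOCAP_ATOMIC = 0x0001       # all writes are atomic
SQLITE_IOCAP_SAFE_APPEND = 0x0200  # appends write data before updating size
SQLITE_IOCAP_SEQUENTIAL = 0x0400   # writes happen in the order issued

def xDeviceCharacteristics():
    # claim ordered writes and safe appends, but not atomic writes
    return SQLITE_IOCAP_SAFE_APPEND | SQLITE_IOCAP_SEQUENTIAL

caps = xDeviceCharacteristics()
```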
*/ static PyObject * apswvfsfilepy_xDeviceCharacteristics(APSWVFSFile *self) { int res=0; CHECKVFSFILEPY; VFSFILENOTIMPLEMENTED(xDeviceCharacteristics, 1); res=self->base->pMethods->xDeviceCharacteristics(self->base); return PyInt_FromLong(res); } static int apswvfsfile_xFileSize(sqlite3_file *file, sqlite3_int64 *pSize) { int result=SQLITE_OK; PyObject *pyresult=NULL; FILEPREAMBLE; pyresult=Call_PythonMethodV(apswfile->file, "xFileSize", 1, "()"); if(!pyresult) result=MakeSqliteMsgFromPyException(NULL); else if(PyLong_Check(pyresult)) *pSize=PyLong_AsLongLong(pyresult); else if(PyIntLong_Check(pyresult)) *pSize=PyIntLong_AsLong(pyresult); else PyErr_Format(PyExc_TypeError, "xFileSize should return a number"); if(PyErr_Occurred()) { result=MakeSqliteMsgFromPyException(NULL); AddTraceBackHere(__FILE__, __LINE__, "apswvfsfile_xFileSize", "{s: O}", "result", pyresult?pyresult:Py_None); } Py_XDECREF(pyresult); FILEPOSTAMBLE; return result; } /** .. method:: xFileSize() -> int Return the size of the file in bytes. Remember that file sizes are 64 bit quantities even on 32 bit operating systems. 
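To see that sizes above the 32 bit boundary must round-trip intact, you can truncate a sparse temporary file out past 4GB (sparse, so no disk space is actually consumed; this assumes a filesystem that supports sparse files):

```python
import os
import tempfile

big = 2**32 + 12345  # deliberately larger than any 32 bit quantity
fd, path = tempfile.mkstemp()
try:
    os.ftruncate(fd, big)            # extends the file sparsely
    reported = os.fstat(fd).st_size  # a 64 bit size comes back intact
finally:
    os.close(fd)
    os.remove(path)
```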
*/ static PyObject * apswvfsfilepy_xFileSize(APSWVFSFile *self) { sqlite3_int64 size; int res; CHECKVFSFILEPY; VFSFILENOTIMPLEMENTED(xFileSize, 1); res=self->base->pMethods->xFileSize(self->base, &size); APSW_FAULT_INJECT(xFileSizeFails, ,res=SQLITE_IOERR); if(res!=SQLITE_OK) { SET_EXC(res, NULL); return NULL; } return PyLong_FromLongLong(size); } static int apswvfsfile_xCheckReservedLock(sqlite3_file *file, int *pResOut) { int result=SQLITE_OK; PyObject *pyresult=NULL; FILEPREAMBLE; pyresult=Call_PythonMethodV(apswfile->file, "xCheckReservedLock", 1, "()"); if(!pyresult) result=MakeSqliteMsgFromPyException(NULL); else if(PyIntLong_Check(pyresult)) *pResOut=!!PyIntLong_AsLong(pyresult); else PyErr_Format(PyExc_TypeError, "xCheckReservedLock should return a boolean/number"); if(PyErr_Occurred()) { result=MakeSqliteMsgFromPyException(NULL); AddTraceBackHere(__FILE__, __LINE__, "apswvfsfile_xCheckReservedLock", "{s: O}", "result", pyresult?pyresult:Py_None); } Py_XDECREF(pyresult); FILEPOSTAMBLE; return result; } /** .. method:: xCheckReservedLock() Returns True if any database connection (in this or another process) has a lock other than `SQLITE_LOCK_NONE or SQLITE_LOCK_SHARED `_. 
*/ static PyObject * apswvfsfilepy_xCheckReservedLock(APSWVFSFile *self) { int islocked; int res; CHECKVFSFILEPY; VFSFILENOTIMPLEMENTED(xCheckReservedLock, 1); res=self->base->pMethods->xCheckReservedLock(self->base, &islocked); APSW_FAULT_INJECT(xCheckReservedLockFails,, res=SQLITE_IOERR); if(res!=SQLITE_OK) { SET_EXC(res, NULL); return NULL; } APSW_FAULT_INJECT(xCheckReservedLockIsTrue,,islocked=1); if(islocked) Py_RETURN_TRUE; Py_RETURN_FALSE; } static int apswvfsfile_xFileControl(sqlite3_file *file, int op, void *pArg) { int result=SQLITE_ERROR; PyObject *pyresult=NULL; FILEPREAMBLE; pyresult=Call_PythonMethodV(apswfile->file, "xFileControl", 1, "(iN)", op, PyLong_FromVoidPtr(pArg)); if(!pyresult) result=MakeSqliteMsgFromPyException(NULL); else { if(pyresult!=Py_True && pyresult!=Py_False) { PyErr_Format(PyExc_TypeError, "xFileControl must return True or False"); result=SQLITE_ERROR; } else result=(pyresult==Py_True)?SQLITE_OK:SQLITE_NOTFOUND; } Py_XDECREF(pyresult); FILEPOSTAMBLE; return result; } /** .. method:: xFileControl(op, ptr) -> bool Receives `file control `_ request typically issued by :meth:`Connection.filecontrol`. See :meth:`Connection.filecontrol` for an example of how to pass a Python object to this routine. :param op: A numeric code. Codes below 100 are reserved for SQLite internal use. :param ptr: An integer corresponding to a pointer at the C level. :returns: A boolean indicating if the op was understood As of SQLite 3.6.10, this method is called by SQLite if you have inherited from an underlying VFSFile. Consequently ensure you pass any unrecognised codes through to your super class. 
For example:: def xFileControl(self, op, ptr): if op==1027: process_quick(ptr) elif op==1028: obj=ctypes.py_object.from_address(ptr).value else: # this ensures superclass implementation is called return super(MyFile, self).xFileControl(op, ptr) # we understood the op return True */ static PyObject * apswvfsfilepy_xFileControl(APSWVFSFile *self, PyObject *args) { int op, res=SQLITE_ERROR; PyObject *pyptr; void *ptr=NULL; CHECKVFSFILEPY; VFSFILENOTIMPLEMENTED(xFileControl, 1); if(!PyArg_ParseTuple(args, "iO", &op, &pyptr)) return NULL; if(PyIntLong_Check(pyptr)) ptr=PyLong_AsVoidPtr(pyptr); else PyErr_Format(PyExc_TypeError, "Argument is not number (pointer)"); if(PyErr_Occurred()) goto finally; res=self->base->pMethods->xFileControl(self->base, op, ptr); if(res==SQLITE_OK) Py_RETURN_TRUE; if(res==SQLITE_NOTFOUND) Py_RETURN_FALSE; finally: SET_EXC(res, NULL); return NULL; } static int apswvfsfile_xClose(sqlite3_file *file) { int result=SQLITE_ERROR; PyObject *pyresult=NULL; FILEPREAMBLE; pyresult=Call_PythonMethodV(apswfile->file, "xClose", 1, "()"); if(!pyresult) result=MakeSqliteMsgFromPyException(NULL); else result=SQLITE_OK; if(PyErr_Occurred()) AddTraceBackHere(__FILE__, __LINE__, "apswvfsfile.xClose", NULL); Py_XDECREF(apswfile->file); apswfile->file=NULL; Py_XDECREF(pyresult); FILEPOSTAMBLE; return result; } /** .. method:: xClose() Close the database. Note that even if you return an error you should still close the file. It is safe to call this method multiple times. 
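The idempotent-close pattern can be sketched in Python (an illustrative stand-in, not the APSW implementation): release the underlying handle once, then record that it is gone so repeated calls become no-ops.

```python
class FileLike:
    """Sketch of a file wrapper whose close is safe to call repeatedly."""
    def __init__(self):
        self.handle = object()   # stand-in for a real OS file handle
        self.close_count = 0

    def xClose(self):
        if self.handle is None:  # already closed: nothing to do
            return
        self.handle = None       # drop the handle first so a failure
        self.close_count += 1    # part-way through cannot double-close

f = FileLike()
f.xClose()
f.xClose()  # second call is a harmless no-op
```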
*/ static PyObject * apswvfsfilepy_xClose(APSWVFSFile *self) { int res; if(!self->base) /* already closed */ Py_RETURN_NONE; res=self->base->pMethods->xClose(self->base); APSW_FAULT_INJECT(xCloseFails,, res=SQLITE_IOERR); /* we set pMethods to NULL after xClose callback so xClose can call other operations such as read or write during close */ self->base->pMethods=NULL; PyMem_Free(self->base); self->base=NULL; if(res==SQLITE_OK) Py_RETURN_NONE; SET_EXC(res, NULL); return NULL; } #define APSWPROXYBASE \ APSWSQLite3File *apswfile=(APSWSQLite3File*)(void*)file; \ APSWVFSFile *f=(APSWVFSFile*) (apswfile->file); \ assert(Py_TYPE(f)==&APSWVFSFileType) static int apswproxyxShmLock(sqlite3_file *file, int offset, int n, int flags) { APSWPROXYBASE; return f->base->pMethods->xShmLock(f->base, offset, n, flags); } static int apswproxyxShmMap(sqlite3_file *file, int iPage, int pgsz, int isWrite, void volatile **pp) { APSWPROXYBASE; return f->base->pMethods->xShmMap(f->base, iPage, pgsz, isWrite, pp); } static void apswproxyxShmBarrier(sqlite3_file *file) { APSWPROXYBASE; f->base->pMethods->xShmBarrier(f->base); } static int apswproxyxShmUnmap(sqlite3_file *file, int deleteFlag) { APSWPROXYBASE; return f->base->pMethods->xShmUnmap(f->base, deleteFlag); } static const struct sqlite3_io_methods apsw_io_methods_v1= { 1, /* version */ apswvfsfile_xClose, /* close */ apswvfsfile_xRead, /* read */ apswvfsfile_xWrite, /* write */ apswvfsfile_xTruncate, /* truncate */ apswvfsfile_xSync, /* sync */ apswvfsfile_xFileSize, /* filesize */ apswvfsfile_xLock, /* lock */ apswvfsfile_xUnlock, /* unlock */ apswvfsfile_xCheckReservedLock, /* checkreservedlock */ apswvfsfile_xFileControl, /* filecontrol */ apswvfsfile_xSectorSize, /* sectorsize */ apswvfsfile_xDeviceCharacteristics,/* device characteristics */ 0, /* shmmap */ 0, /* shmlock */ 0, /* shmbarrier */ 0 /* shmunmap */ }; static const struct sqlite3_io_methods apsw_io_methods_v2= { 2, /* version */ apswvfsfile_xClose, /* close */ 
apswvfsfile_xRead, /* read */ apswvfsfile_xWrite, /* write */ apswvfsfile_xTruncate, /* truncate */ apswvfsfile_xSync, /* sync */ apswvfsfile_xFileSize, /* filesize */ apswvfsfile_xLock, /* lock */ apswvfsfile_xUnlock, /* unlock */ apswvfsfile_xCheckReservedLock, /* checkreservedlock */ apswvfsfile_xFileControl, /* filecontrol */ apswvfsfile_xSectorSize, /* sectorsize */ apswvfsfile_xDeviceCharacteristics,/* device characteristics */ apswproxyxShmMap, /* shmmap */ apswproxyxShmLock, /* shmlock */ apswproxyxShmBarrier, /* shmbarrier */ apswproxyxShmUnmap /* shmunmap */ }; static PyMethodDef APSWVFSFile_methods[]={ {"xRead", (PyCFunction)apswvfsfilepy_xRead, METH_VARARGS, "xRead"}, {"xUnlock", (PyCFunction)apswvfsfilepy_xUnlock, METH_VARARGS, "xUnlock"}, {"xLock", (PyCFunction)apswvfsfilepy_xLock, METH_VARARGS, "xLock"}, {"xClose", (PyCFunction)apswvfsfilepy_xClose, METH_NOARGS, "xClose"}, {"xSectorSize", (PyCFunction)apswvfsfilepy_xSectorSize, METH_NOARGS, "xSectorSize"}, {"xFileSize", (PyCFunction)apswvfsfilepy_xFileSize, METH_NOARGS, "xFileSize"}, {"xDeviceCharacteristics", (PyCFunction)apswvfsfilepy_xDeviceCharacteristics, METH_NOARGS, "xDeviceCharacteristics"}, {"xCheckReservedLock", (PyCFunction)apswvfsfilepy_xCheckReservedLock, METH_NOARGS, "xCheckReservedLock"}, {"xWrite", (PyCFunction)apswvfsfilepy_xWrite, METH_VARARGS, "xWrite"}, {"xSync", (PyCFunction)apswvfsfilepy_xSync, METH_VARARGS, "xSync"}, {"xTruncate", (PyCFunction)apswvfsfilepy_xTruncate, METH_VARARGS, "xTruncate"}, {"xFileControl", (PyCFunction)apswvfsfilepy_xFileControl, METH_VARARGS, "xFileControl"}, {"excepthook", (PyCFunction)apswvfs_excepthook, METH_VARARGS, "Exception hook"}, /* Sentinel */ {0, 0, 0, 0} }; static PyTypeObject APSWVFSFileType = { APSW_PYTYPE_INIT "apsw.VFSFile", /*tp_name*/ sizeof(APSWVFSFile), /*tp_basicsize*/ 0, /*tp_itemsize*/ (destructor)APSWVFSFile_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, 
/*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_VERSION_TAG, /*tp_flags*/ "VFSFile object", /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ APSWVFSFile_methods, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ (initproc)APSWVFSFile_init, /* tp_init */ 0, /* tp_alloc */ APSWVFSFile_new, /* tp_new */ 0, /* tp_free */ 0, /* tp_is_gc */ 0, /* tp_bases */ 0, /* tp_mro */ 0, /* tp_cache */ 0, /* tp_subclasses */ 0, /* tp_weaklist */ 0 /* tp_del */ APSW_PYTYPE_VERSION }; /** .. class:: URIFilename SQLite uses a convoluted method of storing `uri parameters `__ after the filename binding the C filename representation and parameters together. This class encapsulates that binding. The :ref:`example ` shows usage of this class. Your :meth:`VFS.xOpen` method will generally be passed one of these instead of a string as the filename if the URI flag was used or the main database flag is set. You can safely pass it on to the :class:`VFSFile` constructor which knows how to get the name back out. */ /** .. method:: filename() -> str Returns the filename. */ static PyObject* apswurifilename_filename(APSWURIFilename *self) { return convertutf8string(self->filename); } /** .. method:: uri_parameter(name) -> str Returns the value of parameter `name` or None. -* sqlite3_uri_parameter */ static PyObject* apswurifilename_uri_parameter(APSWURIFilename *self, PyObject *param) { const char *res; PyObject *asutf8=getutf8string(param); if(!asutf8) return NULL; res=sqlite3_uri_parameter(self->filename, PyBytes_AS_STRING(asutf8)); Py_DECREF(asutf8); return convertutf8string(res); } /** .. 
method:: uri_int(name, default) -> int Returns the integer value for parameter `name` or `default` if not present. -* sqlite3_uri_int64 */ static PyObject* apswurifilename_uri_int(APSWURIFilename *self, PyObject *args) { char *param=NULL; long long res=0; if(!PyArg_ParseTuple(args, "esL", STRENCODING, &param, &res)) return NULL; res=sqlite3_uri_int64(self->filename, param, res); PyMem_Free(param); return PyLong_FromLongLong(res); } /** .. method:: uri_boolean(name, default) -> bool Returns the boolean value for parameter `name` or `default` if not present. -* sqlite3_uri_boolean */ static PyObject* apswurifilename_uri_boolean(APSWURIFilename *self, PyObject *args) { char *param=NULL; int res=0; if(!PyArg_ParseTuple(args, "esi", STRENCODING, &param, &res)) return NULL; res=sqlite3_uri_boolean(self->filename, param, res); PyMem_Free(param); if(res) Py_RETURN_TRUE; Py_RETURN_FALSE; } static PyMethodDef APSWURIFilenameMethods[]={ {"filename", (PyCFunction) apswurifilename_filename, METH_NOARGS, "Get filename"}, {"uri_parameter", (PyCFunction) apswurifilename_uri_parameter, METH_O, "Get URI value"}, {"uri_int", (PyCFunction) apswurifilename_uri_int, METH_VARARGS, "Get URI integer value"}, {"uri_boolean", (PyCFunction) apswurifilename_uri_boolean, METH_VARARGS, "Get URI boolean value"}, /* Sentinel */ {0, 0, 0, 0} }; static PyTypeObject APSWURIFilenameType= { APSW_PYTYPE_INIT "apsw.URIFilename", /*tp_name*/ sizeof(APSWURIFilename), /*tp_basicsize*/ 0, /*tp_itemsize*/ 0, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_VERSION_TAG, /*tp_flags*/ "Filename and URI", /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ 
APSWURIFilenameMethods, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ 0, /* tp_alloc */ 0, /* tp_new */ 0, /* tp_free */ 0, /* tp_is_gc */ 0, /* tp_bases */ 0, /* tp_mro */ 0, /* tp_cache */ 0, /* tp_subclasses */ 0, /* tp_weaklist */ 0 /* tp_del */ APSW_PYTYPE_VERSION }; apsw-3.8.11.1-r1/src/apswbuffer.c /* A lightweight buffer class that works across Python 2 and 3. See the accompanying LICENSE file. */ /* Set to zero to disable buffer object recycling. Even a small amount makes a big difference with diminishing returns based on how many the user program goes through without freeing and the interpreter gc intervals. */ #ifndef AB_NRECYCLE #define AB_NRECYCLE 256 #endif typedef struct APSWBuffer { PyObject_HEAD PyObject *base; /* the pybytes/pystring we are a view of */ const char *data; Py_ssize_t length; long hash; } APSWBuffer; static PyTypeObject APSWBufferType; #define APSWBuffer_Check(x) (Py_TYPE(x)==&APSWBufferType) #define APSWBuffer_AS_STRING(x) ( ((APSWBuffer*)(x)) -> data ) #define APSWBuffer_GET_SIZE(x) ( ((APSWBuffer*)(x)) -> length ) #if AB_NRECYCLE > 0 static APSWBuffer* apswbuffer_recyclelist[AB_NRECYCLE]; static unsigned apswbuffer_nrecycle=0; static void _APSWBuffer_DECREF(PyObject *x) { APSWBuffer *y=(APSWBuffer*)x; assert(APSWBuffer_Check(x)); assert(Py_REFCNT(x)==1); if(apswbuffer_nrecycle<AB_NRECYCLE) { apswbuffer_recyclelist[apswbuffer_nrecycle++]=y; if(y->base) assert(!APSWBuffer_Check(y->base)); Py_XDECREF(y->base); y->base=NULL; } else { Py_DECREF(y); } } #define APSWBuffer_XDECREF(x) \ do { \ if(x) \ { \ if(Py_REFCNT(x)==1) \ { _APSWBuffer_DECREF(x); } \ else \ { Py_DECREF(x); } \ } \ } while(0) /* Profiling of the test suite and speedtest was used to determine which locations were likely to meet the criteria for recycling the buffer object and which wouldn't */ #define APSWBuffer_XDECREF_likely APSWBuffer_XDECREF #define 
APSWBuffer_XDECREF_unlikely Py_XDECREF #ifdef APSW_TESTFIXTURES static void APSWBuffer_fini(void) { while(apswbuffer_nrecycle) { PyObject *p=(PyObject*)apswbuffer_recyclelist[--apswbuffer_nrecycle]; Py_DECREF(p); } } #endif #else #define APSWBuffer_XDECREF_likely Py_XDECREF #define APSWBuffer_XDECREF_unlikely Py_XDECREF #ifdef APSW_TESTFIXTURES static void APSWBuffer_fini(void) { } #endif #endif static long APSWBuffer_hash(APSWBuffer *self) { long hash; unsigned char *p; Py_ssize_t len; if(self->hash!=-1) return self->hash; /* this is the same algorithm as used for Python strings/bytes/buffer except we add one so that there is no hash collision */ p=(unsigned char*)self->data; len=self->length; /* The python implementations all start the hash with the first byte even if the length is zero. This checks there was a zero padding byte there as pystring/pyunicode do anyway */ assert( (len==0)?(*p==0):1 ); hash=*p << 7; while(--len>=0) hash=(1000003*hash) ^ *p++; hash ^= self->length; hash++; /* avoid collision */ /* I tried to find a string that would have a hash of -2 but failed. 
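The hashing loop can be transcribed to Python for illustration (this port is not part of APSW; it masks to 64 bits to mimic C long wraparound on a 64-bit build):

```python
MASK = 2**64 - 1  # simulate a 64-bit C long

def buffer_hash(data: bytes) -> int:
    # the C code reads the NUL padding byte when the buffer is empty
    first = data[0] if data else 0
    h = (first << 7) & MASK
    for byte in data:                  # same 1000003-multiply/xor loop
        h = ((1000003 * h) ^ byte) & MASK
    h ^= len(data)
    h = (h + 1) & MASK                 # +1 avoids colliding with str/bytes hash
    if h == MASK:                      # all-ones is -1: reserved for "uncomputed"
        h = MASK - 1                   # so map it to -2, as the C code does
    return h
```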
*/ if(hash==-1) hash= -2; self->hash=hash; return hash; } static PyObject * APSWBuffer_FromObject(PyObject *base, Py_ssize_t offset, Py_ssize_t length) { APSWBuffer *res=NULL; #if AB_NRECYCLE > 0 if(apswbuffer_nrecycle) { res=apswbuffer_recyclelist[--apswbuffer_nrecycle]; } else #endif { res=PyObject_New(APSWBuffer, &APSWBufferType); if(!res) return NULL; } assert(length>=0); /* the base object can be another apswbuffer */ if(APSWBuffer_Check(base)) { assert(PyBytes_Check(((APSWBuffer*)base)->base)); assert(offset <= APSWBuffer_GET_SIZE(base)); assert(offset+length <= APSWBuffer_GET_SIZE(base)); res->base=((APSWBuffer*)base)->base; Py_INCREF(res->base); res->data=APSWBuffer_AS_STRING(base)+offset; res->length=length; res->hash= -1; return (PyObject*)res; } /* or pybytes/pystring */ assert(PyBytes_Check(base)); assert(offset<=PyBytes_GET_SIZE(base)); assert(offset+length<=PyBytes_GET_SIZE(base)); Py_INCREF(base); res->base=base; res->data=PyBytes_AS_STRING(base)+offset; res->length=length; /* Performance hack. If the bytes/string we are copying from has already calculated a hash then use that rather than recalculating it ourselves. 
*/ res->hash= -1; #ifndef PYPY_VERSION assert(PyBytes_CheckExact(base)); if(offset==0 && length==PyBytes_GET_SIZE(base)) { res->hash=((PyBytesObject*)base)->ob_shash; if(res->hash<-2 || res->hash>-1) res->hash+=1; } #endif #ifndef NDEBUG /* check our conniving performance hack actually worked */ if(res->hash!=-1) { long tmp=res->hash; res->hash= -1; assert(tmp==APSWBuffer_hash(res)); res->hash=tmp; } #endif return (PyObject*)res; } static void APSWBuffer_dealloc(APSWBuffer *self) { if(self->base) assert(!APSWBuffer_Check(self->base)); Py_CLEAR(self->base); Py_TYPE(self)->tp_free((PyObject*)self); } /* Our instances are not publically exposed and we are only compared for dictionary insertion/checking, so take some serious short cuts */ static PyObject * APSWBuffer_richcompare(APSWBuffer *left, APSWBuffer *right, int op) { assert(op==Py_EQ); assert(left->hash!=-1); assert(right->hash!=-1); if(left->hash != right->hash || left->length != right->length) goto notequal; if(left->data == right->data) goto equal; if(0==memcmp(left->data, right->data, left->length)) goto equal; notequal: Py_RETURN_FALSE; equal: Py_RETURN_TRUE; } static PyTypeObject APSWBufferType = { APSW_PYTYPE_INIT "apsw.APSWBuffer", /*tp_name*/ sizeof(APSWBuffer), /*tp_basicsize*/ 0, /*tp_itemsize*/ (destructor)APSWBuffer_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ (hashfunc)APSWBuffer_hash, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_VERSION_TAG, /*tp_flags*/ "APSWStatement object", /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ (richcmpfunc)APSWBuffer_richcompare, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ 0, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get 
*/ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ 0, /* tp_alloc */ 0, /* tp_new */ 0, /* tp_free */ 0, /* tp_is_gc */ 0, /* tp_bases */ 0, /* tp_mro */ 0, /* tp_cache */ 0, /* tp_subclasses */ 0, /* tp_weaklist */ 0 /* tp_del */ APSW_PYTYPE_VERSION }; apsw-3.8.11.1-r1/src/traceback.c0000644000000000000000000000727612565422256014632 0ustar rootroot/* Augment a traceback with dummy stack frames from C so you can tell why the code was called. */ /* This code was originally from the Pyrex project: Copyright (C) 2004-2006 Greg Ewing It has been lightly modified to be a part of APSW with permission from Greg and to be under the same license (and option for any OSI approved license) as the rest of APSW. See the accompanying LICENSE file. */ /* These are python header files */ #include "compile.h" #include "frameobject.h" #include "traceback.h" /* Add a dummy frame to the traceback so the developer has a better idea of what C code was doing @param filename: Use __FILE__ for this - it will be the filename reported in the frame @param lineno: Use __LINE__ for this - it will be the line number reported in the frame @param functionname: Name of the function reported @param localsformat: Format string for Py_BuildValue( that must specify a dictionary or NULL to make an empty dictionary. An example is "{s:i, s: s}" with the varargs then conforming to this format (the corresponding params could be "seven", 7, "foo", "bar" */ static void AddTraceBackHere(const char *filename, int lineno, const char *functionname, const char *localsformat, ...) 
{ PyObject *srcfile=0, *funcname=0, *empty_dict=0, *empty_tuple=0, *empty_string=0, *localargs=0, *empty_code=0; PyCodeObject *code=0; PyFrameObject *frame=0; va_list localargsva; va_start(localargsva, localsformat); assert(PyErr_Occurred()); #if PY_VERSION_HEX<0x03000000 srcfile=PyString_FromString(filename); funcname=PyString_FromString(functionname); #else srcfile=PyUnicode_FromString(filename); funcname=PyUnicode_FromString(functionname); #endif empty_dict=PyDict_New(); empty_tuple=PyTuple_New(0); #if PY_VERSION_HEX<0x03000000 empty_string=PyString_FromString(""); empty_code=PyString_FromString(""); #else empty_string=PyUnicode_FromString(""); empty_code=PyBytes_FromStringAndSize(NULL,0); #endif localargs=localsformat?(Py_VaBuildValue((char *)localsformat, localargsva)):PyDict_New(); if(localsformat) assert(localsformat[0]=='{'); if(localargs) assert(PyDict_Check(localargs)); /* did any fail? */ if (!srcfile || !funcname || !empty_dict || !empty_tuple || !empty_string) goto end; /* make the dummy code object */ code = PyCode_New( 0, /*int argcount,*/ #if PY_VERSION_HEX >= 0x03000000 0, /*int kwonlyargcount*/ #endif 0, /*int nlocals,*/ 0, /*int stacksize,*/ 0, /*int flags,*/ empty_code, /*PyObject *code,*/ empty_tuple, /*PyObject *consts,*/ empty_tuple, /*PyObject *names,*/ empty_tuple, /*PyObject *varnames,*/ empty_tuple, /*PyObject *freevars,*/ empty_tuple, /*PyObject *cellvars,*/ srcfile, /*PyObject *filename,*/ funcname, /*PyObject *name,*/ lineno, /*int firstlineno,*/ empty_code /*PyObject *lnotab*/ ); if (!code) goto end; /* make the dummy frame */ frame=PyFrame_New( PyThreadState_Get(), /*PyThreadState *tstate,*/ code, /*PyCodeObject *code,*/ empty_dict, /*PyObject *globals,*/ localargs /*PyObject *locals*/ ); if(!frame) goto end; /* add dummy frame to traceback */ frame->f_lineno=lineno; PyTraceBack_Here(frame); /* this epilogue deals with success or failure cases */ end: va_end(localargsva); Py_XDECREF(localargs); Py_XDECREF(srcfile); 
Py_XDECREF(funcname); Py_XDECREF(empty_dict); Py_XDECREF(empty_tuple); Py_XDECREF(empty_string); Py_XDECREF(empty_code); Py_XDECREF(code); Py_XDECREF(frame); } apsw-3.8.11.1-r1/src/blob.c /* Blob and ZeroBlob code See the accompanying LICENSE file. */ /** .. _blobio: Blob Input/Output ***************** A `blob `_ is a SQLite `datatype `_ representing a sequence of bytes. It can be zero or more bytes in size. SQLite blobs have an absolute maximum size of 2GB and a `default maximum size `_ of 1GB. An alternate approach to using blobs is to store the data in files and store the filename in the database. Doing so loses the `ACID `_ properties of SQLite. */ /* ZEROBLOB CODE */ /** .. class:: zeroblob(size) If you want to insert a blob into a row, you previously needed to supply the entire blob in one go. To read just one byte also required retrieving the blob in its entirety. For example to insert a 100MB file you would have done:: largedata=open("largefile", "rb").read() cur.execute("insert into foo values(?)", (buffer(largedata),)) SQLite 3.5 allowed for incremental Blob I/O so you can read and write blobs in small amounts. You cannot change the size of a blob so you need to reserve space which you do through zeroblob which creates a blob of the specified size but full of zero bytes. For example you would reserve space for your 100MB one of these two ways:: cur.execute("insert into foo values(zeroblob(100000000))") cur.execute("insert into foo values(?)", (apsw.zeroblob(100000000),)) This class is used for the second way. Once a blob exists in the database, you then use the :class:`blob` class to read and write its contents. 
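The SQL-function form of reserving space can be demonstrated with the standard library sqlite3 module (apsw.zeroblob is the equivalent bound-value form):

```python
import sqlite3

# zeroblob(n) reserves n bytes of zeroes that incremental blob I/O can
# later fill in without changing the blob's size.
con = sqlite3.connect(":memory:")
con.execute("create table foo(x)")
con.execute("insert into foo values(zeroblob(100))")
size, payload = con.execute("select length(x), x from foo").fetchone()
con.close()
```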
*/ typedef struct { PyObject_HEAD int blobsize; } ZeroBlobBind; static PyObject* ZeroBlobBind_new(PyTypeObject *type, APSW_ARGUNUSED PyObject *args, APSW_ARGUNUSED PyObject *kwargs) { ZeroBlobBind *self; self=(ZeroBlobBind*)type->tp_alloc(type, 0); if(self) self->blobsize=0; return (PyObject*)self; } static int ZeroBlobBind_init(ZeroBlobBind *self, PyObject *args, PyObject *kwargs) { int n; if(kwargs && PyDict_Size(kwargs)!=0) { PyErr_Format(PyExc_TypeError, "Zeroblob constructor does not take keyword arguments"); return -1; } if(!PyArg_ParseTuple(args, "i", &n)) return -1; if(n<0) { PyErr_Format(PyExc_TypeError, "zeroblob size must be >= 0"); return -1; } self->blobsize=n; return 0; } /** .. method:: length() -> int Size of zero blob in bytes. */ static PyObject * ZeroBlobBind_len(ZeroBlobBind *self) { return PyInt_FromLong(self->blobsize); } static PyMethodDef ZeroBlobBind_methods[] = { {"length", (PyCFunction)ZeroBlobBind_len, METH_NOARGS, "Size of zero blob"}, {0,0,0,0} }; static PyTypeObject ZeroBlobBindType = { APSW_PYTYPE_INIT "apsw.zeroblob", /*tp_name*/ sizeof(ZeroBlobBind), /*tp_basicsize*/ 0, /*tp_itemsize*/ 0, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_VERSION_TAG, /*tp_flags*/ "ZeroBlobBind object", /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ ZeroBlobBind_methods, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ (initproc)ZeroBlobBind_init, /* tp_init */ 0, /* tp_alloc */ ZeroBlobBind_new, /* tp_new */ 0, /* tp_free */ 0, /* tp_is_gc */ 0, /* tp_bases */ 0, /* tp_mro */ 0, 
/* tp_cache */ 0, /* tp_subclasses */ 0, /* tp_weaklist */ 0 /* tp_del */ APSW_PYTYPE_VERSION }; /* BLOB TYPE */ struct APSWBlob { PyObject_HEAD Connection *connection; sqlite3_blob *pBlob; unsigned inuse; /* track if we are in use preventing concurrent thread mangling */ int curoffset; /* SQLite only supports 32 bit signed int offsets */ PyObject *weakreflist; /* weak reference tracking */ }; typedef struct APSWBlob APSWBlob; static PyTypeObject APSWBlobType; /* BLOB CODE */ /** .. class:: blob This object is created by :meth:`Connection.blobopen` and provides access to a blob in the database. It behaves like a Python file. At the C level it wraps a `sqlite3_blob `_. .. note:: You cannot change the size of a blob using this object. You should create it with the correct size in advance either by using :class:`zeroblob` or the `zeroblob() `_ function. See the :ref:`example `. */ static void APSWBlob_init(APSWBlob *self, Connection *connection, sqlite3_blob *blob) { Py_INCREF(connection); self->connection=connection; self->pBlob=blob; self->curoffset=0; self->inuse=0; self->weakreflist=NULL; } static int APSWBlob_close_internal(APSWBlob *self, int force) { int setexc=0; PyObject *err_type, *err_value, *err_traceback; if(force==2) PyErr_Fetch(&err_type, &err_value, &err_traceback); /* note that sqlite3_blob_close always works even if an error is returned - see sqlite ticket #2815 */ if(self->pBlob) { int res; PYSQLITE_BLOB_CALL(res=sqlite3_blob_close(self->pBlob)); if(res!=SQLITE_OK) { switch(force) { case 0: SET_EXC(res, self->connection->db); setexc=1; break; case 1: break; case 2: SET_EXC(res, self->connection->db); apsw_write_unraiseable(NULL); } } self->pBlob=0; } /* Remove from connection dependents list. 
Has to be done before we decref self->connection otherwise connection could dealloc and we'd still be in list */ if(self->connection) Connection_remove_dependent(self->connection, (PyObject*)self); Py_CLEAR(self->connection); if(force==2) PyErr_Restore(err_type, err_value, err_traceback); return setexc; } static void APSWBlob_dealloc(APSWBlob *self) { APSW_CLEAR_WEAKREFS; APSWBlob_close_internal(self, 2); Py_TYPE(self)->tp_free((PyObject*)self); } /* If the blob is closed, we return the same error as normal python files */ #define CHECK_BLOB_CLOSED \ do { if(!self->pBlob) \ return PyErr_Format(PyExc_ValueError, "I/O operation on closed blob"); \ } while(0) /** .. method:: length() -> int Returns the size of the blob in bytes. -* sqlite3_blob_bytes */ static PyObject * APSWBlob_length(APSWBlob *self) { CHECK_USE(NULL); CHECK_BLOB_CLOSED; return PyLong_FromLong(sqlite3_blob_bytes(self->pBlob)); } /** .. method:: read([nbytes]) -> bytes Reads amount of data requested, or till end of file, whichever is earlier. Attempting to read beyond the end of the blob returns the empty string/bytes, in the same manner as end of file on normal file objects. :rtype: (Python 2) string (Python 3) bytes -* sqlite3_blob_read */ static PyObject * APSWBlob_read(APSWBlob *self, PyObject *args) { int length=-1; int res; PyObject *buffy=0; char *thebuffer; CHECK_USE(NULL); CHECK_BLOB_CLOSED; /* The python file read routine treats negative numbers as read till end of file, which I think is rather silly. (Try reading -3 bytes from /dev/zero on a 64 bit machine with lots of swap to see why). In any event we remain consistent with Python file objects */ if(!PyArg_ParseTuple(args, "|i:read(numbytes=remaining)", &length)) return NULL; if( (self->curoffset==sqlite3_blob_bytes(self->pBlob)) /* eof */ || (length==0) ) return PyBytes_FromStringAndSize(NULL, 0); if(length<0) length=sqlite3_blob_bytes(self->pBlob)-self->curoffset; /* trying to read more than is in the blob? 
*/ if(self->curoffset+length>sqlite3_blob_bytes(self->pBlob)) length=sqlite3_blob_bytes(self->pBlob)-self->curoffset; buffy=PyBytes_FromStringAndSize(NULL, length); if(!buffy) return NULL; thebuffer= PyBytes_AS_STRING(buffy); PYSQLITE_BLOB_CALL(res=sqlite3_blob_read(self->pBlob, thebuffer, length, self->curoffset)); if(PyErr_Occurred()) return NULL; if(res!=SQLITE_OK) { Py_DECREF(buffy); SET_EXC(res, self->connection->db); return NULL; } else self->curoffset+=length; assert(self->curoffset<=sqlite3_blob_bytes(self->pBlob)); return buffy; } /** .. method:: readinto(buffer[, offset=0, length=remaining-buffer]) -> None Reads from the blob into a buffer you have supplied. This method is useful if you already have a buffer like object that data is being assembled in, and avoids allocating results in :meth:`blob.read` and then copying into buffer. :param buffer: A writable buffer like object. In Python 2.6 onwards there is a bytearray type that is very useful. :class:`array.array` also works. :param offset: The position to start writing into the buffer defaulting to the beginning. :param length: How much of the blob to read. The default is the remaining space left in the buffer. Note that if there is more space available than blob left then you will get a :exc:`ValueError` exception. 
-* sqlite3_blob_read */ static PyObject* APSWBlob_readinto(APSWBlob *self, PyObject *args) { int length; int res; Py_ssize_t offset, lengthwanted; PyObject *wbuf=NULL; int aswb; void *buffer; Py_ssize_t bufsize; int bloblen; CHECK_USE(NULL); CHECK_BLOB_CLOSED; /* To get Py_ssize_t we need "n" format but that only exists in Python 2.5 plus */ if(!PyArg_ParseTuple(args, "O|" #if PY_VERSION_HEX < 0x02050000 "i" #else "n" #endif "i:readinto(wbuf, offset=0, length=wbufremaining)", &wbuf, &offset, &length)) return NULL; aswb=PyObject_AsWriteBuffer(wbuf, &buffer, &bufsize); if(aswb) return NULL; /* Although a lot of these checks could be combined into a single one, they are kept separate so that we can verify they have each been exercised with code coverage checks */ if(PyTuple_GET_SIZE(args)<2) offset=0; bloblen=sqlite3_blob_bytes(self->pBlob); if(offset<0 || offset>bufsize) return PyErr_Format(PyExc_ValueError, "offset is less than zero or beyond end of buffer"); if(PyTuple_GET_SIZE(args)<3) lengthwanted=bufsize-offset; else lengthwanted=length; if(lengthwanted<0) return PyErr_Format(PyExc_ValueError, "Length wanted is negative"); if(offset+lengthwanted>bufsize) return PyErr_Format(PyExc_ValueError, "Data would go beyond end of buffer"); if(lengthwanted>bloblen-self->curoffset) return PyErr_Format(PyExc_ValueError, "More data requested than blob length"); PYSQLITE_BLOB_CALL(res=sqlite3_blob_read(self->pBlob, (char*)buffer+offset, lengthwanted, self->curoffset)); if(PyErr_Occurred()) return NULL; if(res!=SQLITE_OK) { SET_EXC(res, self->connection->db); return NULL; } self->curoffset+=lengthwanted; Py_RETURN_NONE; } /** .. method:: seek(offset[, whence=0]) -> None Changes current position to *offset* biased by *whence*. :param offset: New position to seek to. Can be positive or negative number. :param whence: Use 0 if *offset* is relative to the beginning of the blob, 1 if *offset* is relative to the current position, and 2 if *offset* is relative to the end of the blob.
:raises ValueError: If the resulting offset is before the beginning (less than zero) or beyond the end of the blob. */ static PyObject * APSWBlob_seek(APSWBlob *self, PyObject *args) { int offset, whence=0; CHECK_USE(NULL); CHECK_BLOB_CLOSED; if(!PyArg_ParseTuple(args, "i|i:seek(offset,whence=0)", &offset, &whence)) return NULL; switch(whence) { default: return PyErr_Format(PyExc_ValueError, "whence parameter should be 0, 1 or 2"); case 0: /* relative to beginning of file */ if(offset<0 || offset>sqlite3_blob_bytes(self->pBlob)) goto out_of_range; self->curoffset=offset; break; case 1: /* relative to current position */ if(self->curoffset+offset<0 || self->curoffset+offset>sqlite3_blob_bytes(self->pBlob)) goto out_of_range; self->curoffset+=offset; break; case 2: /* relative to end of file */ if(sqlite3_blob_bytes(self->pBlob)+offset<0 || sqlite3_blob_bytes(self->pBlob)+offset>sqlite3_blob_bytes(self->pBlob)) goto out_of_range; self->curoffset=sqlite3_blob_bytes(self->pBlob)+offset; break; } Py_RETURN_NONE; out_of_range: return PyErr_Format(PyExc_ValueError, "The resulting offset would be less than zero or past the end of the blob"); } /** .. method:: tell() -> int Returns the current offset. */ static PyObject * APSWBlob_tell(APSWBlob *self) { CHECK_USE(NULL); CHECK_BLOB_CLOSED; return PyLong_FromLong(self->curoffset); } /** .. method:: write(data) -> None Writes the data to the blob. :param data: (Python 2) buffer or string. (Python 3) buffer or bytes. :raises TypeError: Wrong data type :raises ValueError: If the data would go beyond the end of the blob. You cannot increase the size of a blob by writing beyond the end. You need to use :class:`zeroblob` to set the desired size first when inserting the blob.
-* sqlite3_blob_write */ static PyObject * APSWBlob_write(APSWBlob *self, PyObject *obj) { const void *buffer=0; Py_ssize_t size; int res; CHECK_USE(NULL); CHECK_BLOB_CLOSED; /* we support buffers and string for the object */ if(!PyUnicode_Check(obj) && PyObject_CheckReadBuffer(obj)) { int asrb=PyObject_AsReadBuffer(obj, &buffer, &size); APSW_FAULT_INJECT(BlobWriteAsReadBufFails,,(PyErr_NoMemory(), asrb=-1)); if(asrb!=0) return NULL; } else return PyErr_Format(PyExc_TypeError, "Parameter should be bytes/string or buffer"); if( ((int)(size+self->curoffset))<self->curoffset) return PyErr_Format(PyExc_ValueError, "Data is too large (integer wrap)"); if( ((int)(size+self->curoffset))>sqlite3_blob_bytes(self->pBlob)) return PyErr_Format(PyExc_ValueError, "Data would go beyond end of blob"); PYSQLITE_BLOB_CALL(res=sqlite3_blob_write(self->pBlob, buffer, size, self->curoffset)); if(PyErr_Occurred()) return NULL; if(res!=SQLITE_OK) { SET_EXC(res, self->connection->db); return NULL; } else self->curoffset+=size; assert(self->curoffset<=sqlite3_blob_bytes(self->pBlob)); Py_RETURN_NONE; } /** .. method:: close([force=False]) Closes the blob. Note that even if an error occurs the blob is still closed. .. note:: In some cases errors that technically occurred in the :meth:`~blob.read` and :meth:`~blob.write` routines may not be reported until close is called. Similarly errors that occurred in those methods (eg calling :meth:`~blob.write` on a read-only blob) may also be re-reported in :meth:`~blob.close`. (This behaviour is what the underlying SQLite APIs do - it is not APSW doing it.) It is okay to call :meth:`~blob.close` multiple times. :param force: Ignores any errors during close.
-* sqlite3_blob_close */ static PyObject * APSWBlob_close(APSWBlob *self, PyObject *args) { int setexc; int force=0; CHECK_USE(NULL); if(args && !PyArg_ParseTuple(args, "|i:close(force=False)", &force)) return NULL; setexc=APSWBlob_close_internal(self, !!force); if(setexc) return NULL; Py_RETURN_NONE; } /** .. method:: __enter__() -> context You can use a blob as a `context manager `_ as defined in :pep:`0343`. When you use a *with* statement, the blob is always :meth:`closed <~blob.close>` on exit from the block, even if an exception occurred in the block. For example:: with connection.blobopen() as blob: blob.write("...") res=blob.read(1024) */ static PyObject * APSWBlob_enter(APSWBlob *self) { CHECK_USE(NULL); CHECK_BLOB_CLOSED; Py_INCREF(self); return (PyObject *)self; } /** .. method:: __exit__() -> False Implements context manager in conjunction with :meth:`~blob.__enter__`. Any exception that happened in the *with* block is raised after closing the blob. */ static PyObject * APSWBlob_exit(APSWBlob *self, APSW_ARGUNUSED PyObject *args) { PyObject *res; CHECK_USE(NULL); CHECK_BLOB_CLOSED; res=APSWBlob_close(self, NULL); Py_XDECREF(res); if(!res) return NULL; Py_RETURN_FALSE; } /** .. method:: reopen(rowid) Change this blob object to point to a different row. It can be faster than closing an existing blob and opening a new one.
-* sqlite3_blob_reopen */ static PyObject * APSWBlob_reopen(APSWBlob *self, PyObject *arg) { int res; long long rowid; CHECK_USE(NULL); CHECK_BLOB_CLOSED; #if PY_MAJOR_VERSION<3 if(PyInt_Check(arg)) rowid=PyInt_AS_LONG(arg); else #endif if (PyLong_Check(arg)) { rowid=PyLong_AsLongLong(arg); if(PyErr_Occurred()) return NULL; } else return PyErr_Format(PyExc_TypeError, "blob reopen argument must be a number"); /* no matter what happens we always reset current offset */ self->curoffset=0; PYSQLITE_BLOB_CALL(res=sqlite3_blob_reopen(self->pBlob, rowid)); if(PyErr_Occurred()) return NULL; if(res!=SQLITE_OK) { SET_EXC(res, self->connection->db); return NULL; } Py_RETURN_NONE; } static PyMethodDef APSWBlob_methods[]={ {"length", (PyCFunction)APSWBlob_length, METH_NOARGS, "Returns length in bytes of the blob"}, {"read", (PyCFunction)APSWBlob_read, METH_VARARGS, "Reads data from the blob"}, {"readinto", (PyCFunction)APSWBlob_readinto, METH_VARARGS, "Reads data from the blob into a provided buffer"}, {"seek", (PyCFunction)APSWBlob_seek, METH_VARARGS, "Seeks to a position in the blob"}, {"tell", (PyCFunction)APSWBlob_tell, METH_NOARGS, "Returns current blob offset"}, {"write", (PyCFunction)APSWBlob_write, METH_O, "Writes data to blob"}, {"reopen", (PyCFunction)APSWBlob_reopen, METH_O, "Changes the blob to point to a different row"}, {"close", (PyCFunction)APSWBlob_close, METH_VARARGS, "Closes blob"}, {"__enter__", (PyCFunction)APSWBlob_enter, METH_NOARGS, "Context manager entry"}, {"__exit__", (PyCFunction)APSWBlob_exit, METH_VARARGS, "Context manager exit"}, {0,0,0,0} /* Sentinel */ }; static PyTypeObject APSWBlobType = { APSW_PYTYPE_INIT "apsw.blob", /*tp_name*/ sizeof(APSWBlob), /*tp_basicsize*/ 0, /*tp_itemsize*/ (destructor)APSWBlob_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, 
/*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG, /*tp_flags*/ "APSW blob object", /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(APSWBlob, weakreflist), /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ APSWBlob_methods, /* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ 0, /* tp_alloc */ 0, /* tp_new */ 0, /* tp_free */ 0, /* tp_is_gc */ 0, /* tp_bases */ 0, /* tp_mro */ 0, /* tp_cache */ 0, /* tp_subclasses */ 0, /* tp_weaklist */ 0 /* tp_del */ APSW_PYTYPE_VERSION }; apsw-3.8.11.1-r1/src/testextension.c0000644000000000000000000000275012565422256015617 0ustar rootroot/* This code is to test extension loading and is taken from https://sqlite.org/cvstrac/wiki/wiki?p=LoadableExtensions */ #include SQLITE_EXTENSION_INIT1 /* ** The half() SQL function returns half of its input value. */ static void halfFunc( sqlite3_context *context, int argc, sqlite3_value **argv ){ sqlite3_result_double(context, 0.5*sqlite3_value_double(argv[0])); } /* SQLite invokes this routine once when it loads the extension. ** Create new functions, collating sequences, and virtual table ** modules here. This is usually the only exported symbol in ** the shared library. 
*/ int sqlite3_extension_init( sqlite3 *db, char **pzErrMsg, const sqlite3_api_routines *pApi ){ SQLITE_EXTENSION_INIT2(pApi) sqlite3_create_function(db, "half", 1, SQLITE_ANY, 0, halfFunc, 0, 0); return 0; } /* this is code added by me and checks that alternate entry points work by providing a double function */ static void doubleFunc( sqlite3_context *context, int argc, sqlite3_value **argv ){ sqlite3_result_double(context, 2.0*sqlite3_value_double(argv[0])); } int alternate_sqlite3_extension_init( sqlite3 *db, char **pzErrMsg, const sqlite3_api_routines *pApi ){ SQLITE_EXTENSION_INIT2(pApi) sqlite3_create_function(db, "doubleup", 1, SQLITE_ANY, 0, doubleFunc, 0, 0); return 0; } apsw-3.8.11.1-r1/src/util.c0000644000000000000000000002704012565422256013657 0ustar rootroot/* Utility macros and functions See the accompanying LICENSE file. */ /* These macros are to address several issues: - Prevent simultaneous calls on the same object while the GIL is released in one thread. For example if a Cursor is executing sqlite3_step with the GIL released, we don't want Cursor_execute called on another thread since that will thrash what the first thread is doing. We use a member of Connection, Blob and Cursor named 'inuse' to provide the simple exclusion. - The GIL has to be released around all SQLite calls that take the database mutex (which is most of them). If the GIL is kept even for trivial calls then deadlock will arise. This is because if you have multiple mutexes you must always acquire them in the same order, or never hold more than one at a time. - The SQLite error code is not threadsafe. This is because the error string is per database connection. The call to sqlite3_errmsg will return a pointer but that can be replaced by any other thread with an error. Consequently SQLite added sqlite3_db_mutex (see sqlite-dev mailing list for 4 Nov 2008). A far better workaround would have been to make the SQLite error stuff be per thread just like errno.
Instead I have had to roll my own thread local storage system for storing the error message. */ /* call where no error is returned */ #define _PYSQLITE_CALL_V(x) \ do { Py_BEGIN_ALLOW_THREADS { x; } Py_END_ALLOW_THREADS ; } while(0) /* Calls where error could be set. We assume that a variable 'res' is set. Also need the db to take the mutex on */ #define _PYSQLITE_CALL_E(db, x) \ do { \ Py_BEGIN_ALLOW_THREADS \ { \ sqlite3_mutex_enter(sqlite3_db_mutex(db)); \ x; \ if(res!=SQLITE_OK && res!=SQLITE_DONE && res!=SQLITE_ROW) \ apsw_set_errmsg(sqlite3_errmsg((db))); \ sqlite3_mutex_leave(sqlite3_db_mutex(db)); \ } \ Py_END_ALLOW_THREADS; \ } while(0) #define INUSE_CALL(x) \ do { \ assert(self->inuse==0); self->inuse=1; \ { x; } \ assert(self->inuse==1); self->inuse=0; \ } while(0) /* call from blob code */ #define PYSQLITE_BLOB_CALL(y) INUSE_CALL(_PYSQLITE_CALL_E(self->connection->db, y)) /* call from connection code */ #define PYSQLITE_CON_CALL(y) INUSE_CALL(_PYSQLITE_CALL_E(self->db, y)) /* call from cursor code - same as blob */ #define PYSQLITE_CUR_CALL PYSQLITE_BLOB_CALL /* from statement cache */ #define PYSQLITE_SC_CALL(y) _PYSQLITE_CALL_E(sc->db, y) /* call to sqlite code that doesn't return an error */ #define PYSQLITE_VOID_CALL(y) INUSE_CALL(_PYSQLITE_CALL_V(y)) /* call from backup code */ #define PYSQLITE_BACKUP_CALL(y) INUSE_CALL(_PYSQLITE_CALL_E(self->dest->db, y)) #ifdef __GNUC__ #define APSW_ARGUNUSED __attribute__ ((unused)) #else #define APSW_ARGUNUSED #endif /* used to decide if we will use int (4 bytes) or long long (8 bytes) */ #define APSW_INT32_MIN (-2147483647-1) #define APSW_INT32_MAX 2147483647 /* The default Python PyErr_WriteUnraiseable is almost useless. It only prints the str() of the exception and the str() of the object passed in. This gives the developer no clue whatsoever where in the code it is happening. It also does funky things to the passed in object which can cause the destructor to fire twice. Consequently we use our version here. 
It makes the traceback complete, and then tries the following, going to the next if the hook isn't found or returns an error: * excepthook of hookobject (if not NULL) * excepthook of sys module * PyErr_Display If any return an error then the next one is tried. When we return, any error will be cleared. */ static void apsw_write_unraiseable(PyObject *hookobject) { PyObject *err_type=NULL, *err_value=NULL, *err_traceback=NULL; PyObject *excepthook=NULL; PyObject *result=NULL; PyFrameObject *frame=NULL; #ifndef PYPY_VERSION /* fill in the rest of the traceback */ frame = PyThreadState_GET()->frame; while(frame) { PyTraceBack_Here(frame); frame=frame->f_back; } #endif /* Get the exception details */ PyErr_Fetch(&err_type, &err_value, &err_traceback); PyErr_NormalizeException(&err_type, &err_value, &err_traceback); if(hookobject) { excepthook=PyObject_GetAttrString(hookobject, "excepthook"); PyErr_Clear(); if(excepthook) { result=PyEval_CallFunction(excepthook, "(OOO)", err_type?err_type:Py_None, err_value?err_value:Py_None, err_traceback?err_traceback:Py_None); if(result) goto finally; } Py_XDECREF(excepthook); } excepthook=PySys_GetObject("excepthook"); if(excepthook) { Py_INCREF(excepthook); /* borrowed reference from PySys_GetObject so we increment */ PyErr_Clear(); result=PyEval_CallFunction(excepthook, "(OOO)", err_type?err_type:Py_None, err_value?err_value:Py_None, err_traceback?err_traceback:Py_None); if(result) goto finally; } /* remove any error from callback failure */ PyErr_Clear(); #ifndef PYPY_VERSION PyErr_Display(err_type, err_value, err_traceback); #endif finally: Py_XDECREF(excepthook); Py_XDECREF(result); Py_XDECREF(err_traceback); Py_XDECREF(err_value); Py_XDECREF(err_type); PyErr_Clear(); /* being paranoid - make sure no errors on return */ } /* Python's handling of Unicode is horrible. It can use 2 or 4 byte unicode chars and the conversion routines like to put out BOMs which makes life even harder.
These macros are used in pairs to do the right form of conversion and tell us whether to use the plain or -16 version of the SQLite function that is about to be called. */ #if Py_UNICODE_SIZE==2 #define UNIDATABEGIN(obj) \ { \ size_t strbytes=2*PyUnicode_GET_SIZE(obj); \ const void *strdata=PyUnicode_AS_DATA(obj); #define UNIDATAEND(obj) \ } #define USE16(x) x##16 #else /* Py_UNICODE_SIZE!=2 */ #define UNIDATABEGIN(obj) \ { \ Py_ssize_t strbytes=0; \ const char *strdata=NULL; \ PyObject *_utf8=NULL; \ _utf8=PyUnicode_AsUTF8String(obj); \ if(_utf8) \ { \ strbytes=PyBytes_GET_SIZE(_utf8); \ strdata=PyBytes_AS_STRING(_utf8); \ } #define UNIDATAEND(obj) \ Py_XDECREF(_utf8); \ } #define USE16(x) x #endif /* Py_UNICODE_SIZE */ /* Converts sqlite3_value to PyObject. Returns a new reference. */ static PyObject * convert_value_to_pyobject(sqlite3_value *value) { int coltype=sqlite3_value_type(value); APSW_FAULT_INJECT(UnknownValueType,,coltype=123456); switch(coltype) { case SQLITE_INTEGER: { sqlite3_int64 val=sqlite3_value_int64(value); #if PY_MAJOR_VERSION<3 if (val>=LONG_MIN && val<=LONG_MAX) return PyInt_FromLong((long)val); #endif return PyLong_FromLongLong(val); } case SQLITE_FLOAT: return PyFloat_FromDouble(sqlite3_value_double(value)); case SQLITE_TEXT: return convertutf8stringsize((const char*)sqlite3_value_text(value), sqlite3_value_bytes(value)); case SQLITE_NULL: Py_RETURN_NONE; case SQLITE_BLOB: return converttobytes(sqlite3_value_blob(value), sqlite3_value_bytes(value)); default: return PyErr_Format(APSWException, "Unknown sqlite column type %d!", coltype); } /* can't get here */ assert(0); return NULL; } /* Converts column to PyObject. Returns a new reference. 
Almost identical to above but we cannot just use sqlite3_column_value and then call the above function as SQLite doesn't allow that ("unprotected values") */ static PyObject * convert_column_to_pyobject(sqlite3_stmt *stmt, int col) { int coltype; _PYSQLITE_CALL_V(coltype=sqlite3_column_type(stmt, col)); APSW_FAULT_INJECT(UnknownColumnType,,coltype=12348); switch(coltype) { case SQLITE_INTEGER: { sqlite3_int64 val; _PYSQLITE_CALL_V(val=sqlite3_column_int64(stmt, col)); #if PY_MAJOR_VERSION<3 if (val>=LONG_MIN && val<=LONG_MAX) return PyInt_FromLong((long)val); #endif return PyLong_FromLongLong(val); } case SQLITE_FLOAT: { double d; _PYSQLITE_CALL_V(d=sqlite3_column_double(stmt, col)); return PyFloat_FromDouble(d); } case SQLITE_TEXT: { const char *data; size_t len; _PYSQLITE_CALL_V( (data=(const char*)sqlite3_column_text(stmt, col), len=sqlite3_column_bytes(stmt, col)) ); return convertutf8stringsize(data, len); } case SQLITE_NULL: Py_RETURN_NONE; case SQLITE_BLOB: { const void *data; size_t len; _PYSQLITE_CALL_V( (data=sqlite3_column_blob(stmt, col), len=sqlite3_column_bytes(stmt, col)) ); return converttobytes(data, len); } default: return PyErr_Format(APSWException, "Unknown sqlite column type %d!", coltype); } /* can't get here */ assert(0); return NULL; } /* Some macros used for frequent operations */ /* used by Connection and Cursor */ #define CHECK_USE(e) \ do \ { if(self->inuse) \ { /* raise exception if we aren't already in one */ \ if (!PyErr_Occurred()) \ PyErr_Format(ExcThreadingViolation, "You are trying to use the same object concurrently in two threads or re-entrantly within the same thread which is not allowed."); \ return e; \ } \ } while(0) /* used by Connection */ #define CHECK_CLOSED(connection,e) do \ { if(!(connection) || !(connection)->db) { PyErr_Format(ExcConnectionClosed, "The connection has been closed"); return e; } } while(0) /* used by cursor */ #define CHECK_CURSOR_CLOSED(e) \ do \ { \ if(!self->connection) \ { 
PyErr_Format(ExcCursorClosed, "The cursor has been closed"); return e; } \ else if(!self->connection->db) \ { PyErr_Format(ExcConnectionClosed, "The connection has been closed"); return e; } \ } while(0) /* It is 2009 - why do I have to write this? */ static char *apsw_strdup(const char *source) { char *res=PyMem_Malloc(strlen(source)+1); if(res) strcpy(res, source); return res; } apsw-3.8.11.1-r1/src/backup.c0000644000000000000000000003062212565422256014147 0ustar rootroot/* Another Python Sqlite Wrapper Wrap SQLite hot backup functionality See the accompanying LICENSE file. */ #ifdef EXPERIMENTAL /** .. _backup: Backup ****** A backup object encapsulates copying one database to another. You call :meth:`Connection.backup` on the destination database to get the backup object. Call :meth:`~backup.step` to copy some pages repeatedly dealing with errors as appropriate. Finally :meth:`~backup.finish` cleans up committing or rolling back and releasing locks. Here is an example usage using the **with** statement to ensure :meth:`~backup.finish` is called:: # copies source.main into db with db.backup("main", source, "main") as b: while not b.done: b.step(100) print b.remaining, b.pagecount, "\r", If you are not using **with** then you'll need to ensure :meth:`~backup.finish` is called:: # copies source.main into db b=db.backup("main", source, "main") try: while not b.done: b.step(100) print b.remaining, b.pagecount, "\r", finally: b.finish() Important details ================= The database is copied page by page. This means that there is not a round trip via SQL. All pages are copied including free ones. The destination database is locked during the copy. You will get a :exc:`ThreadingViolationError` if you attempt to use it. 
*/ /* we love us some macros */ #define CHECK_BACKUP_CLOSED(e) \ do \ { \ if(!self->backup || (self->dest && !self->dest->db) || (self->source && !self->source->db) ) \ { \ PyErr_Format(ExcConnectionClosed, "The backup is finished or the source or destination databases have been closed"); \ return e; \ } \ } while(0) /** .. class:: backup You create a backup instance by calling :meth:`Connection.backup`. */ struct APSWBackup { PyObject_HEAD Connection *dest; Connection *source; sqlite3_backup *backup; PyObject *done; int inuse; PyObject *weakreflist; }; typedef struct APSWBackup APSWBackup; static void APSWBackup_init(APSWBackup *self, Connection *dest, Connection *source, sqlite3_backup *backup) { assert(dest->inuse==0); dest->inuse=1; assert(source->inuse==1); /* set by caller */ self->dest=dest; self->source=source; self->backup=backup; self->done=Py_False; Py_INCREF(self->done); self->inuse=0; self->weakreflist=NULL; } /* returns non-zero if it set an exception */ static int APSWBackup_close_internal(APSWBackup *self, int force) { int res, setexc=0; assert(!self->inuse); if(!self->backup) return 0; PYSQLITE_BACKUP_CALL(res=sqlite3_backup_finish(self->backup)); if(res) { switch(force) { case 0: SET_EXC(res, self->dest->db); setexc=1; break; case 1: break; case 2: { PyObject *etype, *eval, *etb; PyErr_Fetch(&etype, &eval, &etb); SET_EXC(res, self->dest->db); apsw_write_unraiseable(NULL); PyErr_Restore(etype, eval, etb); break; } } } self->backup=0; assert(self->dest->inuse); self->dest->inuse=0; Connection_remove_dependent(self->dest, (PyObject*)self); Connection_remove_dependent(self->source, (PyObject*)self); Py_CLEAR(self->dest); Py_CLEAR(self->source); return setexc; } static void APSWBackup_dealloc(APSWBackup *self) { APSW_CLEAR_WEAKREFS; APSWBackup_close_internal(self, 2); Py_CLEAR(self->done); Py_TYPE(self)->tp_free((PyObject*)self); } /** .. method:: step([npages=All]) -> bool Copies *npages* pages from the source to destination database. 
The source database is locked during the copy so using smaller values allows other access to the source database. The destination database is always locked until the backup object is :meth:`finished `. :param npages: How many pages to copy. If the parameter is omitted or negative then all remaining pages are copied. The default page size is 1024 bytes (1kb) which can be changed before database creation using a `pragma `_. This method may throw a :exc:`BusyError` or :exc:`LockedError` if unable to lock the source database. You can catch those and try again. :returns: True if this copied the last remaining outstanding pages, else false. This is the same value as :attr:`~backup.done` -* sqlite3_backup_step */ static PyObject * APSWBackup_step(APSWBackup *self, PyObject *args) { int pages=-1, res; CHECK_USE(NULL); CHECK_BACKUP_CLOSED(NULL); if(args && !PyArg_ParseTuple(args, "|i:step(pages=All)", &pages)) return NULL; PYSQLITE_BACKUP_CALL(res=sqlite3_backup_step(self->backup, pages)); if(PyErr_Occurred()) return NULL; if(res==SQLITE_DONE) { if(self->done!=Py_True) { Py_CLEAR(self->done); self->done=Py_True; Py_INCREF(self->done); } res=SQLITE_OK; } if(res) { SET_EXC(res, NULL); /* ::TODO:: will likely have message on dest->db */ return NULL; } Py_INCREF(self->done); return self->done; } /** .. method:: finish() Completes the copy process. If all pages have been copied then the transaction is committed on the destination database, otherwise it is rolled back. This method must be called for your backup to take effect. The backup object will always be finished even if there is an exception. It is safe to call this method multiple times. -* sqlite3_backup_finish */ static PyObject * APSWBackup_finish(APSWBackup *self) { int setexc; CHECK_USE(NULL); /* We handle CHECK_BACKUP_CLOSED internally */ if(!self->backup) Py_RETURN_NONE; setexc=APSWBackup_close_internal(self, 0); if(setexc) return NULL; Py_RETURN_NONE; } /** .. 
method:: close([force=False]) Does the same thing as :meth:`~backup.finish`. This extra api is provided to give the same api as other APSW objects such as :meth:`Connection.close`, :meth:`blob.close` and :meth:`Cursor.close`. It is safe to call this method multiple times. :param force: If true then any exceptions are ignored. */ static PyObject * APSWBackup_close(APSWBackup *self, PyObject *args) { int force=0, setexc; CHECK_USE(NULL); /* We handle CHECK_BACKUP_CLOSED internally */ if(!self->backup) Py_RETURN_NONE; /* already closed */ if(args && !PyArg_ParseTuple(args, "|i:close(force=False)", &force)) return NULL; setexc=APSWBackup_close_internal(self, force); if(setexc) return NULL; Py_RETURN_NONE; } /** .. attribute:: remaining Read only. How many pages were remaining to be copied after the last step. If you haven't called :meth:`~backup.step` or the backup object has been :meth:`finished ` then zero is returned. -* sqlite3_backup_remaining */ static PyObject * APSWBackup_get_remaining(APSWBackup *self, APSW_ARGUNUSED void *ignored) { CHECK_USE(NULL); return PyInt_FromLong(self->backup?sqlite3_backup_remaining(self->backup):0); } /** .. attribute:: pagecount Read only. How many pages were in the source database after the last step. If you haven't called :meth:`~backup.step` or the backup object has been :meth:`finished ` then zero is returned. -* sqlite3_backup_pagecount */ static PyObject * APSWBackup_get_pagecount(APSWBackup *self, APSW_ARGUNUSED void *ignored) { CHECK_USE(NULL); return PyInt_FromLong(self->backup?sqlite3_backup_pagecount(self->backup):0); } /** .. method:: __enter__() -> self You can use the backup object as a `context manager `_ as defined in :pep:`0343`. The :meth:`~backup.__exit__` method ensures that backup is :meth:`finished `. */ static PyObject * APSWBackup_enter(APSWBackup *self) { CHECK_USE(NULL); CHECK_BACKUP_CLOSED(NULL); Py_INCREF(self); return (PyObject*)self; } /** .. 
method:: __exit__() -> False Implements context manager in conjunction with :meth:`~backup.__enter__` ensuring that the copy is :meth:`finished `. */ static PyObject * APSWBackup_exit(APSWBackup *self, PyObject *args) { PyObject *etype, *evalue, *etb; int setexc; CHECK_USE(NULL); if(!PyArg_ParseTuple(args, "OOO", &etype, &evalue, &etb)) return NULL; /* If already closed then we are fine - CHECK_BACKUP_CLOSED not needed*/ if(!self->backup) Py_RETURN_FALSE; /* we don't want to override any existing exception with the corresponding close exception, although there is a chance the close exception has more detail. At the time of writing this code the step method only set an error code but not an error message */ setexc=APSWBackup_close_internal(self, etype!=Py_None||evalue!=Py_None||etb!=Py_None); if(setexc) { assert(PyErr_Occurred()); return NULL; } Py_RETURN_FALSE; } /** .. attribute:: done A boolean that is True if the copy completed in the last call to :meth:`~backup.step`. */ static PyMemberDef backup_members[] = { /* name type offset flags doc */ {"done", T_OBJECT, offsetof(APSWBackup, done), READONLY, "True if all pages copied"}, {0,0,0,0,0} }; static PyGetSetDef backup_getset[] = { /* name getter setter doc closure */ {"remaining", (getter)APSWBackup_get_remaining, NULL, "Pages still to be copied", NULL}, {"pagecount", (getter)APSWBackup_get_pagecount, NULL, "Total pages in source database", NULL}, {0,0,0,0,0} }; static PyMethodDef backup_methods[] = { {"__enter__", (PyCFunction)APSWBackup_enter, METH_NOARGS, "Context manager entry"}, {"__exit__", (PyCFunction)APSWBackup_exit, METH_VARARGS, "Context manager exit"}, {"step", (PyCFunction)APSWBackup_step, METH_VARARGS, "Copies some pages"}, {"finish", (PyCFunction)APSWBackup_finish, METH_NOARGS, "Commits or rolls back backup"}, {"close", (PyCFunction)APSWBackup_close, METH_VARARGS, "Alternate way to finish"}, {0,0,0,0} }; static PyTypeObject APSWBackupType = { APSW_PYTYPE_INIT "apsw.backup", /*tp_name*/
sizeof(APSWBackup), /*tp_basicsize*/ 0, /*tp_itemsize*/ (destructor)APSWBackup_dealloc, /*tp_dealloc*/ 0, /*tp_print*/ 0, /*tp_getattr*/ 0, /*tp_setattr*/ 0, /*tp_compare*/ 0, /*tp_repr*/ 0, /*tp_as_number*/ 0, /*tp_as_sequence*/ 0, /*tp_as_mapping*/ 0, /*tp_hash */ 0, /*tp_call*/ 0, /*tp_str*/ 0, /*tp_getattro*/ 0, /*tp_setattro*/ 0, /*tp_as_buffer*/ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_VERSION_TAG, /*tp_flags*/ "backup object", /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(APSWBackup,weakreflist), /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ backup_methods, /* tp_methods */ backup_members, /* tp_members */ backup_getset, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ 0, /* tp_alloc */ 0, /* tp_new */ 0, /* tp_free */ 0, /* tp_is_gc */ 0, /* tp_bases */ 0, /* tp_mro */ 0, /* tp_cache */ 0, /* tp_subclasses */ 0, /* tp_weaklist */ 0 /* tp_del */ APSW_PYTYPE_VERSION }; #endif /* EXPERIMENTAL */

apsw-3.8.11.1-r1/src/apswversion.h:

#define APSW_VERSION "3.8.11.1-r1"

apsw-3.8.11.1-r1/src/apsw.c:

/* Another Python Sqlite Wrapper This wrapper aims to be the minimum necessary layer over SQLite 3 itself. It assumes we are running as 32 bit int with a 64 bit long long type available. See the accompanying LICENSE file. */ /** .. module:: apsw :synopsis: Python access to SQLite database library APSW Module *********** The module is the main interface to SQLite. Methods and data on the module have process wide effects. You can instantiate the :class:`Connection` and :class:`zeroblob` objects using :meth:`Connection` and :meth:`zeroblob` respectively. 
API Reference ============= */ /* Fight with setuptools over ndebug */ #ifdef APSW_NO_NDEBUG #ifdef NDEBUG #undef NDEBUG #endif #endif #ifdef APSW_USE_SQLITE_CONFIG #include APSW_USE_SQLITE_CONFIG #endif /* SQLite amalgamation */ #ifdef APSW_USE_SQLITE_AMALGAMATION /* See SQLite ticket 2554 */ #define SQLITE_API static #define SQLITE_EXTERN static #include APSW_USE_SQLITE_AMALGAMATION #undef small /* Fight with SQLite over ndebug */ #ifdef APSW_NO_NDEBUG #ifdef NDEBUG #undef NDEBUG #endif #endif #else /* SQLite 3 headers */ #include "sqlite3.h" #endif #if SQLITE_VERSION_NUMBER < 3008011 #error Your SQLite version is too old. It must be at least 3.8.11 #endif /* system headers */ #include #include /* Get the version number */ #include "apswversion.h" /* Python headers */ #include #include #include "structmember.h" #ifdef APSW_TESTFIXTURES /* Fault injection */ #define APSW_FAULT_INJECT(name,good,bad) \ do { \ if(APSW_Should_Fault(#name)) \ { \ do { bad ; } while(0); \ } \ else \ { \ do { good ; } while(0); \ } \ } while(0) static int APSW_Should_Fault(const char *); /* Are we Python 2.x (x>=5) and doing 64 bit? - _LP64 is best way I can find as sizeof isn't valid in cpp #if */ #if PY_VERSION_HEX>=0x02050000 && defined(_LP64) && _LP64 #define APSW_TEST_LARGE_OBJECTS #endif #else /* APSW_TESTFIXTURES */ #define APSW_FAULT_INJECT(name,good,bad) \ do { good ; } while(0) #endif /* The encoding we use with SQLite. SQLite supports either utf8 or 16 bit unicode (host byte order). If the latter is used then all functions have "16" appended to their name. The encoding used also affects how strings are stored in the database. We use utf8 since it is more space efficient, and Python can't make its mind up about Unicode (it uses 16 or 32 bit unichars and often likes to use Byte Order Markers as well). 
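The space-efficiency claim in the comment above is easy to check empirically. A quick plain-Python sketch (illustration only, not part of APSW):

```python
# UTF-8 uses one byte per ASCII character, while UTF-16 uses two
# plus a two-byte Byte Order Marker when encoded by Python.
text = "hello"
utf8 = text.encode("utf-8")
utf16 = text.encode("utf-16")    # Python prepends a BOM for plain "utf-16"
assert len(utf8) == 5            # 1 byte per ASCII character
assert len(utf16) == 2 + 2 * 5   # 2-byte BOM + 2 bytes per character
```

For mostly-ASCII SQL text and identifiers this roughly halves the storage compared to the 16-bit encoding.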
*/ #define STRENCODING "utf-8" /* The module object */ static PyObject *apswmodule; /* Everything except the module itself is in separate files */ #ifdef PYPY_VERSION #include "pypycompat.c" #endif /* Augment tracebacks */ #include "traceback.c" /* Make various versions of Python code compatible with each other */ #include "pyutil.c" /* Exceptions we can raise */ #include "exceptions.c" /* various utility functions and macros */ #include "util.c" /* buffer used in statement cache */ #include "apswbuffer.c" /* The statement cache */ #include "statementcache.c" /* connections */ #include "connection.c" /* backup */ #include "backup.c" /* Zeroblob and blob */ #include "blob.c" /* cursors */ #include "cursor.c" /* virtual tables */ #include "vtable.c" /* virtual file system */ #include "vfs.c" /* MODULE METHODS */ /** .. method:: sqlitelibversion() -> string Returns the version of the SQLite library. This value is queried at run time from the library so if you use shared libraries it will be the version in the shared library. -* sqlite3_libversion */ static PyObject * getsqliteversion(void) { return MAKESTR(sqlite3_libversion()); } /** .. method:: sqlite3_sourceid() -> string Returns the exact checkin information for the SQLite 3 source being used. -* sqlite3_sourceid */ static PyObject * get_sqlite3_sourceid(void) { return MAKESTR(sqlite3_sourceid()); } /** .. method:: apswversion() -> string Returns the APSW version. */ static PyObject * getapswversion(void) { return MAKESTR(APSW_VERSION); } /** .. method:: enablesharedcache(bool) If you use the same :class:`Connection` across threads or use multiple :class:`connections ` accessing the same file, then SQLite can `share the cache between them `_. It is :ref:`not recommended ` that you use this. 
-* sqlite3_enable_shared_cache */ static PyObject * enablesharedcache(APSW_ARGUNUSED PyObject *self, PyObject *args) { int setting,res; if(!PyArg_ParseTuple(args, "i:enablesharedcache(boolean)", &setting)) return NULL; APSW_FAULT_INJECT(EnableSharedCacheFail,res=sqlite3_enable_shared_cache(setting),res=SQLITE_NOMEM); SET_EXC(res, NULL); if(res!=SQLITE_OK) return NULL; Py_RETURN_NONE; } /** .. method:: initialize() It is unlikely you will want to call this method as SQLite automatically initializes. -* sqlite3_initialize */ static PyObject * initialize(void) { int res; res=sqlite3_initialize(); APSW_FAULT_INJECT(InitializeFail, ,res=SQLITE_NOMEM); SET_EXC(res, NULL); if(res!=SQLITE_OK) return NULL; Py_RETURN_NONE; } /** .. method:: shutdown() It is unlikely you will want to call this method and there is no need to do so. It is a **really** bad idea to call it unless you are absolutely sure all :class:`connections `, :class:`blobs `, :class:`cursors `, :class:`vfs ` etc have been closed, deleted and garbage collected. -* sqlite3_shutdown */ static PyObject * sqliteshutdown(void) { int res; APSW_FAULT_INJECT(ShutdownFail, res=sqlite3_shutdown(), res=SQLITE_NOMEM); SET_EXC(res, NULL); if(res!=SQLITE_OK) return NULL; Py_RETURN_NONE; } /** .. method:: config(op[, *args]) :param op: A `configuration operation `_ :param args: Zero or more arguments as appropriate for *op* Many operations don't make sense from a Python program. The following configuration operations are supported: SQLITE_CONFIG_LOG, SQLITE_CONFIG_SINGLETHREAD, SQLITE_CONFIG_MULTITHREAD, SQLITE_CONFIG_SERIALIZED, SQLITE_CONFIG_URI, SQLITE_CONFIG_MEMSTATUS, SQLITE_CONFIG_COVERING_INDEX_SCAN, SQLITE_CONFIG_PCACHE_HDRSZ, and SQLITE_CONFIG_PMASZ. 
See :ref:`tips ` for an example of how to receive log messages (SQLITE_CONFIG_LOG) -* sqlite3_config */ #ifdef EXPERIMENTAL static PyObject *logger_cb=NULL; static void apsw_logger(void *arg, int errcode, const char *message) { PyGILState_STATE gilstate; PyObject *etype=NULL, *evalue=NULL, *etraceback=NULL; PyObject *res=NULL; PyObject *msgaspystring=NULL; gilstate=PyGILState_Ensure(); assert(arg==logger_cb); assert(arg); PyErr_Fetch(&etype, &evalue, &etraceback); msgaspystring=convertutf8string(message); if(msgaspystring) res=PyEval_CallFunction(arg, "iO", errcode, msgaspystring); if(!res) { AddTraceBackHere(__FILE__, __LINE__, "Call_Logger", "{s: O, s: i, s: s}", "logger", arg, "errcode", errcode, "message", message); apsw_write_unraiseable(NULL); } else Py_DECREF(res); Py_XDECREF(msgaspystring); if(etype || evalue || etraceback) PyErr_Restore(etype, evalue, etraceback); PyGILState_Release(gilstate); } static PyObject * config(APSW_ARGUNUSED PyObject *self, PyObject *args) { int res, optdup; long opt; if(PyTuple_GET_SIZE(args)<1 || !PyIntLong_Check(PyTuple_GET_ITEM(args, 0))) return PyErr_Format(PyExc_TypeError, "There should be at least one argument with the first being a number"); opt=PyIntLong_AsLong(PyTuple_GET_ITEM(args,0)); if(PyErr_Occurred()) return NULL; switch(opt) { case SQLITE_CONFIG_SINGLETHREAD: case SQLITE_CONFIG_MULTITHREAD: case SQLITE_CONFIG_SERIALIZED: case SQLITE_CONFIG_URI: if(!PyArg_ParseTuple(args, "i", &optdup)) return NULL; assert(opt==optdup); res=sqlite3_config( (int)opt ); break; case SQLITE_CONFIG_PCACHE_HDRSZ: { int outval=-1; if(!PyArg_ParseTuple(args, "i", &optdup)) return NULL; assert(opt==optdup); res=sqlite3_config( (int)opt, &outval ); if(res) { SET_EXC(res, NULL); return NULL; } return PyInt_FromLong(outval); } case SQLITE_CONFIG_MEMSTATUS: case SQLITE_CONFIG_COVERING_INDEX_SCAN: case SQLITE_CONFIG_PMASZ: { int boolval; if(!PyArg_ParseTuple(args, "ii", &optdup, &boolval)) return NULL; assert(opt==optdup); res=sqlite3_config( 
(int)opt, boolval); break; } case SQLITE_CONFIG_LOG: { PyObject *logger; if(!PyArg_ParseTuple(args, "iO", &optdup, &logger)) return NULL; if(logger==Py_None) { res=sqlite3_config((int)opt, NULL); if(res==SQLITE_OK) Py_CLEAR(logger_cb); } else if(!PyCallable_Check(logger)) { return PyErr_Format(PyExc_TypeError, "Logger should be None or a callable"); } else { res=sqlite3_config((int)opt, apsw_logger, logger); if(res==SQLITE_OK) { Py_CLEAR(logger_cb); logger_cb=logger; Py_INCREF(logger); } } break; } default: return PyErr_Format(PyExc_TypeError, "Unknown config type %d", (int)opt); } SET_EXC(res, NULL); if(res!=SQLITE_OK) return NULL; Py_RETURN_NONE; } #endif /* EXPERIMENTAL */ /** .. method:: memoryused() -> int Returns the amount of memory SQLite is currently using. .. seealso:: :meth:`status` -* sqlite3_memory_used */ static PyObject* memoryused(void) { return PyLong_FromLongLong(sqlite3_memory_used()); } /** .. method:: memoryhighwater(reset=False) -> int Returns the maximum amount of memory SQLite has used. If *reset* is True then the high water mark is reset to the current value. .. seealso:: :meth:`status` -* sqlite3_memory_highwater */ static PyObject* memoryhighwater(APSW_ARGUNUSED PyObject *self, PyObject *args) { int reset=0; if(!PyArg_ParseTuple(args, "|i:memoryhighwater(reset=False)", &reset)) return NULL; return PyLong_FromLongLong(sqlite3_memory_highwater(reset)); } /** .. method:: softheaplimit(bytes) -> oldlimit Requests SQLite try to keep memory usage below *bytes* bytes and returns the previous setting. -* sqlite3_soft_heap_limit64 */ static PyObject* softheaplimit(APSW_ARGUNUSED PyObject *self, PyObject *args) { long long limit, oldlimit; if(!PyArg_ParseTuple(args, "L", &limit)) return NULL; oldlimit=sqlite3_soft_heap_limit64(limit); return PyLong_FromLongLong(oldlimit); } /** .. method:: randomness(bytes) -> data Gets random data from SQLite's random number generator. 
:param bytes: How many bytes to return :rtype: (Python 2) string, (Python 3) bytes -* sqlite3_randomness */ static PyObject* randomness(APSW_ARGUNUSED PyObject *self, PyObject *args) { int amount; PyObject *bytes; if(!PyArg_ParseTuple(args, "i", &amount)) return NULL; if(amount<0) return PyErr_Format(PyExc_ValueError, "Can't have negative number of bytes"); bytes=PyBytes_FromStringAndSize(NULL, amount); if(!bytes) return bytes; sqlite3_randomness(amount, PyBytes_AS_STRING(bytes)); return bytes; } /** .. method:: releasememory(bytes) -> int Requests SQLite try to free *bytes* bytes of memory. Returns how many bytes were freed. -* sqlite3_release_memory */ static PyObject* releasememory(APSW_ARGUNUSED PyObject *self, PyObject *args) { int amount; if(!PyArg_ParseTuple(args, "i", &amount)) return NULL; return PyInt_FromLong(sqlite3_release_memory(amount)); } /** .. method:: status(op, reset=False) -> (int, int) Returns current and highwater measurements. :param op: A `status parameter `_ :param reset: If *True* then the highwater is set to the current value :returns: A tuple of current value and highwater value .. seealso:: * :ref:`Status example ` -* sqlite3_status64 */ static PyObject * status(APSW_ARGUNUSED PyObject *self, PyObject *args) { int res, op, reset=0; sqlite3_int64 current=0, highwater=0; if(!PyArg_ParseTuple(args, "i|i:status(op, reset=False)", &op, &reset)) return NULL; res=sqlite3_status64(op, &current, &highwater, reset); SET_EXC(res, NULL); if(res!=SQLITE_OK) return NULL; return Py_BuildValue("(LL)", current, highwater); } /** .. method:: vfsnames() -> list(string) Returns a list of the currently installed :ref:`vfs `. The first item in the list is the default vfs. 
*/ static PyObject * vfsnames(APSW_ARGUNUSED PyObject *self) { PyObject *result=NULL, *str=NULL; sqlite3_vfs *vfs=sqlite3_vfs_find(0); result=PyList_New(0); if(!result) goto error; while(vfs) { APSW_FAULT_INJECT(vfsnamesfails, str=convertutf8string(vfs->zName), str=PyErr_NoMemory()); if(!str) goto error; if(PyList_Append(result, str)) goto error; Py_DECREF(str); vfs=vfs->pNext; } return result; error: Py_XDECREF(str); Py_XDECREF(result); return NULL; } /** .. method:: exceptionfor(int) -> Exception If you would like to raise an exception that corresponds to a particular SQLite `error code `_ then call this function. It also understands `extended error codes `_. For example to raise `SQLITE_IOERR_ACCESS `_:: raise apsw.exceptionfor(apsw.SQLITE_IOERR_ACCESS) */ static PyObject * getapswexceptionfor(APSW_ARGUNUSED PyObject *self, PyObject *pycode) { int code, i; PyObject *result=NULL; if(!PyIntLong_Check(pycode)) return PyErr_Format(PyExc_TypeError, "Argument should be an integer"); code=PyIntLong_AsLong(pycode); if(PyErr_Occurred()) return NULL; for(i=0;exc_descriptors[i].name;i++) if (exc_descriptors[i].code==(code&0xff)) { result=PyObject_CallObject(exc_descriptors[i].cls, NULL); if(!result) return result; break; } if(!result) return PyErr_Format(PyExc_ValueError, "%d is not a known error code", code); PyObject_SetAttrString(result, "extendedresult", PyInt_FromLong(code)); PyObject_SetAttrString(result, "result", PyInt_FromLong(code&0xff)); return result; } /** .. method:: complete(statement) -> bool Returns True if the input string comprises one or more complete SQL statements by looking for an unquoted trailing semi-colon. An example use would be if you were prompting the user for SQL statements and needed to know if you had a whole statement, or needed to ask for another line:: statement=raw_input("SQL> ") while not apsw.complete(statement): more=raw_input(" .. 
") statement=statement+"\n"+more -* sqlite3_complete */ static PyObject * apswcomplete(APSW_ARGUNUSED Connection *self, PyObject *args) { char *statements=NULL; int res; if(!PyArg_ParseTuple(args, "es:complete(statement)", STRENCODING, &statements)) return NULL; res=sqlite3_complete(statements); PyMem_Free(statements); if(res) { Py_INCREF(Py_True); return Py_True; } Py_INCREF(Py_False); return Py_False; } #if defined(APSW_TESTFIXTURES) && defined(APSW_USE_SQLITE_AMALGAMATION) /* a routine to reset the random number generator so that we can test xRandomness */ static PyObject * apsw_test_reset_rng(APSW_ARGUNUSED PyObject *self) { /* See sqlite3PrngResetState in sqlite's random.c */ GLOBAL(struct sqlite3PrngType, sqlite3Prng).isInit = 0; Py_RETURN_NONE; } #endif #ifdef APSW_TESTFIXTURES /* xGetLastError isn't actually called anywhere by SQLite so add a manual way of doing so https://sqlite.org/cvstrac/tktview?tn=3337 */ static PyObject * apsw_call_xGetLastError(APSW_ARGUNUSED PyObject *self, PyObject *args) { char *vfsname; int bufsize; PyObject *resultbuffer=NULL; sqlite3_vfs *vfs; int res=-1; if(!PyArg_ParseTuple(args, "esi", STRENCODING, &vfsname, &bufsize)) return NULL; vfs=sqlite3_vfs_find(vfsname); if(!vfs) goto finally; resultbuffer=PyBytes_FromStringAndSize(NULL, bufsize); if(!resultbuffer) goto finally; memset(PyBytes_AS_STRING(resultbuffer), 0, PyBytes_GET_SIZE(resultbuffer)); res=vfs->xGetLastError(vfs, bufsize, PyBytes_AS_STRING(resultbuffer)); finally: if(vfsname) PyMem_Free(vfsname); return resultbuffer?Py_BuildValue("Ni", resultbuffer, res):NULL; } static PyObject * apsw_fini(APSW_ARGUNUSED PyObject *self) { APSWBuffer_fini(); Py_XDECREF(tls_errmsg); Py_RETURN_NONE; } #endif #ifdef APSW_FORK_CHECKER /* We want to verify that SQLite objects are not used across forks. One way is to modify all calls to SQLite to do the checking but this is a pain as well as a performance hit. 
Instead we use the approach of providing an alternative mutex implementation since pretty much every SQLite API call takes and releases a mutex. Our diverted functions check the process id on calls and set the process id on allocating a mutex. We have to avoid the checks for the static mutexes. This code also doesn't bother with some things like checking malloc results. It is intended to only be used to verify correctness with test suites. The code that sets Python exceptions is also very brute force and is likely to cause problems. That however is a good thing - you will really be sure there is a problem! */ typedef struct { pid_t pid; sqlite3_mutex *underlying_mutex; } apsw_mutex; static apsw_mutex* apsw_mutexes[]= { NULL, /* not used - fast */ NULL, /* not used - recursive */ NULL, /* from this point on corresponds to the various static mutexes */ NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL }; static sqlite3_mutex_methods apsw_orig_mutex_methods; static int apsw_xMutexInit(void) { return apsw_orig_mutex_methods.xMutexInit(); } static int apsw_xMutexEnd(void) { return apsw_orig_mutex_methods.xMutexEnd(); } static sqlite3_mutex* apsw_xMutexAlloc(int which) { switch(which) { case SQLITE_MUTEX_FAST: case SQLITE_MUTEX_RECURSIVE: { apsw_mutex *am; sqlite3_mutex *m=apsw_orig_mutex_methods.xMutexAlloc(which); if(!m) return m; am=malloc(sizeof(apsw_mutex)); am->pid=getpid(); am->underlying_mutex=m; return (sqlite3_mutex*)am; } default: /* verify we have space */ assert(which < (int)(sizeof(apsw_mutexes)/sizeof(apsw_mutexes[0]))); /* fill in if missing */ if(!apsw_mutexes[which]) { apsw_mutexes[which]=malloc(sizeof(apsw_mutex)); apsw_mutexes[which]->pid=0; apsw_mutexes[which]->underlying_mutex=apsw_orig_mutex_methods.xMutexAlloc(which); } return (sqlite3_mutex*)apsw_mutexes[which]; } } static int apsw_check_mutex(apsw_mutex *am) { if(am->pid && am->pid!=getpid()) { PyGILState_STATE gilstate; gilstate=PyGILState_Ensure(); PyErr_Format(ExcForkingViolation, "SQLite object allocated in one process is being used in another (across a fork)"); apsw_write_unraiseable(NULL); PyErr_Format(ExcForkingViolation, "SQLite object allocated in one process is 
being used in another (across a fork)"); PyGILState_Release(gilstate); return SQLITE_MISUSE; } return SQLITE_OK; } static void apsw_xMutexFree(sqlite3_mutex *mutex) { apsw_mutex* am=(apsw_mutex*)mutex; apsw_check_mutex(am); apsw_orig_mutex_methods.xMutexFree(am->underlying_mutex); } static void apsw_xMutexEnter(sqlite3_mutex *mutex) { apsw_mutex* am=(apsw_mutex*)mutex; apsw_check_mutex(am); apsw_orig_mutex_methods.xMutexEnter(am->underlying_mutex); } static int apsw_xMutexTry(sqlite3_mutex *mutex) { apsw_mutex* am=(apsw_mutex*)mutex; if(apsw_check_mutex(am)) return SQLITE_MISUSE; return apsw_orig_mutex_methods.xMutexTry(am->underlying_mutex); } static void apsw_xMutexLeave(sqlite3_mutex *mutex) { apsw_mutex* am=(apsw_mutex*)mutex; apsw_check_mutex(am); apsw_orig_mutex_methods.xMutexLeave(am->underlying_mutex); } #ifdef SQLITE_DEBUG static int apsw_xMutexHeld(sqlite3_mutex *mutex) { apsw_mutex* am=(apsw_mutex*)mutex; apsw_check_mutex(am); return apsw_orig_mutex_methods.xMutexHeld(am->underlying_mutex); } static int apsw_xMutexNotheld(sqlite3_mutex *mutex) { apsw_mutex* am=(apsw_mutex*)mutex; apsw_check_mutex(am); return apsw_orig_mutex_methods.xMutexNotheld(am->underlying_mutex); } #endif static sqlite3_mutex_methods apsw_mutex_methods= { apsw_xMutexInit, apsw_xMutexEnd, apsw_xMutexAlloc, apsw_xMutexFree, apsw_xMutexEnter, apsw_xMutexTry, apsw_xMutexLeave, #ifdef SQLITE_DEBUG apsw_xMutexHeld, apsw_xMutexNotheld #else 0, 0 #endif }; /** .. method:: fork_checker() **Note** This method is not available on Windows as it does not support the fork system call. SQLite does not allow the use of database connections across `forked `__ processes (see the `SQLite FAQ Q6 `__). (Forking creates a child process that is a duplicate of the parent including the state of all data structures in the program. 
If you do this to SQLite then parent and child would both consider themselves owners of open databases and silently corrupt each other's work and interfere with each other's locks.) One example of how you may end up using fork is if you use the `multiprocessing module `__ which uses fork to make child processes. If you do use fork or multiprocessing on a platform that supports fork then you **must** ensure database connections and their objects (cursors, backup, blobs etc) are not used in the parent process, or are all closed before calling fork or starting a `Process `__. (Note you must call close to ensure the underlying SQLite objects are closed. It is also a good idea to call `gc.collect(2) `__ to ensure anything you may have missed is also deallocated.) Once you run this method, extra checking code is inserted into SQLite's mutex operations (at a very small performance penalty) that verifies objects are not used across processes. You will get a :exc:`ForkingViolationError` if you do so. Note that due to the way Python's internals work, the exception will be delivered to `sys.excepthook` in addition to the normal exception mechanisms and may be reported by Python after the line where the issue actually arose. (Destructors of objects you didn't close also run between lines.) You should only call this method as the first line after importing APSW, as it has to shutdown and re-initialize SQLite. If you have any SQLite objects already allocated when calling the method then the program will later crash. The recommended use is to use the fork checking as part of your test suite. 
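The core of the check can be sketched in plain Python (the class and method names here are illustrative only, not part of APSW): an object remembers the process id that allocated it and refuses to operate from any other process.

```python
import os

class ForkCheckedHandle:
    """Illustrative sketch of the fork check: remember the owning
    process id and refuse to be used from any other process."""

    def __init__(self):
        self._owner_pid = os.getpid()

    def use(self):
        # In APSW this comparison happens inside SQLite's mutex
        # enter/leave hooks, so nearly every API call is covered.
        if os.getpid() != self._owner_pid:
            raise RuntimeError(
                "SQLite object allocated in one process is being "
                "used in another (across a fork)")
        return "ok"
```

Because the real check lives in the mutex layer rather than in every wrapped API, the performance cost is a single pid comparison per mutex operation.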
*/ static PyObject * apsw_fork_checker(APSW_ARGUNUSED PyObject *self) { int rc; /* ignore multiple attempts to use this routine */ if(apsw_orig_mutex_methods.xMutexInit) goto ok; /* Ensure mutex methods available and installed */ rc=sqlite3_initialize(); if(rc) goto fail; /* then do a shutdown as we can't get or change mutex while sqlite is running */ rc=sqlite3_shutdown(); if(rc) goto fail; rc=sqlite3_config(SQLITE_CONFIG_GETMUTEX, &apsw_orig_mutex_methods); if(rc) goto fail; rc=sqlite3_config(SQLITE_CONFIG_MUTEX, &apsw_mutex_methods); if(rc) goto fail; /* start back up again */ rc=sqlite3_initialize(); if(rc) goto fail; ok: Py_RETURN_NONE; fail: assert(rc!=SQLITE_OK); SET_EXC(rc, NULL); return NULL; } #endif /** .. attribute:: compile_options A tuple of the options used to compile SQLite. For example it will be something like this:: ('ENABLE_LOCKING_STYLE=0', 'TEMP_STORE=1', 'THREADSAFE=1') -* sqlite3_compileoption_get */ static PyObject* get_compile_options(void) { int i, count=0; const char *opt; PyObject *tmpstring; PyObject *res=0; for(i=0;;i++) { opt=sqlite3_compileoption_get(i); /* No PYSQLITE_CALL needed */ if(!opt) break; } count=i; res=PyTuple_New(count); if(!res) goto fail; for(i=0;i<count;i++) { opt=sqlite3_compileoption_get(i); assert(opt); tmpstring=MAKESTR(opt); if(!tmpstring) goto fail; PyTuple_SET_ITEM(res, i, tmpstring); } return res; fail: Py_XDECREF(res); return NULL; } /** .. method:: format_sql_value(value) -> string Returns a Python string (unicode) representing the supplied value in SQL syntax. Python 2 note: You must supply unicode strings not plain strings. 
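For reference, the quoting rules implemented below can be sketched in plain Python (a hypothetical helper, ignoring the ``'||X'00'||'`` splicing the C code additionally performs for embedded NUL characters):

```python
def sql_literal(value):
    # Hypothetical pure-Python sketch of format_sql_value's rules:
    # None -> NULL, numbers as-is, text in single quotes with any
    # embedded quote doubled, blobs rendered as X'hex'.
    if value is None:
        return "NULL"
    if isinstance(value, bytes):
        return "X'" + value.hex().upper() + "'"
    if isinstance(value, (int, float)):
        return str(value)
    if isinstance(value, str):
        return "'" + value.replace("'", "''") + "'"
    raise TypeError("Unsupported type")
```

For example ``sql_literal("it's")`` yields ``'it''s'`` and ``sql_literal(b"\x00\xff")`` yields ``X'00FF'``.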
*/ static PyObject* formatsqlvalue(APSW_ARGUNUSED PyObject *self, PyObject *value) { /* NULL/None */ if(value==Py_None) { static PyObject *nullstr; if(!nullstr) nullstr=PyObject_Unicode(MAKESTR("NULL")); Py_INCREF(nullstr); return nullstr; } /* Integer/Long/Float */ if(PyIntLong_Check(value) /* ::TODO:: verify L is not appended in py 2.3 and similar vintage */ || PyFloat_Check(value)) return PyObject_Unicode(value); #if PY_MAJOR_VERSION<3 /* We don't support plain strings only unicode */ if(PyString_Check(value)) return PyErr_Format(PyExc_TypeError, "Old plain strings not supported - use unicode"); #endif /* Unicode */ if(PyUnicode_Check(value)) { /* We optimize for the default case of there being no nuls or single quotes */ PyObject *unires; Py_UNICODE *res; Py_ssize_t left; unires=PyUnicode_FromUnicode(NULL, PyUnicode_GET_SIZE(value)+2); if(!unires) return NULL; APSW_UNICODE_READY(unires, return NULL); res=PyUnicode_AS_UNICODE(unires); *res++='\''; memcpy(res, PyUnicode_AS_UNICODE(value), PyUnicode_GET_DATA_SIZE(value)); res+=PyUnicode_GET_SIZE(value); *res++='\''; /* Now look for nuls and single quotes */ res=PyUnicode_AS_UNICODE(unires)+1; left=PyUnicode_GET_SIZE(value); for(;left;left--,res++) { if(*res=='\'' || *res==0) { /* we add one char for ' and 10 for null */ const int moveamount=*res=='\''?1:10; int retval; APSW_FAULT_INJECT(FormatSQLValueResizeFails, retval=PyUnicode_Resize(&unires, PyUnicode_GET_SIZE(unires)+moveamount), retval=PyUnicode_Resize(&unires, -17) ); if(retval==-1) { Py_DECREF(unires); return NULL; } res=PyUnicode_AS_UNICODE(unires)+(PyUnicode_GET_SIZE(unires)-left-moveamount-1); memmove(res+moveamount, res, sizeof(Py_UNICODE)*(left+1)); if(*res==0) { *res++='\''; *res++='|'; *res++='|'; *res++='X'; *res++='\''; *res++='0'; *res++='0'; *res++='\''; *res++='|'; *res++='|'; *res='\''; } else res++; } } return unires; } /* Blob */ if( #if PY_MAJOR_VERSION<3 PyBuffer_Check(value) #else PyBytes_Check(value) #endif ) { const unsigned char 
*buffer; Py_ssize_t buflen; int asrb; PyObject *unires; Py_UNICODE *res; #define _HEXDIGITS asrb=PyObject_AsReadBuffer(value, (const void**)&buffer, &buflen); APSW_FAULT_INJECT(FormatSQLValueAsReadBufferFails, , (PyErr_NoMemory(), asrb=-1)); if(asrb!=0) return NULL; /* 3 is X, ', ' */ APSW_FAULT_INJECT(FormatSQLValuePyUnicodeFromUnicodeFails, unires=PyUnicode_FromUnicode(NULL, buflen*2+3), unires=PyErr_NoMemory()); if(!unires) return NULL; APSW_UNICODE_READY(unires, return NULL); res=PyUnicode_AS_UNICODE(unires); *res++='X'; *res++='\''; /* About the billionth time I have written a hex conversion routine */ for(;buflen;buflen--) { *res++="0123456789ABCDEF"[(*buffer)>>4]; *res++="0123456789ABCDEF"[(*buffer++)&0x0f]; } *res++='\''; return unires; } return PyErr_Format(PyExc_TypeError, "Unsupported type"); } /** .. automethod:: main() Sphinx automethod is too stupid, so this text is replaced by my code with the actual docstring from tools.py:main(). */ /** .. method:: log(level, message) Calls the SQLite logging interface. Note that you must format the message before passing it to this method:: apsw.log(apsw.SQLITE_NOMEM, "Need %d bytes of memory" % (1234,)) See :ref:`tips ` for an example of how to receive log messages. 
-* sqlite3_log */ static PyObject * apsw_log(APSW_ARGUNUSED PyObject *self, PyObject *args) { int level; char *message; if(!PyArg_ParseTuple(args, "ies", &level, STRENCODING, &message)) return NULL; sqlite3_log(level, "%s", message); /* PYSQLITE_CALL not needed */ PyMem_Free(message); Py_RETURN_NONE; } static PyMethodDef module_methods[] = { {"sqlite3_sourceid", (PyCFunction)get_sqlite3_sourceid, METH_NOARGS, "Return the source identification of the SQLite library"}, {"sqlitelibversion", (PyCFunction)getsqliteversion, METH_NOARGS, "Return the version of the SQLite library"}, {"apswversion", (PyCFunction)getapswversion, METH_NOARGS, "Return the version of the APSW wrapper"}, {"vfsnames", (PyCFunction)vfsnames, METH_NOARGS, "Returns list of vfs names"}, {"enablesharedcache", (PyCFunction)enablesharedcache, METH_VARARGS, "Sets shared cache semantics for this thread"}, {"initialize", (PyCFunction)initialize, METH_NOARGS, "Initialize SQLite library"}, {"shutdown", (PyCFunction)sqliteshutdown, METH_NOARGS, "Shutdown SQLite library"}, {"format_sql_value", (PyCFunction)formatsqlvalue, METH_O, "Formats a SQL value as a string"}, #ifdef EXPERIMENTAL {"config", (PyCFunction)config, METH_VARARGS, "Calls sqlite3_config"}, {"log", (PyCFunction)apsw_log, METH_VARARGS, "Calls sqlite3_log"}, #endif {"memoryused", (PyCFunction)memoryused, METH_NOARGS, "Current SQLite memory in use"}, {"memoryhighwater", (PyCFunction)memoryhighwater, METH_VARARGS, "Most amount of memory used"}, {"status", (PyCFunction)status, METH_VARARGS, "Gets various SQLite counters"}, {"softheaplimit", (PyCFunction)softheaplimit, METH_VARARGS, "Sets soft limit on SQLite memory usage"}, {"releasememory", (PyCFunction)releasememory, METH_VARARGS, "Attempts to free specified amount of memory"}, {"randomness", (PyCFunction)randomness, METH_VARARGS, "Obtains random bytes"}, {"exceptionfor", (PyCFunction)getapswexceptionfor, METH_O, "Returns exception instance corresponding to supplied sqlite error code"}, {"complete", 
(PyCFunction)apswcomplete, METH_VARARGS, "Tests if a complete SQLite statement has been supplied (ie ends with ;)"}, #if defined(APSW_TESTFIXTURES) && defined(APSW_USE_SQLITE_AMALGAMATION) {"test_reset_rng", (PyCFunction)apsw_test_reset_rng, METH_NOARGS, "Resets random number generator so we can test vfs xRandomness"}, #endif #ifdef APSW_TESTFIXTURES {"test_call_xGetLastError", (PyCFunction)apsw_call_xGetLastError, METH_VARARGS, "Calls xGetLastError routine"}, {"_fini", (PyCFunction)apsw_fini, METH_NOARGS, "Frees all caches and recycle lists"}, #endif #ifdef APSW_FORK_CHECKER {"fork_checker", (PyCFunction)apsw_fork_checker, METH_NOARGS, "Installs fork checking code"}, #endif {0, 0, 0, 0} /* Sentinel */ }; static void add_shell(PyObject *module); #if PY_MAJOR_VERSION >= 3 static struct PyModuleDef apswmoduledef={ PyModuleDef_HEAD_INIT, "apsw", NULL, -1, module_methods, 0, 0, 0, 0 }; #endif PyMODINIT_FUNC #if PY_MAJOR_VERSION < 3 initapsw(void) #else PyInit_apsw(void) #endif { PyObject *m=NULL; PyObject *thedict=NULL; const char *mapping_name=NULL; PyObject *hooks; unsigned int i; assert(sizeof(int)==4); /* we expect 32 bit ints */ assert(sizeof(long long)==8); /* we expect 64 bit long long */ /* Check SQLite was compiled with thread safety */ if(!sqlite3_threadsafe()) { PyErr_Format(PyExc_EnvironmentError, "SQLite was compiled without thread safety and cannot be used."); goto fail; } if (PyType_Ready(&ConnectionType) < 0 || PyType_Ready(&APSWCursorType) < 0 || PyType_Ready(&ZeroBlobBindType) <0 || PyType_Ready(&APSWBlobType) <0 || PyType_Ready(&APSWVFSType) <0 || PyType_Ready(&APSWVFSFileType) <0 || PyType_Ready(&APSWURIFilenameType) <0 || PyType_Ready(&APSWStatementType) <0 || PyType_Ready(&APSWBufferType) <0 || PyType_Ready(&FunctionCBInfoType) <0 #ifdef EXPERIMENTAL || PyType_Ready(&APSWBackupType) <0 #endif ) goto fail; /* ensure threads are available */ PyEval_InitThreads(); #if PY_MAJOR_VERSION < 3 m = apswmodule = Py_InitModule3("apsw", module_methods, 
"Another Python SQLite Wrapper."); #else m = apswmodule = PyModule_Create(&apswmoduledef); #endif if (m == NULL) goto fail; Py_INCREF(m); if(init_exceptions(m)) goto fail; Py_INCREF(&ConnectionType); PyModule_AddObject(m, "Connection", (PyObject *)&ConnectionType); /* we don't add cursor, blob or backup to the module since users shouldn't be able to instantiate them directly */ Py_INCREF(&ZeroBlobBindType); PyModule_AddObject(m, "zeroblob", (PyObject *)&ZeroBlobBindType); Py_INCREF(&APSWVFSType); PyModule_AddObject(m, "VFS", (PyObject*)&APSWVFSType); Py_INCREF(&APSWVFSFileType); PyModule_AddObject(m, "VFSFile", (PyObject*)&APSWVFSFileType); Py_INCREF(&APSWURIFilenameType); PyModule_AddObject(m, "URIFilename", (PyObject*)&APSWURIFilenameType); /** .. attribute:: connection_hooks The purpose of the hooks is to allow the easy registration of :meth:`functions `, :ref:`virtual tables ` or similar items with each :class:`Connection` as it is created. The default value is an empty list. Whenever a Connection is created, each item in apsw.connection_hooks is invoked with a single parameter being the new Connection object. If the hook raises an exception then the creation of the Connection fails. If you wanted to store your own defined functions in the database then you could define a hook that looked in the relevant tables, got the Python text and turned it into the functions. */ hooks=PyList_New(0); if(!hooks) goto fail; PyModule_AddObject(m, "connection_hooks", hooks); /** .. data:: SQLITE_VERSION_NUMBER The integer version number of SQLite that APSW was compiled against. For example SQLite 3.6.4 will have the value *3006004*. This number may be different than the actual library in use if the library is shared and has been updated. Call :meth:`sqlitelibversion` to get the actual library version. */ PyModule_AddIntConstant(m, "SQLITE_VERSION_NUMBER", SQLITE_VERSION_NUMBER); /** .. 
attribute:: using_amalgamation If True then `SQLite amalgamation `__ is in use (statically compiled into APSW). Using the amalgamation means that SQLite shared libraries are not used and will not affect your code. */ #ifdef APSW_USE_SQLITE_AMALGAMATION Py_INCREF(Py_True); PyModule_AddObject(m, "using_amalgamation", Py_True); #else Py_INCREF(Py_False); PyModule_AddObject(m, "using_amalgamation", Py_False); #endif /** .. _sqliteconstants: SQLite constants ================ SQLite has `many constants `_ used in various interfaces. To use a constant such as :const:`SQLITE_OK`, just use ``apsw.SQLITE_OK``. The same values can be used in different contexts. For example :const:`SQLITE_OK` and :const:`SQLITE_CREATE_INDEX` both have a value of zero. For each group of constants there is also a mapping (dict) available that you can supply a string to and get the corresponding numeric value, or supply a numeric value and get the corresponding string. These can help improve diagnostics/logging, calling other modules etc. 
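The two-way constants mapping described above (each group's dict accepts either a symbolic name or its numeric value) can be sketched in plain Python. The `make_mapping` helper below is illustrative only — it is not part of APSW — though the authorizer return values shown are the real SQLite constants:

```python
def make_mapping(**constants):
    # Build a dict mapping each name to its value and each value back
    # to its name, mirroring groups such as mapping_authorizer_return.
    mapping = {}
    for name, value in constants.items():
        mapping[name] = value   # string -> int
        mapping[value] = name   # int -> string
    return mapping

# Real SQLite authorizer return codes
mapping_authorizer_return = make_mapping(
    SQLITE_OK=0, SQLITE_DENY=1, SQLITE_IGNORE=2)

print(mapping_authorizer_return["SQLITE_DENY"])  # 1
print(mapping_authorizer_return[2])              # SQLITE_IGNORE
```

Note that when two names share a value (as SQLITE_OK and SQLITE_CREATE_INDEX do, both being zero), only one name can win the reverse lookup, which is why context matters when converting numbers back to strings.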
For example:: apsw.mapping_authorizer_function["SQLITE_READ"] == 20 apsw.mapping_authorizer_function[20] == "SQLITE_READ" */ /* add in some constants and also put them in a corresponding mapping dictionary */ { /* sentinel should be a number that doesn't exist */ #define SENTINEL -786343 #define DICT(n) {n, SENTINEL} #define END {NULL, 0} #define ADDINT(n) {#n, n} static const struct { const char *name; int value; } integers[]={ DICT("mapping_authorizer_return"), ADDINT(SQLITE_DENY), ADDINT(SQLITE_IGNORE), ADDINT(SQLITE_OK), END, DICT("mapping_authorizer_function"), ADDINT(SQLITE_CREATE_INDEX), ADDINT(SQLITE_CREATE_TABLE), ADDINT(SQLITE_CREATE_TEMP_INDEX), ADDINT(SQLITE_CREATE_TEMP_TABLE), ADDINT(SQLITE_CREATE_TEMP_TRIGGER), ADDINT(SQLITE_CREATE_TEMP_VIEW), ADDINT(SQLITE_CREATE_TRIGGER), ADDINT(SQLITE_CREATE_VIEW), ADDINT(SQLITE_DELETE), ADDINT(SQLITE_DROP_INDEX), ADDINT(SQLITE_DROP_TABLE), ADDINT(SQLITE_DROP_TEMP_INDEX), ADDINT(SQLITE_DROP_TEMP_TABLE), ADDINT(SQLITE_DROP_TEMP_TRIGGER), ADDINT(SQLITE_DROP_TEMP_VIEW), ADDINT(SQLITE_DROP_TRIGGER), ADDINT(SQLITE_DROP_VIEW), ADDINT(SQLITE_INSERT), ADDINT(SQLITE_PRAGMA), ADDINT(SQLITE_READ), ADDINT(SQLITE_SELECT), ADDINT(SQLITE_TRANSACTION), ADDINT(SQLITE_UPDATE), ADDINT(SQLITE_ATTACH), ADDINT(SQLITE_DETACH), ADDINT(SQLITE_ALTER_TABLE), ADDINT(SQLITE_REINDEX), ADDINT(SQLITE_COPY), ADDINT(SQLITE_ANALYZE), ADDINT(SQLITE_CREATE_VTABLE), ADDINT(SQLITE_DROP_VTABLE), ADDINT(SQLITE_FUNCTION), ADDINT(SQLITE_SAVEPOINT), ADDINT(SQLITE_RECURSIVE), END, /* vtable best index constraints */ DICT("mapping_bestindex_constraints"), ADDINT(SQLITE_INDEX_CONSTRAINT_EQ), ADDINT(SQLITE_INDEX_CONSTRAINT_GT), ADDINT(SQLITE_INDEX_CONSTRAINT_LE), ADDINT(SQLITE_INDEX_CONSTRAINT_LT), ADDINT(SQLITE_INDEX_CONSTRAINT_GE), ADDINT(SQLITE_INDEX_CONSTRAINT_MATCH), END, /* extendended result codes */ DICT("mapping_extended_result_codes"), ADDINT(SQLITE_IOERR_READ), ADDINT(SQLITE_IOERR_SHORT_READ), ADDINT(SQLITE_IOERR_WRITE), ADDINT(SQLITE_IOERR_FSYNC), 
ADDINT(SQLITE_IOERR_DIR_FSYNC), ADDINT(SQLITE_IOERR_TRUNCATE), ADDINT(SQLITE_IOERR_FSTAT), ADDINT(SQLITE_IOERR_UNLOCK), ADDINT(SQLITE_IOERR_RDLOCK), ADDINT(SQLITE_IOERR_DELETE), ADDINT(SQLITE_IOERR_BLOCKED), ADDINT(SQLITE_IOERR_NOMEM), ADDINT(SQLITE_IOERR_ACCESS), ADDINT(SQLITE_IOERR_CHECKRESERVEDLOCK), ADDINT(SQLITE_IOERR_LOCK), ADDINT(SQLITE_IOERR_CLOSE), ADDINT(SQLITE_IOERR_DIR_CLOSE), ADDINT(SQLITE_LOCKED_SHAREDCACHE), ADDINT(SQLITE_BUSY_RECOVERY), ADDINT(SQLITE_CANTOPEN_NOTEMPDIR), ADDINT(SQLITE_IOERR_SHMOPEN), ADDINT(SQLITE_IOERR_SHMSIZE), ADDINT(SQLITE_IOERR_SHMLOCK), ADDINT(SQLITE_CORRUPT_VTAB), ADDINT(SQLITE_IOERR_SEEK), ADDINT(SQLITE_IOERR_SHMMAP), ADDINT(SQLITE_READONLY_CANTLOCK), ADDINT(SQLITE_READONLY_RECOVERY), ADDINT(SQLITE_ABORT_ROLLBACK), ADDINT(SQLITE_CANTOPEN_ISDIR), ADDINT(SQLITE_CANTOPEN_FULLPATH), ADDINT(SQLITE_IOERR_DELETE_NOENT), ADDINT(SQLITE_CONSTRAINT_CHECK), ADDINT(SQLITE_CONSTRAINT_COMMITHOOK), ADDINT(SQLITE_CONSTRAINT_FOREIGNKEY), ADDINT(SQLITE_CONSTRAINT_FUNCTION), ADDINT(SQLITE_CONSTRAINT_NOTNULL), ADDINT(SQLITE_CONSTRAINT_PRIMARYKEY), ADDINT(SQLITE_CONSTRAINT_TRIGGER), ADDINT(SQLITE_CONSTRAINT_UNIQUE), ADDINT(SQLITE_CONSTRAINT_VTAB), ADDINT(SQLITE_READONLY_ROLLBACK), ADDINT(SQLITE_IOERR_MMAP), ADDINT(SQLITE_NOTICE_RECOVER_ROLLBACK), ADDINT(SQLITE_NOTICE_RECOVER_WAL), ADDINT(SQLITE_BUSY_SNAPSHOT), ADDINT(SQLITE_IOERR_GETTEMPPATH), ADDINT(SQLITE_WARNING_AUTOINDEX), ADDINT(SQLITE_CANTOPEN_CONVPATH), ADDINT(SQLITE_IOERR_CONVPATH), ADDINT(SQLITE_CONSTRAINT_ROWID), ADDINT(SQLITE_READONLY_DBMOVED), ADDINT(SQLITE_AUTH_USER), END, /* error codes */ DICT("mapping_result_codes"), ADDINT(SQLITE_OK), ADDINT(SQLITE_ERROR), ADDINT(SQLITE_INTERNAL), ADDINT(SQLITE_PERM), ADDINT(SQLITE_ABORT), ADDINT(SQLITE_BUSY), ADDINT(SQLITE_LOCKED), ADDINT(SQLITE_NOMEM), ADDINT(SQLITE_READONLY), ADDINT(SQLITE_INTERRUPT), ADDINT(SQLITE_IOERR), ADDINT(SQLITE_CORRUPT), ADDINT(SQLITE_FULL), ADDINT(SQLITE_CANTOPEN), ADDINT(SQLITE_PROTOCOL), ADDINT(SQLITE_EMPTY), 
ADDINT(SQLITE_SCHEMA), ADDINT(SQLITE_CONSTRAINT), ADDINT(SQLITE_MISMATCH), ADDINT(SQLITE_MISUSE), ADDINT(SQLITE_NOLFS), ADDINT(SQLITE_AUTH), ADDINT(SQLITE_FORMAT), ADDINT(SQLITE_RANGE), ADDINT(SQLITE_NOTADB), ADDINT(SQLITE_NOTFOUND), ADDINT(SQLITE_TOOBIG), ADDINT(SQLITE_NOTICE), ADDINT(SQLITE_WARNING), /* you can't get these from apsw code but present for completeness */ ADDINT(SQLITE_DONE), ADDINT(SQLITE_ROW), END, /* open flags */ DICT("mapping_open_flags"), ADDINT(SQLITE_OPEN_READONLY), ADDINT(SQLITE_OPEN_READWRITE), ADDINT(SQLITE_OPEN_CREATE), ADDINT(SQLITE_OPEN_DELETEONCLOSE), ADDINT(SQLITE_OPEN_EXCLUSIVE), ADDINT(SQLITE_OPEN_MAIN_DB), ADDINT(SQLITE_OPEN_TEMP_DB), ADDINT(SQLITE_OPEN_TRANSIENT_DB), ADDINT(SQLITE_OPEN_MAIN_JOURNAL), ADDINT(SQLITE_OPEN_TEMP_JOURNAL), ADDINT(SQLITE_OPEN_SUBJOURNAL), ADDINT(SQLITE_OPEN_MASTER_JOURNAL), ADDINT(SQLITE_OPEN_NOMUTEX), ADDINT(SQLITE_OPEN_FULLMUTEX), ADDINT(SQLITE_OPEN_PRIVATECACHE), ADDINT(SQLITE_OPEN_SHAREDCACHE), ADDINT(SQLITE_OPEN_AUTOPROXY), ADDINT(SQLITE_OPEN_WAL), ADDINT(SQLITE_OPEN_URI), ADDINT(SQLITE_OPEN_MEMORY), END, /* limits */ DICT("mapping_limits"), ADDINT(SQLITE_LIMIT_LENGTH), ADDINT(SQLITE_LIMIT_SQL_LENGTH), ADDINT(SQLITE_LIMIT_COLUMN), ADDINT(SQLITE_LIMIT_EXPR_DEPTH), ADDINT(SQLITE_LIMIT_COMPOUND_SELECT), ADDINT(SQLITE_LIMIT_VDBE_OP), ADDINT(SQLITE_LIMIT_FUNCTION_ARG), ADDINT(SQLITE_LIMIT_ATTACHED), ADDINT(SQLITE_LIMIT_LIKE_PATTERN_LENGTH), ADDINT(SQLITE_LIMIT_VARIABLE_NUMBER), ADDINT(SQLITE_LIMIT_TRIGGER_DEPTH), ADDINT(SQLITE_LIMIT_WORKER_THREADS), /* We don't include the MAX limits - see https://github.com/rogerbinns/apsw/issues/17 */ END, DICT("mapping_config"), ADDINT(SQLITE_CONFIG_SINGLETHREAD), ADDINT(SQLITE_CONFIG_MULTITHREAD), ADDINT(SQLITE_CONFIG_SERIALIZED), ADDINT(SQLITE_CONFIG_MALLOC), ADDINT(SQLITE_CONFIG_GETMALLOC), ADDINT(SQLITE_CONFIG_SCRATCH), ADDINT(SQLITE_CONFIG_PAGECACHE), ADDINT(SQLITE_CONFIG_HEAP), ADDINT(SQLITE_CONFIG_MEMSTATUS), ADDINT(SQLITE_CONFIG_MUTEX), 
ADDINT(SQLITE_CONFIG_GETMUTEX), ADDINT(SQLITE_CONFIG_LOOKASIDE), ADDINT(SQLITE_CONFIG_LOG), ADDINT(SQLITE_CONFIG_GETPCACHE), ADDINT(SQLITE_CONFIG_PCACHE), ADDINT(SQLITE_CONFIG_URI), ADDINT(SQLITE_CONFIG_PCACHE2), ADDINT(SQLITE_CONFIG_GETPCACHE2), ADDINT(SQLITE_CONFIG_COVERING_INDEX_SCAN), ADDINT(SQLITE_CONFIG_SQLLOG), ADDINT(SQLITE_CONFIG_MMAP_SIZE), ADDINT(SQLITE_CONFIG_WIN32_HEAPSIZE), ADDINT(SQLITE_CONFIG_PCACHE_HDRSZ), ADDINT(SQLITE_CONFIG_PMASZ), END, DICT("mapping_db_config"), ADDINT(SQLITE_DBCONFIG_LOOKASIDE), ADDINT(SQLITE_DBCONFIG_ENABLE_FKEY), ADDINT(SQLITE_DBCONFIG_ENABLE_TRIGGER), END, DICT("mapping_status"), ADDINT(SQLITE_STATUS_MEMORY_USED), ADDINT(SQLITE_STATUS_PAGECACHE_USED), ADDINT(SQLITE_STATUS_PAGECACHE_OVERFLOW), ADDINT(SQLITE_STATUS_SCRATCH_USED), ADDINT(SQLITE_STATUS_SCRATCH_OVERFLOW), ADDINT(SQLITE_STATUS_MALLOC_SIZE), ADDINT(SQLITE_STATUS_PARSER_STACK), ADDINT(SQLITE_STATUS_PAGECACHE_SIZE), ADDINT(SQLITE_STATUS_SCRATCH_SIZE), ADDINT(SQLITE_STATUS_MALLOC_COUNT), END, DICT("mapping_db_status"), ADDINT(SQLITE_DBSTATUS_LOOKASIDE_USED), ADDINT(SQLITE_DBSTATUS_CACHE_USED), ADDINT(SQLITE_DBSTATUS_MAX), ADDINT(SQLITE_DBSTATUS_SCHEMA_USED), ADDINT(SQLITE_DBSTATUS_STMT_USED), ADDINT(SQLITE_DBSTATUS_LOOKASIDE_HIT), ADDINT(SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL), ADDINT(SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE), ADDINT(SQLITE_DBSTATUS_CACHE_HIT), ADDINT(SQLITE_DBSTATUS_CACHE_MISS), ADDINT(SQLITE_DBSTATUS_CACHE_WRITE), ADDINT(SQLITE_DBSTATUS_DEFERRED_FKS), END, DICT("mapping_locking_level"), ADDINT(SQLITE_LOCK_NONE), ADDINT(SQLITE_LOCK_SHARED), ADDINT(SQLITE_LOCK_RESERVED), ADDINT(SQLITE_LOCK_PENDING), ADDINT(SQLITE_LOCK_EXCLUSIVE), END, DICT("mapping_access"), ADDINT(SQLITE_ACCESS_EXISTS), ADDINT(SQLITE_ACCESS_READWRITE), ADDINT(SQLITE_ACCESS_READ), END, DICT("mapping_device_characteristics"), ADDINT(SQLITE_IOCAP_ATOMIC), ADDINT(SQLITE_IOCAP_ATOMIC512), ADDINT(SQLITE_IOCAP_ATOMIC1K), ADDINT(SQLITE_IOCAP_ATOMIC2K), ADDINT(SQLITE_IOCAP_ATOMIC4K), 
ADDINT(SQLITE_IOCAP_ATOMIC8K), ADDINT(SQLITE_IOCAP_ATOMIC16K), ADDINT(SQLITE_IOCAP_ATOMIC32K), ADDINT(SQLITE_IOCAP_ATOMIC64K), ADDINT(SQLITE_IOCAP_SAFE_APPEND), ADDINT(SQLITE_IOCAP_SEQUENTIAL), ADDINT(SQLITE_IOCAP_UNDELETABLE_WHEN_OPEN), ADDINT(SQLITE_IOCAP_POWERSAFE_OVERWRITE), ADDINT(SQLITE_IOCAP_IMMUTABLE), END, DICT("mapping_sync"), ADDINT(SQLITE_SYNC_NORMAL), ADDINT(SQLITE_SYNC_FULL), ADDINT(SQLITE_SYNC_DATAONLY), END, DICT("mapping_wal_checkpoint"), ADDINT(SQLITE_CHECKPOINT_PASSIVE), ADDINT(SQLITE_CHECKPOINT_FULL), ADDINT(SQLITE_CHECKPOINT_RESTART), ADDINT(SQLITE_CHECKPOINT_TRUNCATE), END, DICT("mapping_file_control"), ADDINT(SQLITE_FCNTL_LOCKSTATE), ADDINT(SQLITE_FCNTL_SIZE_HINT), ADDINT(SQLITE_FCNTL_CHUNK_SIZE), ADDINT(SQLITE_FCNTL_FILE_POINTER), ADDINT(SQLITE_FCNTL_SYNC_OMITTED), ADDINT(SQLITE_FCNTL_PERSIST_WAL), ADDINT(SQLITE_FCNTL_WIN32_AV_RETRY), ADDINT(SQLITE_FCNTL_OVERWRITE), ADDINT(SQLITE_FCNTL_POWERSAFE_OVERWRITE), ADDINT(SQLITE_FCNTL_VFSNAME), ADDINT(SQLITE_FCNTL_PRAGMA), ADDINT(SQLITE_FCNTL_BUSYHANDLER), ADDINT(SQLITE_FCNTL_TEMPFILENAME), ADDINT(SQLITE_FCNTL_MMAP_SIZE), ADDINT(SQLITE_FCNTL_TRACE), ADDINT(SQLITE_FCNTL_COMMIT_PHASETWO), ADDINT(SQLITE_FCNTL_HAS_MOVED), ADDINT(SQLITE_FCNTL_SYNC), ADDINT(SQLITE_FCNTL_WIN32_SET_HANDLE), ADDINT(SQLITE_FCNTL_LAST_ERRNO), ADDINT(SQLITE_FCNTL_WAL_BLOCK), ADDINT(SQLITE_FCNTL_GET_LOCKPROXYFILE), ADDINT(SQLITE_FCNTL_SET_LOCKPROXYFILE), ADDINT(SQLITE_FCNTL_RBU), ADDINT(SQLITE_FCNTL_ZIPVFS), END, DICT("mapping_conflict_resolution_modes"), ADDINT(SQLITE_ROLLBACK), ADDINT(SQLITE_IGNORE), ADDINT(SQLITE_FAIL), ADDINT(SQLITE_ABORT), ADDINT(SQLITE_REPLACE), END, DICT("mapping_virtual_table_configuration_options"), ADDINT(SQLITE_VTAB_CONSTRAINT_SUPPORT), END, DICT("mapping_xshmlock_flags"), ADDINT(SQLITE_SHM_EXCLUSIVE), ADDINT(SQLITE_SHM_LOCK), ADDINT(SQLITE_SHM_SHARED), ADDINT(SQLITE_SHM_UNLOCK), END }; for(i=0;i= 3 m #endif ; } fail: Py_XDECREF(m); return #if PY_MAJOR_VERSION >= 3 NULL #endif ; } static void 
add_shell(PyObject *apswmodule) { #ifndef PYPY_VERSION PyObject *res=NULL, *maindict=NULL, *apswdict, *msvciscrap=NULL; maindict=PyModule_GetDict(PyImport_AddModule("__main__")); apswdict=PyModule_GetDict(apswmodule); PyDict_SetItemString(apswdict, "__builtins__", PyDict_GetItemString(maindict, "__builtins__")); PyDict_SetItemString(apswdict, "apsw", apswmodule); /* the toy compiler from microsoft falls over on string constants bigger than will fit in a 16 bit quantity. You remember 16 bits? All the rage in the early 1980s. So we have to compose chunks into a bytes and use that instead. The format string is as many %s as there are chunks. It is generated in setup.py. */ msvciscrap=PyBytes_FromFormat( #include "shell.c" ); if(msvciscrap) res=PyRun_StringFlags(PyBytes_AS_STRING(msvciscrap),Py_file_input, apswdict, apswdict, NULL); if(!res) PyErr_Print(); assert(res); Py_XDECREF(res); Py_XDECREF(msvciscrap); #endif } #ifdef APSW_TESTFIXTURES static int APSW_Should_Fault(const char *name) { PyGILState_STATE gilstate; PyObject *faultdict=NULL, *truthval=NULL, *value=NULL; int res=0; gilstate=PyGILState_Ensure(); if(!PyObject_HasAttrString(apswmodule, "faultdict")) PyObject_SetAttrString(apswmodule, "faultdict", PyDict_New()); value=MAKESTR(name); faultdict=PyObject_GetAttrString(apswmodule, "faultdict"); truthval=PyDict_GetItem(faultdict, value); if(!truthval) goto finally; /* set false if present - one shot firing */ PyDict_SetItem(faultdict, value, Py_False); res=PyObject_IsTrue(truthval); finally: Py_XDECREF(value); Py_XDECREF(faultdict); PyGILState_Release(gilstate); return res; } #endif apsw-3.8.11.1-r1/mingwsetup.bat0000644000000000000000000001035412257607024014635 0ustar rootroot@rem You can use this script to setup the necessary libraries for building with MinGW on Windows @rem We assume you use the default install directory of c:\PythonVERSION @rem Change below if you don't set PYTHONDIRBASE=c:\Python @rem The location of Python dlls in the system32 directory - 
below is correct for Win NT/2k/XP etc set WINSYS32=%systemroot%\system32 @set PYTHONVER=22 @if not exist %PYTHONDIRBASE%%PYTHONVER% goto no%PYTHONVER% @if exist %PYTHONDIRBASE%%PYTHONVER%\libs\libpython%PYTHONVER%.a goto lib%PYTHONVER%ok pexports %WINSYS32%\python%PYTHONVER%.dll > %TEMP%\py%PYTHONVER%exports.def dlltool --dllname python%PYTHONVER%.dll --def %TEMP%\py%PYTHONVER%exports.def --output-lib %PYTHONDIRBASE%%PYTHONVER%\libs\libpython%PYTHONVER%.a del %TEMP%\py%PYTHONVER%exports.def @if not exist %PYTHONDIRBASE%%PYTHONVER%\libs\libpython%PYTHONVER%.a goto fail%PYTHONVER% @goto lib%PYTHONVER%ok :no22 @echo You do not have Python %PYTHONVER% installed @goto py%PYTHONVER%done :fail22 @echo Setup failed for Python %PYTHONVER% @goto py%PYTHONVER%done :lib22ok @echo MinGW is setup for Python %PYTHONVER% :py22done @set PYTHONVER=23 @if not exist %PYTHONDIRBASE%%PYTHONVER% goto no%PYTHONVER% @if exist %PYTHONDIRBASE%%PYTHONVER%\libs\libpython%PYTHONVER%.a goto lib%PYTHONVER%ok pexports %WINSYS32%\python%PYTHONVER%.dll > %TEMP%\py%PYTHONVER%exports.def dlltool --dllname python%PYTHONVER%.dll --def %TEMP%\py%PYTHONVER%exports.def --output-lib %PYTHONDIRBASE%%PYTHONVER%\libs\libpython%PYTHONVER%.a del %TEMP%\py%PYTHONVER%exports.def @if not exist %PYTHONDIRBASE%%PYTHONVER%\libs\libpython%PYTHONVER%.a goto fail%PYTHONVER% @goto lib%PYTHONVER%ok :no23 @echo You do not have Python %PYTHONVER% installed @goto py%PYTHONVER%done :fail23 @echo Setup failed for Python %PYTHONVER% @goto py%PYTHONVER%done :lib23ok @echo MinGW is setup for Python %PYTHONVER% :py23done @set PYTHONVER=24 @if not exist %PYTHONDIRBASE%%PYTHONVER% goto no%PYTHONVER% @if exist %PYTHONDIRBASE%%PYTHONVER%\libs\libpython%PYTHONVER%.a goto lib%PYTHONVER%ok pexports %WINSYS32%\python%PYTHONVER%.dll > %TEMP%\py%PYTHONVER%exports.def dlltool --dllname python%PYTHONVER%.dll --def %TEMP%\py%PYTHONVER%exports.def --output-lib %PYTHONDIRBASE%%PYTHONVER%\libs\libpython%PYTHONVER%.a del 
%TEMP%\py%PYTHONVER%exports.def @if not exist %PYTHONDIRBASE%%PYTHONVER%\libs\libpython%PYTHONVER%.a goto fail%PYTHONVER% @goto lib%PYTHONVER%ok :no24 @echo You do not have Python %PYTHONVER% installed @goto py%PYTHONVER%done :fail24 @echo Setup failed for Python %PYTHONVER% @goto py%PYTHONVER%done :lib24ok @echo MinGW is setup for Python %PYTHONVER% :py24done @set PYTHONVER=25 @if not exist %PYTHONDIRBASE%%PYTHONVER% goto no%PYTHONVER% @if exist %PYTHONDIRBASE%%PYTHONVER%\libs\libpython%PYTHONVER%.a goto lib%PYTHONVER%ok pexports %WINSYS32%\python%PYTHONVER%.dll > %TEMP%\py%PYTHONVER%exports.def dlltool --dllname python%PYTHONVER%.dll --def %TEMP%\py%PYTHONVER%exports.def --output-lib %PYTHONDIRBASE%%PYTHONVER%\libs\libpython%PYTHONVER%.a del %TEMP%\py%PYTHONVER%exports.def @if not exist %PYTHONDIRBASE%%PYTHONVER%\libs\libpython%PYTHONVER%.a goto fail%PYTHONVER% @goto lib%PYTHONVER%ok :no25 @echo You do not have Python %PYTHONVER% installed @goto py%PYTHONVER%done :fail25 @echo Setup failed for Python %PYTHONVER% @goto py%PYTHONVER%done :lib25ok @echo MinGW is setup for Python %PYTHONVER% :py25done @set PYTHONVER=26 @if not exist %PYTHONDIRBASE%%PYTHONVER% goto no%PYTHONVER% @if exist %PYTHONDIRBASE%%PYTHONVER%\libs\libpython%PYTHONVER%.a goto lib%PYTHONVER%ok pexports %WINSYS32%\python%PYTHONVER%.dll > %TEMP%\py%PYTHONVER%exports.def dlltool --dllname python%PYTHONVER%.dll --def %TEMP%\py%PYTHONVER%exports.def --output-lib %PYTHONDIRBASE%%PYTHONVER%\libs\libpython%PYTHONVER%.a del %TEMP%\py%PYTHONVER%exports.def @if not exist %PYTHONDIRBASE%%PYTHONVER%\libs\libpython%PYTHONVER%.a goto fail%PYTHONVER% @goto lib%PYTHONVER%ok :no26 @echo You do not have Python %PYTHONVER% installed @goto py%PYTHONVER%done :fail26 @echo Setup failed for Python %PYTHONVER% @goto py%PYTHONVER%done :lib26ok @echo MinGW is setup for Python %PYTHONVER% :py26done @rem clean out variables @set PYTHONDIRBASE= @set PYTHONVER= @set 
WINSYS32=apsw-3.8.11.1-r1/PKG-INFO0000664000000000000000000000157512565423356013055 0ustar rootrootMetadata-Version: 1.1 Name: apsw Version: 3.8.11.1-r1 Summary: Another Python SQLite Wrapper Home-page: https://github.com/rogerbinns/apsw/ Author: Roger Binns Author-email: rogerb@rogerbinns.com License: OSI Approved :: Description: A Python wrapper for the SQLite embedded relational database engine. In contrast to other wrappers such as pysqlite it focuses on being a minimal layer over SQLite attempting just to translate the complete SQLite API into Python. Keywords: database,sqlite Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved Classifier: Operating System :: OS Independent Classifier: Programming Language :: C Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 3 Classifier: Topic :: Database :: Front-Ends apsw-3.8.11.1-r1/setup.py0000664000000000000000000010331712565422256013465 0ustar rootroot#!/usr/bin/env python # See the accompanying LICENSE file. import os import sys import shlex import glob import re import time import zipfile, tarfile import socket from distutils.core import setup, Extension, Command from distutils.command import build_ext, build, sdist ## ## Do your customizations here or by creating a setup.cfg as documented at ## http://www.python.org/doc/2.5.2/dist/setup-config.html ## include_dirs=['src'] library_dirs=[] define_macros=[] libraries=[] # This includes the functionality marked as experimental in SQLite 3. 
# Comment out the line to exclude them define_macros.append( ('EXPERIMENTAL', '1') ) ## ## End of customizations ## # python 2 and 3 print equivalent def write(*args): # py2 won't allow optional keyword arg on end, so work around it dest=sys.stdout if args[-1]==sys.stderr: dest=args[-1] args=args[:-1] dest.write(" ".join(args)+"\n") dest.flush() py3=sys.version_info>=(3,0) # ensure files are closed def read_whole_file(name, mode): if sys.version_info<(2,4): if "r" in mode and "U" in mode: # python 2.3 returns file not found if "U" present! mode="".join([m for m in mode if m!="U"]) f=open(name, mode) try: return f.read() finally: f.close() def write_whole_file(name, mode, data): f=open(name, mode) try: f.write(data) finally: f.close() # They keep messing with where files are in the URL def fixup_download_url(url): ver=re.search("3[0-9]{6}", url) if ver: ver=int(ver.group(0)) if ver>=3071600: if ver>=3080800: year="2015" elif ver>=3080300: year="2014" else: year="2013" if "/"+year+"/" not in url: url=url.split("/") url.insert(3, year) return "/".join(url) return url # Run test suite class run_tests(Command): description="Run test suite" # I did originally try using 'verbose' as the option but it turns # out that is builtin and defaults to 1 (--quiet is also builtin # and forces verbose to 0) user_options=[ ("show-tests", "s", "Show each test being run"), ] # see if you can find boolean_options documented anywhere boolean_options=['show-tests'] def initialize_options(self): self.show_tests=0 def finalize_options(self): pass def run(self): import unittest import tests tests.setup() suite=unittest.TestLoader().loadTestsFromModule(tests) # verbosity of zero doesn't print anything, one prints a dot # per test and two prints each test name result=unittest.TextTestRunner(verbosity=self.show_tests+1).run(suite) if not result.wasSuccessful(): sys.exit(1) # A hack we don't document class build_test_extension(Command): description="Compiles APSW test loadable extension" 
user_options=[] def initialize_options(self): pass def finalize_options(self): pass def run(self): # On 64 bit windows we have to use MSVC if sys.platform=='win32': # yes even on 64 bit try: import platform if platform.architecture()[0]=='64bit': res=os.system("cl /Gd src/testextension.c /I sqlite3 /I . /DDLL /LD /link /export:sqlite3_extension_init /export:alternate_sqlite3_extension_init /out:testextension.sqlext") if res!=0: raise RuntimeError("Building test extension failed") return except ImportError: pass shared="shared" if sys.platform.startswith("darwin"): shared="bundle" res=os.system("gcc -fPIC -%s -o testextension.sqlext -Isqlite3 -I. src/testextension.c" % (shared,)) if res!=0: raise RuntimeError("Building test extension failed") # Another hack. Visual Studio 2008 & 2010 ship with 64 # compilers, headers and the Windows SDK but claims it doesn't and # distutils can't find it. The separate Windows SDK can't find this # and gets very confused not to mention being one of the buggiest cmd # scripts I have ever seen. This hack just sets some environment # variables directly since all the "proper" ways are very broken. 
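The test-extension build above picks its link mode per platform: MSVC on 64-bit Windows, `-bundle` on OS X (loadable modules there are Mach-O bundles), and `-shared` elsewhere. A minimal standalone sketch of that selection — the helper names are mine, not APSW's — looks like this:

```python
import sys

def shared_lib_flag(platform):
    # OS X loadable modules must be linked as bundles; other Unix
    # platforms use ordinary shared objects.
    return "bundle" if platform.startswith("darwin") else "shared"

def build_command(platform):
    # Sketch of the gcc invocation used for the test extension on
    # non-Windows platforms (Windows uses cl.exe instead).
    return ("gcc -fPIC -%s -o testextension.sqlext -Isqlite3 -I. "
            "src/testextension.c" % (shared_lib_flag(platform),))

print(build_command(sys.platform))
```

In practice the flag would be chosen from `sys.platform` at build time, exactly as `build_test_extension.run` does.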
class win64hackvars(Command): description="Set env vars for Visual Studio 2008/2010 Express 64 bit" user_options=[] def initialize_options(self): pass def finalize_options(self): pass def run(self): vcver=9 if sys.version_info>=(3,3): vcver=10 sdkdir=r"C:\Program Files\Microsoft SDKs\Windows\v6.0A" vsdir=r"C:\Program Files (x86)\Microsoft Visual Studio %d.0\VC" % vcver assert os.path.isdir(sdkdir), "Expected sdk dir "+sdkdir assert os.path.isdir(vsdir), "Expected visual studio dir "+vsdir os.environ["PATH"]=r"%s\bin\amd64;%s\bin" % (vsdir, sdkdir) os.environ["INCLUDE"]=r"%s\include;%s\include" % (vsdir, sdkdir) os.environ["LIB"]=r"%s\lib\amd64;%s\lib\x64" % (vsdir, sdkdir) os.environ["DISTUTILS_USE_SDK"]="1" os.environ["MSSdk"]=sdkdir # deal with various python version compatibility issues with how # to treat returned web data as lines of text def fixupcode(code): if sys.version_info<(2,5): if type(code)!=str: code=code.read() if sys.version_info>=(3,0): if type(code)!=bytes: code=code.read() if type(code)==bytes: code=code.decode("iso8859-1") if type(code)==str: return [l+"\n" for l in code.split("\n")] return code fetch_parts=[] class fetch(Command): description="Automatically downloads SQLite and components" user_options=[ ("version=", None, "Which version of SQLite/components to get (default current)"), ("missing-checksum-ok", None, "Continue on a missing checksum (default abort)"), ("sqlite", None, "Download SQLite amalgamation"), ("all", None, "Download all downloadable components"), ] fetch_options=['sqlite'] boolean_options=fetch_options+['all', 'missing-checksum-ok'] def initialize_options(self): self.version=None self.sqlite=False self.all=False self.missing_checksum_ok=False def finalize_options(self): # If all is selected then turn on all components global fetch_parts if self.all: for i in self.fetch_options: setattr(self, i, True) for i in self.fetch_options: fetch_parts.append(i) def run(self): # work out the version if self.version is None: write(" 
Getting download page to work out current SQLite version") page=self.download("https://sqlite.org/download.html", text=True, checksum=False) match=re.search(r'sqlite-amalgamation-3([0-9][0-9])([0-9][0-9])([0-9][0-9])\.zip', page) if match: self.version="3.%d.%d.%d" % tuple([int(match.group(n)) for n in range(1,4)]) if self.version.endswith(".0"): self.version=self.version[:-len(".0")] else: write("Unable to determine current SQLite version. Use --version=VERSION", sys.stderr) write("to set version - eg setup.py fetch --version=3.6.18", sys.stderr) sys.exit(17) write(" Version is "+self.version) # now get each selected component downloaded=0 if not self.version.startswith("fossil"): v=[int(x) for x in self.version.split(".")] if len(v)<4: v.append(0) self.webversion="%d%02d%02d%02d" % tuple(v) ## The amalgamation if self.sqlite: if self.version.startswith("fossil"): write(" Getting code from fossil") else: write(" Getting the SQLite amalgamation") if self.version.startswith("fossil"): if self.version=="fossil": uuid="trunk" else: showmsg=False if not self.version.startswith("fossil-"): showmsg=True else: uuid=self.version.split("-", 1)[1] if not uuid: showmsg=True if showmsg: write("Use fossil-HASH to identify a partifular commit", sys.stderr) write("eg fossil-3a82c8e6", sys.stderr) sys.exit(18) AURL="https://sqlite.org/src/zip/sqlite3.zip?uuid="+uuid checksum=False else: if sys.platform=="win32": AURL="https://sqlite.org/sqlite-amalgamation-%s.zip" % (self.webversion,) else: AURL="https://sqlite.org/sqlite-autoconf-%s.tar.gz" % (self.webversion,) checksum=True AURL=fixup_download_url(AURL) data=self.download(AURL, checksum=checksum) if AURL.endswith(".zip"): zip=zipfile.ZipFile(data, "r") for name in "sqlite3.c", "sqlite3.h", "sqlite3ext.h": write("Extracting", name) f=[n for n in zip.namelist() if n.endswith(name)] if len(f)!=1: raise Exception("Can't find %s in zip. 
Candidates are %s" % (name, f)) # Work around SQLite 3.7.13 bug where a symbol was # declared SQLITE_API and extern data=zip.read(f[0]) if name=="sqlite3.c": data=data.decode("utf8") data=data.replace("SQLITE_API extern", "SQLITE_API") data=data.encode("utf8") open(name, "wb").write(data) zip.close() else: # we need to run configure to get various -DHAVE_foo flags on non-windows platforms # delete existing sqlite3 directory if it exists, but save sqlite3config.h if it exists sqlite3config_h=None if os.path.exists("sqlite3/sqlite3config.h"): sqlite3config_h=read_whole_file("sqlite3/sqlite3config.h", "rb") if os.path.exists('sqlite3'): for dirpath, dirnames, filenames in os.walk('sqlite3', topdown=False): for file in filenames: os.remove(os.path.join(dirpath, file)) for dir in dirnames: os.rmdir(os.path.join(dirpath, dir)) os.rmdir('sqlite3') if self.version.startswith("fossil"): zip=zipfile.ZipFile(data, "r") for name in zip.namelist(): # extract if name.endswith("/"): os.mkdir(name) else: open(name, "wb").write(zip.read(name)) zip.close() else: # if you get an exception here it is likely that you don't have the python zlib module import zlib tar=tarfile.open("nonexistentname to keep old python happy", 'r', data) configmember=None for member in tar.getmembers(): tar.extract(member) # find first file named configure if not configmember and member.name.endswith("/configure"): configmember=member tar.close() # the directory name has changed a bit with each release so try to work out what it is if not configmember: write("Unable to determine directory it extracted to.", dest=sys.stderr) sys.exit(19) dirname=configmember.name.split('/')[0] os.rename(dirname, 'sqlite3') os.chdir('sqlite3') if self.version.startswith("fossil"): write(" Building amalgamation from fossil") res=os.system("make TOP=. 
-f Makefile.linux-gcc sqlite3.c && cp src/sqlite3ext.h .") defs=[] if sqlite3config_h: open("sqlite3config.h", "wb").write(sqlite3config_h) else: write(" Running configure to work out SQLite compilation flags") res=os.system("./configure >/dev/null") defline=None for line in read_whole_file("Makefile", "rtU").split("\n"): if line.startswith("DEFS = "): defline=line break if not defline: write("Unable to determine compile flags. Create sqlite3/sqlite3config.h to manually set.", sys.stderr) sys.exit(18) defs=[] for part in shlex.split(defline): if part.startswith("-DHAVE"): part=part[2:] if '=' in part: part=part.split('=', 1) else: part=(part, ) defs.append(part) if res!=0: raise ValueError("Command execution failed") if defs: op=open("sqlite3config.h", "wt") op.write(""" /* This file was generated by parsing how configure altered the Makefile which isn't used when building python extensions. It is specific to the machine and developer components on which it was run. */ \n""") for define in defs: op.write('#define %s %s\n' % tuple(define)) op.close() os.chdir("..") downloaded+=1 if not downloaded: write("You didn't specify any components to fetch. 
Use") write(" setup.py fetch --help") write("for a list and details") raise ValueError("No components downloaded") # A function for verifying downloads def verifyurl(self, url, data): d=["%s" % (len(data),)] try: import hashlib d.append(hashlib.sha1(data).hexdigest()) d.append(hashlib.md5(data).hexdigest()) except ImportError: import sha d.append(sha.new(data).hexdigest()) import md5 d.append(md5.new(data).hexdigest()) write(" Length:", d[0], " SHA1:", d[1], " MD5:", d[2]) sums=os.path.join(os.path.dirname(__file__), "checksums") for line in read_whole_file(sums, "rt").split("\n"): line=line.strip() if len(line)==0 or line[0]=="#": continue l=[l.strip() for l in line.split()] if len(l)!=4: write("Invalid line in checksums file:", line, sys.stderr) raise ValueError("Bad checksums file") if l[0]==url: if l[1:]==d: write(" Checksums verified") return if l[1]!=d[0]: write("Length does not match. Expected", l[1], "download was", d[0]) if l[2]!=d[1]: write("SHA does not match. Expected", l[2], "download was", d[1]) if l[3]!=d[2]: write("MD5 does not match. Expected", l[3], "download was", d[2]) write("The download does not match the checksums distributed with APSW.\n" "The download should not have changed since the checksums were\n" "generated. The cause could be anything from network corruption\n" "to a malicious attack.") raise ValueError("Checksums do not match") # no matching line write(" (Not verified. No match in checksums file)") if not self.missing_checksum_ok: raise ValueError("No checksum available. 
Use --missing-checksum-ok option to continue") # download a url def download(self, url, text=False, checksum=True): if py3: import urllib.request urlopen=urllib.request.urlopen import io bytesio=io.BytesIO else: import urllib2 urlopen=urllib2.urlopen import cStringIO bytesio=cStringIO.StringIO write(" Fetching", url) count=0 while True: try: if count: write(" Try #",str(count+1)) try: page=urlopen(url).read() except: # Degrade to http if https is not supported e=sys.exc_info()[1] if "eof occurred in violation of protocol" in str(e).lower() or e.reason=="unknown url type: https": write(" [Python has https issues - using http instead]") page=urlopen(url.replace("https://", "http://")).read() else: raise break except: write(" Error ", str(sys.exc_info()[1])) time.sleep(1.3) count+=1 if count>=5: raise if text: if py3: page=page.decode("iso8859_1") if checksum: self.verifyurl(url, page) if not text: page=bytesio(page) return page # We allow enable/omit to be specified to build and then pass them to build_ext build_enable=None build_omit=None build_enable_all_extensions=False bparent=build.build class apsw_build(bparent): user_options=bparent.user_options+\ [ ("enable=", None, "Enable SQLite options (comma seperated list)"), ("omit=", None, "Omit SQLite functionality (comma seperated list)"), ("enable-all-extensions", None, "Enable all SQLite extensions"), ] boolean_options=bparent.boolean_options+["enable-all-extensions"] def initialize_options(self): v=bparent.initialize_options(self) self.enable=None self.omit=None self.enable_all_extensions=build_enable_all_extensions return v def finalize_options(self): global build_enable, build_omit, build_enable_all_extensions build_enable=self.enable build_omit=self.omit build_enable_all_extensions=self.enable_all_extensions return bparent.finalize_options(self) def findamalgamation(): amalgamation=( os.path.join(os.path.dirname(os.path.abspath(__file__)), "sqlite3.c"), os.path.join(os.path.dirname(os.path.abspath(__file__)), 
                     "sqlite3", "sqlite3.c")
        )
    for path in amalgamation:
        if os.path.exists(path):
            return path
    return None

def find_in_path(name):
    for loc in os.getenv("PATH").split(os.pathsep):
        f=os.path.abspath(os.path.join(loc, name))
        if os.path.exists(f) or os.path.exists(f.lower()) or os.path.exists(f.lower()+".exe"):
            return f
    return None

beparent=build_ext.build_ext
class apsw_build_ext(beparent):
    user_options=beparent.user_options+\
                  [ ("enable=", None, "Enable SQLite options (comma separated list)"),
                    ("omit=", None, "Omit SQLite functionality (comma separated list)"),
                    ("enable-all-extensions", None, "Enable all SQLite extensions"),
                    ]
    boolean_options=beparent.boolean_options+["enable-all-extensions"]

    def initialize_options(self):
        v=beparent.initialize_options(self)
        self.enable=build_enable
        self.omit=build_omit
        self.enable_all_extensions=build_enable_all_extensions
        return v

    def finalize_options(self):
        v=beparent.finalize_options(self)

        if self.enable_all_extensions:
            exts=["fts4", "fts3", "fts3_parenthesis", "rtree", "stat4"]
            if find_in_path("icu-config"):
                exts.append("icu")
            if not self.enable:
                self.enable=",".join(exts)
            else:
                self.enable=self.enable+","+",".join(exts)

        ext=self.extensions[0]
        if not ext.define_macros: ext.define_macros=[]
        if not ext.depends: ext.depends=[]
        if not ext.include_dirs: ext.include_dirs=[]
        if not ext.library_dirs: ext.library_dirs=[]
        if not ext.libraries: ext.libraries=[]

        # Fixup debug setting
        if self.debug:
            # distutils forces NDEBUG even with --debug so overcome that
            ext.define_macros.append( ('APSW_NO_NDEBUG', '1') )   # double negatives are bad
            ext.define_macros.append( ('APSW_TESTFIXTURES', '1') ) # extra test harness code
            ext.define_macros.append( ('SQLITE_DEBUG', '1') )      # also does NDEBUG mangling
        else:
            ext.define_macros.append( ('NDEBUG', '1') )

        # fork checker?
        if hasattr(os, "fork"):
            ext.define_macros.append( ('APSW_FORK_CHECKER', '1') )

        # SQLite 3
        # Look for amalgamation in our directory or in sqlite3 subdirectory
        path=findamalgamation()
        if path:
            if sys.platform=="win32":
                # double quotes get consumed by windows arg processing
                ext.define_macros.append( ('APSW_USE_SQLITE_AMALGAMATION', '\\"'+path+'\\"') )
            else:
                ext.define_macros.append( ('APSW_USE_SQLITE_AMALGAMATION', '"'+path+'"') )
            ext.depends.append(path)
            # we also add the directory to include path since icu tries to use it
            ext.include_dirs.append(os.path.dirname(path))
            write("SQLite: Using amalgamation", path)
            load_extension=True
        else:
            load_extension=False
            d=os.path.join(os.path.dirname(os.path.abspath(__file__)), "sqlite3")
            if os.path.isdir(d):
                write("SQLite: Using include/libraries in sqlite3 subdirectory")
                ext.include_dirs.append(d)
                ext.library_dirs.append(d)
            else:
                write("SQLite: Using system sqlite include/libraries")
            ext.libraries.append('sqlite3')

        s3config=os.path.join(os.path.dirname(os.path.abspath(__file__)), "sqlite3", "sqlite3config.h")
        if os.path.exists(s3config):
            if sys.platform=="win32":
                ext.define_macros.append( ('APSW_USE_SQLITE_CONFIG', '\\"'+s3config+'\\"') )
            else:
                ext.define_macros.append( ('APSW_USE_SQLITE_CONFIG', '"'+s3config+'"') )

        # enables
        addicuinclib=False
        if self.enable:
            for e in self.enable.split(","):
                e=e.strip()
                if e.lower()=="load_extension":
                    load_extension=True
                    continue
                ext.define_macros.append( ("SQLITE_ENABLE_"+e.upper(), 1) )
                if e.upper()=="ICU":
                    addicuinclib=True
                os.putenv("APSW_TEST_"+e.upper(), "1")
                # See issue #55 where I had left off the 3 in fts3. This code
                # tries to catch misspelling the name of an extension.
                # However the SQLITE_ENABLE prefix is also used by other
                # options - see https://sqlite.org/compile.html but almost
                # all of those have _ in them, so our abbreviated and
                # hopefully future proof test
                if "_" not in e.lower() and \
                       "memsys" not in e.lower() and \
                       e.lower() not in ("fts4", "fts3", "rtree", "icu", "iotrace",
                                         "stat2", "stat3", "stat4", "dbstat_vtab"):
                    write("Unknown enable "+e, sys.stderr)
                    raise ValueError("Bad enable "+e)

        # omits
        if self.omit:
            for e in self.omit.split(","):
                e=e.strip()
                if e.lower()=="load_extension":
                    load_extension=False
                ext.define_macros.append( ("SQLITE_OMIT_"+e.upper(), 1) )

        if not load_extension:
            ext.define_macros.append( ("SQLITE_OMIT_LOAD_EXTENSION", 1) )

        # icu
        if addicuinclib:
            foundicu=False

            kwargs={}
            if sys.version_info>=(2, 6):
                # if posix is true then quotes get stripped such as from -Dfoo="bar"
                kwargs["posix"]=False
            for part in shlex.split(os.popen("icu-config --cppflags", "r").read(), **kwargs):
                if part.startswith("-I"):
                    ext.include_dirs.append(part[2:])
                    foundicu=True
                elif part.startswith("-D"):
                    part=part[2:]
                    if '=' in part:
                        part=tuple(part.split('=', 1))
                    else:
                        part=(part, '1')
                    ext.define_macros.append(part)
                    foundicu=True

            for part in shlex.split(os.popen("icu-config --ldflags", "r").read(), **kwargs):
                if part.startswith("-L"):
                    ext.library_dirs.append(part[2:])
                    foundicu=True
                elif part.startswith("-l"):
                    ext.libraries.append(part[2:])
                    foundicu=True

            if foundicu:
                write("ICU: Added includes, flags and libraries from icu-config")
            else:
                write("ICU: Unable to determine includes/libraries for ICU using icu-config")
                write("ICU: You will need to manually edit setup.py or setup.cfg to set them")

        # shell
        if not os.path.exists("src/shell.c") or \
               os.path.getmtime("src/shell.c") < os.path.getmtime("tools/shell.py") or \
               os.path.getmtime(__file__) > os.path.getmtime("src/shell.c"):
            create_c_file("tools/shell.py", "src/shell.c")

        # done ...
        return v

    def run(self):
        v=beparent.run(self)
        return v

sparent=sdist.sdist
class apsw_sdist(sparent):
    user_options=sparent.user_options+[
        ("add-doc", None, "Includes built documentation from doc/build/html into source"),
        ]
    boolean_options=sparent.boolean_options+["add-doc"]

    def initialize_options(self):
        sparent.initialize_options(self)
        self.add_doc=False
        # Were we made from a source archive? If so include the help again
        if os.path.isfile("doc/index.html") and os.path.isfile("doc/_sources/pysqlite.txt"):
            self.add_doc=True
        self.use_defaults=False # they are useless

        # Make sure the manifest is regenerated
        self.force_manifest=True

        # Now do some chicanery. If a source distribution is requested and
        # fetch --sqlite was requested then make sure the sqlite amalgamation
        # ends up as part of the source distribution.
        if fetch_parts:
            # Use a temporary file for the manifest
            tmpmanifest="MANIFEST.in.tmp"
            self.template=tmpmanifest
            try:
                os.remove(tmpmanifest)
            except:
                pass
            min=open("MANIFEST.in", "rU")
            mout=open(tmpmanifest, "wt")
            for line in min:
                mout.write(line)
            min.close()
            # os.path.relpath emulation
            if "sqlite" in fetch_parts:
                amalgamationpath=findamalgamation()
                amalrelpath=amalgamationpath[len(os.path.dirname(os.path.abspath(__file__)))+1:]
                mout.write("include "+amalrelpath+"\n")
                # also include headers and extension headers
                mout.write("include "+amalrelpath.replace("sqlite3.c", "sqlite3.h")+"\n")
                mout.write("include "+amalrelpath.replace("sqlite3.c", "sqlite3ext.h")+"\n")
                if os.path.exists("sqlite3/sqlite3config.h"):
                    mout.write("include sqlite3/sqlite3config.h\n")
            mout.close()

    def run(self):
        v=sparent.run(self)
        if self.add_doc:
            if len(list(help_walker('')))==0:
                raise Exception("The help is not built")
            for archive in self.get_archive_files():
                add_doc(archive, self.distribution.get_fullname())
        return v

def help_walker(arcdir):
    # Provides a list of (archive name, disk name) for all the help files
    if os.path.isfile("doc/index.html") and os.path.isfile("doc/_sources/pysqlite.txt"):
        topdir="doc/"
    else:
        topdir="doc/build/html/"
    for dirpath, _, filenames in os.walk(topdir):
        prefix=dirpath[len(topdir):]
        for f in filenames:
            yield os.path.join(arcdir, "doc", prefix, f), os.path.join(dirpath, f)

def add_doc(archive, topdir):
    write("Add help files to", archive)
    if archive.endswith(".tar") or ".tar." in archive:
        if archive.endswith(".Z"):
            raise Exception("tarfile module doesn't support old school compress so we can't add doc "+archive)
        fmt=""
        if archive.endswith(".gz") or archive.endswith(".tgz"):
            fmt=":gz"
        elif archive.endswith(".bz2") or archive.endswith(".tbz2"):
            fmt=":bz2"
        oldarchive=tarfile.open(archive)
        newarchive=tarfile.open(archive+"-", mode="w"+fmt)
        for mem in oldarchive.getmembers():
            newarchive.addfile(mem, oldarchive.extractfile(mem))
        oldarchive.close()
        for arcname, fname in help_walker(topdir):
            newarchive.add(fname, arcname)
        newarchive.close()
        os.rename(archive+"-", archive)
    elif archive.endswith(".zip"):
        ofile=zipfile.ZipFile(archive, "a", zipfile.ZIP_DEFLATED)
        for arcname, fname in help_walker(topdir):
            ofile.write(fname, arcname)
        ofile.close()
    else:
        raise Exception("Don't know what to do with "+archive)

def create_c_file(src, dest):
    # Transforms Python src into C dest as a sequence of strings.
    # Because of the pathetic microsoft compiler we have to break it
    # up into small chunks
    out=["/* Automatically generated by setup.py from "+src+" */", ""]
    percents=1
    size=0
    for line in read_whole_file(src, "rt").split("\n"):
        if "if__name__=='__main__':" in line.replace(" ", ""):
            break
        if line.strip().startswith('#'): # full line comment
            continue
        if line.strip()=="import apsw":
            continue
        size=size+len(line)
        comma=size>32000
        if comma:
            size=0
            percents+=1
        line=line.replace("\\", "\\\\").\
             replace('"', '\\"')
        out.append(' "'+line.rstrip()+'\\n"')
        if comma:
            out[-1]=out[-1]+","
    if out[-1].endswith(","):
        out[-1]=out[-1][:-1]
    out[1]='"%s",' % ("%s" * percents,)
    write_whole_file(dest, "wt", "\n".join(out))

# We depend on every .[ch] file in src
depends=[f for f in glob.glob("src/*.[ch]") if f!="src/apsw.c"]
for f in (findamalgamation(), ):
    if f:
        depends.append(f)
# we produce a .c file from this
depends.append("tools/shell.py")

# work out version number
version=read_whole_file(os.path.join("src", "apswversion.h"), "rt").split()[2].strip('"')

# msi can't use normal version numbers because distutils is retarded,
# so mangle ours to suit it
if "bdist_msi" in sys.argv:
    if version.endswith("-r1"):
        version=version[:-len("-r1")]
    else:
        assert False, "MSI version needs help"
    version=[int(v) for v in re.split(r"[^\d]+", version)]
    # easy pad to 3 items long
    while len(version)<3:
        version.append(0)
    # 4 is our normal length (eg 3.7.3-r1) but sometimes it is more eg
    # 3.7.16.1-r1 so combine last elements if longer than 4
    while len(version)>4:
        version[-2]=10*version[-2]+version[-1]
        del version[-1]
    # combine first two elements
    if len(version)>3:
        version[0]=100*version[0]+version[1]
        del version[1]
    version=".".join([str(v) for v in version])

setup(name="apsw",
      version=version,
      description="Another Python SQLite Wrapper",
      long_description=\
"""A Python wrapper for the SQLite embedded relational database engine.
In contrast to other wrappers such as pysqlite it focuses on being
a minimal layer over SQLite attempting just to translate the complete
SQLite API into Python.""",
      author="Roger Binns",
      author_email="rogerb@rogerbinns.com",
      url="https://github.com/rogerbinns/apsw/",
      classifiers=[
          "Development Status :: 5 - Production/Stable",
          "Intended Audience :: Developers",
          "License :: OSI Approved",
          "Operating System :: OS Independent",
          "Programming Language :: C",
          "Programming Language :: Python :: 2",
          "Programming Language :: Python :: 3",
          "Topic :: Database :: Front-Ends",
          ],
      keywords=["database", "sqlite"],
      license="OSI Approved ::",
      ext_modules=[Extension("apsw",
                             ["src/apsw.c"],
                             include_dirs=include_dirs,
                             library_dirs=library_dirs,
                             libraries=libraries,
                             define_macros=define_macros,
                             depends=depends)],
      cmdclass={'test': run_tests,
                'build_test_extension': build_test_extension,
                'fetch': fetch,
                'build_ext': apsw_build_ext,
                'build': apsw_build,
                'sdist': apsw_sdist,
                'win64hackvars': win64hackvars}
      )
apsw-3.8.11.1-r1/checksums0000644000000000000000000003763612557432414013661 0ustar rootroot
# This file contains checksums/hashes of SQLite so that we can verify
# downloads using the fetch command to setup.py have not been
# tampered with. See
# https://apidoc.apsw.googlecode.com/hg/build.html#additional-setup-py-flags
# for information on setup.py flags.

# Format is URL length SHA1 MD5 all on one line. Yes SHA1 and MD5 are
# considered weakened, but it would take a fairly incredible feat to
# come up with something that has the same length and SHA1 and MD5
# hashes. This extra checking mechanism is to give you a little bit
# more peace of mind because the SQLite releases are not signed in any
# way.
https://sqlite.org/sqlite-amalgamation-3070701.zip 1302405 7a9547edd265666dcaceea5bbd10a26c83739cf3 ab8d9ac818c50c8744687bad23fcc63e
https://sqlite.org/sqlite-autoconf-3070701.tar.gz 1748837 ee405037ab49f46f657a9b314e66eec6009d5fc6 554026fe7fac47b1cf61c18d5fe43419
https://sqlite.org/sqlite-src-3070701.zip 4562259 e30c4de765a1569dc74e2fe12966bc859f4739e5 9162803d0a82421a2df9d907e721a758
https://sqlite.org/sqlite-amalgamation-3070700.zip 1302399 b346215cb6477ec5cd980122798e2ddd16fe77ea e65b9401d1263f002486ccb6ca9f89c9
https://sqlite.org/sqlite-autoconf-3070700.tar.gz 1748848 f9b16cb62dac99119c5651bccecfe97761f66c20 a98df57053adf4c132b6b7ab59e9f333
https://sqlite.org/sqlite-src-3070700.zip 4562266 6a9622b9fd248b9e923459b1df93cb413afaed6a 9796d946ff7c86b173540a3ae28233dd
https://sqlite.org/sqlite-amalgamation-3070603.zip 1273808 baa0a5ec5802718e0cecde4de3f11363746ffb86 47c8d762b7755654a20ad3286eaea3e6
https://sqlite.org/sqlite-autoconf-3070603.tar.gz 1720314 a19e4496e2854e737d55465b77c87e775a362775 7eb41eea5ffa5cbe359a48629084c425
https://sqlite.org/sqlite-src-3070603.zip 4470530 33038ecf10b7519d95db797f21c927f5e630784f f8e22ec0931c91f029d539fe3f5ef0f9
https://sqlite.org/sqlite-amalgamation-3070602.zip 1273685 1030b74720af94f820de05b7a82db93b41de3563 50e851f414720cef9c44701d8e8ee011
https://sqlite.org/sqlite-autoconf-3070602.tar.gz 1720179 3582b0b809dc0c3fa10ddaa403c219a4ebd8abb5 f16c08617968b4087b3d591fd575f59f
https://sqlite.org/sqlite-src-3070602.zip 4469329 dfe00d3fd1b346d7ae53acc54a761dcb0d5587eb 1ee582b962d3408b5825bffd404b3d33
https://sqlite.org/sqlite-amalgamation-3070601.zip 1273690 e4888b7a4e9c0c99b8ba6d9083d095ead8e4fa2f bdfc96a69c1b592cc35e52fecef0335f
https://sqlite.org/sqlite-autoconf-3070601.tar.gz 1720170 8e3d1f93576374238875197b1fb472332e339ea7 a9b7327d6762652fe19caf68fb50849d
https://sqlite.org/sqlite-src-3070601.zip 4469325 b1646e4b82fea6c8fd1ca0d12afe319056a6d216 5d0e07112e609681187e04319f611fee
https://sqlite.org/sqlite-amalgamation-3070600.zip 1273593 4dd4dc4b10b280a46d77b1b06fb531223bb39811 3b485f167c80e6cead300c926745a5c4
https://sqlite.org/sqlite-autoconf-3070600.tar.gz 1721938 6b17cf7610437706fec6e21f29933c1ffd3a13e0 8e1764574dbddb6be675c79bcfcc304d
https://sqlite.org/sqlite-src-3070600.zip 4469235 238247cab076b4df4876970a3e8d1c5b23150062 cbc150157ef90e3423236740ca6c917b
https://sqlite.org/sqlite-amalgamation-3070800.zip 1329197 d1eb9a3578fec86c2208b1753bb1f16215ecb036 f1a5320d96bcd6855103005066c57e7b
https://sqlite.org/sqlite-autoconf-3070800.tar.gz 1776215 34852cfd49ddea440eb26a35ad32deee0176a724 6bfb46d73caaa1bbbcd2b52184b6c542
https://sqlite.org/sqlite-src-3070800.zip 4614579 70201da90a0184b5a77270950bb7c0a8155c57a6 c0a5d506b0122b9b72791b0f25a65810
https://sqlite.org/sqlite-amalgamation-3070900.zip 1336595 f5420071c7424c224f4bdacf78dbc78e00c3780c 90afc3864430c96912778aab5ed33b06
https://sqlite.org/sqlite-autoconf-3070900.tar.gz 1783755 a9da98a4bde4d9dae5c29a969455d11a03600e11 dce303524736fe89a76b8ed29d566352
https://sqlite.org/sqlite-src-3070900.zip 4639586 d0f7661f06c246429f75973b61c7e620f5ec43cd b2646c5a0c0b5bc6b8f0b67fc318bab3
https://sqlite.org/sqlite-amalgamation-3071000.zip 1353318 0c2c6fff00cbf91ea9f9f727eaf6f1ca52ba8dac 0a337005a7fae8a10cbf60758167e6a6
https://sqlite.org/sqlite-autoconf-3071000.tar.gz 1801640 0442d5a1bff50153039951b09db649864d8af0bb 9ed2ca93577b58cfa0d01f64b9312ab9
https://sqlite.org/sqlite-src-3071000.zip 4682084 23baab0a89d77e7b1a9a0ececd05dd629a2b098d 16da2e08ce2233556965cdf20d614bd9
https://sqlite.org/sqlite-amalgamation-3071100.zip 1359507 6e299c54bbb507a1e7a8f259f1fdfdc25681673d bb6e4e9d01b58d4f072fec116159f1f8
https://sqlite.org/sqlite-autoconf-3071100.tar.gz 1807754 a768f76b10df84d6a2c66178544d42725a8fdaf0 0552d71bda98ebdcaea305cd6058221b
https://sqlite.org/sqlite-src-3071100.zip 4705421 4bcb60c0a79e42fe01c64108deac66473123f4e7 73e2340ca2682ae914168d99c1f26e66
https://sqlite.org/sqlite-amalgamation-3071200.zip 1382527 1f8aaaccb74c319cfaca3d62a50d00ec7e65be52 43a27fadf87546e50f843089b28b69a1
https://sqlite.org/sqlite-autoconf-3071200.tar.gz 1827241 30e6b0912f074cff5563697367c2ba9608d83bd3 28842cb51a699c0b554120bafe6418ff
https://sqlite.org/sqlite-src-3071200.zip 4648718 a0cd400fd9011972fb76f253ab7b6bb82259efe5 bcdb961f44cbe39de83c5ea4bdbfc765
https://sqlite.org/sqlite-amalgamation-3071201.zip 1383065 5b04d1ac0147ddc00dcd096be28a17320320aaee 94e685a722a5b890cff6eb3903aa9ebe
https://sqlite.org/sqlite-autoconf-3071201.tar.gz 1831193 295b9acdf9c0452017ca1af392c0e5db019e6874 eb7bbd258913518ad30971ea7ecb0ca9
https://sqlite.org/sqlite-src-3071201.zip 4672290 61a5aaacf5d82158f5a8ab69378834dd5e988b35 f9455b25017ffadc38d11f3dea5b43ae
https://sqlite.org/sqlite-amalgamation-3071300.zip 1397467 6ab58d3d49f5b51a5e18a3093038d15317657884 5fbeff9645ab035a1f580e90b279a16d
https://sqlite.org/sqlite-autoconf-3071300.tar.gz 1845778 d3833b6ad68db8505d1044f761dd962f415cd302 c97df403e8a3d5b67bb408fcd6aabd8e
https://sqlite.org/sqlite-src-3071300.zip 4923284 186db0ce8c6aad212753f083272004e4c270ecb9 13bb3eaae94592ef3220ea23582763f5
https://sqlite.org/sqlite-amalgamation-3071400.zip 1383952 9ce5ffc690d4499ff16275eafbda03c9a2a37e12 e5a00312017a33d96b2d5adc7bfda6f9
https://sqlite.org/sqlite-autoconf-3071400.tar.gz 1832194 7b429809918201555f4c0fa639183a1c663d3fe0 6464d429b1396a8db35864e791673b65
https://sqlite.org/sqlite-src-3071400.zip 4966421 f0048ca2f09565af1810ebf70f29479d112f7e50 8d67486a003675f21f68014446e08748
https://sqlite.org/sqlite-amalgamation-3071401.zip 1384040 dc64b561fd0f27f5cf2aece439aae12fc3b387cc 8837a4cf0b00712d887cd921131d2df4
https://sqlite.org/sqlite-autoconf-3071401.tar.gz 1832051 c464e0e3efe98227c6546b9b1e786b51b8b642fc 4cf3fd434bbb6e50777019ff3507bfa5
https://sqlite.org/sqlite-src-3071401.zip 4966093 65f5811f7a67eec60e45c66c31c6a1965fa1c77a c594594be4c5353ceb43caade87ca7d6
https://sqlite.org/sqlite-amalgamation-3071500.zip 1396237 310443c06f067701ede4daf6c685df4072cfe602 6831aa143e8c17ce4755cd1a2db5a05b
https://sqlite.org/sqlite-autoconf-3071500.tar.gz 1846740 7b463b76b7a0e322dcdf7bf2221f6e508511e392 0d2a2d27b56ac73d49db76dcb6b4619c
https://sqlite.org/sqlite-amalgamation-3071501.zip 1396268 80bea29dce445d4d8046e6730374964e0c74422e e0c483347730285a28e8b29843601758
https://sqlite.org/sqlite-autoconf-3071501.tar.gz 1846768 0247b4ff581e7bacaad97663116a029ad1976f1c a67c25afa199a11f0a37aff7ed9d2c14
https://sqlite.org/sqlite-amalgamation-3071502.zip 1396271 1312016fdbb1dd6cb5a38bc05d1d051d9dc0e93e 7f95fd6b0d69a9773ef0258e9f4f3035
https://sqlite.org/sqlite-autoconf-3071502.tar.gz 1846778 075732562183d560cd46a0d8d08b50bc44e34eac bcb0ab0b5b30116b2531cfeef3c861b4
https://sqlite.org/2013/sqlite-amalgamation-3071600.zip 1404350 800ae0e24229bcbe22ae07c69df9761fdfcac94c 31470668b0ae51ba449cbd2b58874e33
https://sqlite.org/2013/sqlite-autoconf-3071600.tar.gz 1854055 bd2e93b04647e453f90e99661b493ee8dec5bfdf 919aedd0c84e1d8fb07129251dbe05bf
https://sqlite.org/2013/sqlite-amalgamation-3071601.zip 1404681 92b1b8d79b0554f77b42cb2510ab423193809977 aeed56d2f12e822068a1557cb99660db
https://sqlite.org/2013/sqlite-autoconf-3071601.tar.gz 1854392 b0d9b3e2ca3c50f72e5921e9532130787871b7ae 8860d1a1ed26d40f07e3c7ae035b644a
https://sqlite.org/2013/sqlite-amalgamation-3071602.zip 1404682 8751c882c9bd9a66ebd32e189bc34baef1fd58ac 9e64fc3067dbda5e4eb4c5ce68cde555
https://sqlite.org/2013/sqlite-autoconf-3071602.tar.gz 1854396 85bf857cf86f34831d55d7ba97606dba581b8d62 ce7d2bc0d9b8dd18995b888c6b0b220f
https://sqlite.org/2013/sqlite-amalgamation-3071700.zip 1426563 5cd468d429e23c927ed1761f656dfd2e1285be16 89ac8d4d70a1fa3ff7f37bccdd9088b6
https://sqlite.org/2013/sqlite-autoconf-3071700.tar.gz 1876108 e31958e56b1d7bef9433b1ff2e875c8c290d37f4 18c285053e9562b848209cb0ee16d4ab
https://sqlite.org/2013/sqlite-amalgamation-3080000.zip 1434702 084a02f32ff400cc56bc839f497b0ff8e8d4e352 bd8701b04b0368187f925635aec4ee3a
https://sqlite.org/2013/sqlite-autoconf-3080000.tar.gz 1878431 94280dbeeb24e22bcfa492a48226cabad2e32208 c1e12ebeab0959741558ecc6b19edbc1
https://sqlite.org/2013/sqlite-amalgamation-3080001.zip 1434777 73b2c4ae0cbbd630163f974cf1551bbacbfffa6f 8bff9179530d3b04b484bf2264338daf
https://sqlite.org/2013/sqlite-autoconf-3080001.tar.gz 1878758 d2fd623a7537e0fad5a2f33169dd9d6def268583 ee32c96e5db6c6d651c7c9b71082cf7c
https://sqlite.org/2013/sqlite-amalgamation-3080002.zip 1434776 99055b894259dc85cfb2da92971904f74ec3aa3e af1ed6543929376ba13f0788e18ef30f
https://sqlite.org/2013/sqlite-autoconf-3080002.tar.gz 1878765 294c30e882a0d45877bce09afe72d08ccfc6b650 6d6cc639a4da04fbbdda7b1a1a01b386
https://sqlite.org/2013/sqlite-amalgamation-3080100.zip 1453207 75a1ab154e796d2d1b391a2c7078679e15512bda f68013764f146c66205923da0a973718
https://sqlite.org/2013/sqlite-autoconf-3080100.tar.gz 1896869 42464b07df2d6f8aa28f73ce4cc6d48b47be810e 8b5a0a02dfcb0c7daf90856a5cfd485a
https://sqlite.org/2013/sqlite-amalgamation-3080200.zip 1473300 c0559520d014802fe587977e8d773f350b7e35fe 9b5b7204436700a99d48c81bc0751e0c
https://sqlite.org/2013/sqlite-autoconf-3080200.tar.gz 1917056 6033ef603ce221d367c665477514d972ef1dc90e f62206713e6a08d4ccbc60b1fd712a1a
https://sqlite.org/2014/sqlite-amalgamation-3080300.zip 1479396 e45a696b740af4dcad66d66c4b5b50be81f6ee09 976371760eb3ca3b34015720d4443bf7
https://sqlite.org/2014/sqlite-autoconf-3080300.tar.gz 1923594 c2a21d71d0c7dc3af71cf90f04dfd22ecfb280c2 11572878dc0ac74ae370367a464ab5cf
https://sqlite.org/2014/sqlite-amalgamation-3080301.zip 1478961 e362ef5e4b9e5ebd0ab60ade341da4f5d273118b 012f4501c6a933496f252478f1376e8c
https://sqlite.org/2014/sqlite-autoconf-3080301.tar.gz 1923168 a9b0a3d66fb5c2ae51ad2d8a512ca827adc7d59c 509ff98d8dc9729b618b7e96612079c6
https://sqlite.org/2014/sqlite-amalgamation-3080401.zip 1485870 9915f514b2c2293049a8e451bd1ed1082c72efc6 cb717e907e1559cb809de6ffabd047d8
https://sqlite.org/2014/sqlite-autoconf-3080401.tar.gz 1937368 f8a77b74bd2632eb14cea62cd8b92c45026e248c 6b8cb7b9063a1d97f7b5dc517e8ee0c4
https://sqlite.org/2014/sqlite-amalgamation-3080400.zip 1485792 3a2259a275ef77aa74aba7a6858c4023d0957ee6 fd901339de51df1c9e25836b524c7d54
https://sqlite.org/2014/sqlite-autoconf-3080400.tar.gz 1937306 19249a798456f88b3a99ef27fa4119d08d89682f 55964ca8e46657f3df9ea3dbc11de3e1
https://sqlite.org/2014/sqlite-amalgamation-3080402.zip 1485876 511422bc5bb6d7ce36751f9f51ce2b0fa4f1d987 23633b7dd0eeb7d1c9f0e0296349de0e
https://sqlite.org/2014/sqlite-autoconf-3080402.tar.gz 1937368 97d727b46ca67c98eb7a0fa3f57a1cfeb7f08f74 6ed5df4f49f03ee82c8943f701574c17
https://sqlite.org/2014/sqlite-amalgamation-3080403.zip 1485873 13d8fe0cfba79e49a5e625f848dd689ea3245ad8 fc3cfdb642a14f4c69f6cdb082ec2bbb
https://sqlite.org/2014/sqlite-autoconf-3080403.tar.gz 1937379 70f3b100fa22e5bfebfe1b0a2102612e3c6c53fb 0f5459cde43cb269e5120ecd2c671ced
https://sqlite.org/2014/sqlite-amalgamation-3080500.zip 1505324 fd65e80aa04b9c02d6a7154f513b10e0a5d29150 73d59d945121a58e8cc13d9b5682a6f6
https://sqlite.org/2014/sqlite-autoconf-3080500.tar.gz 1957050 7f667e10ccebc26ab2086b8a30cb0a600ca0acae 0544ef6d7afd8ca797935ccc2685a9ed
https://sqlite.org/2014/sqlite-amalgamation-3080600.zip 1516814 6f6ab5ee38eed46ee0d9a8cd8f1e13f94bba99b3 4bb15df435d92b1e3e3d3b8383735919
https://sqlite.org/2014/sqlite-autoconf-3080600.tar.gz 1968379 c4b2911bc4a6e1dc2b411aa21d8c4f524113eb64 f7e4a156b583abeba349629e2364224b
https://sqlite.org/2014/sqlite-amalgamation-3080700.zip 1546546 3e23079f062fc06705eead4db108ee429878b532 fa949119440fd985c30c8520ae80cd04
https://sqlite.org/2014/sqlite-autoconf-3080700.tar.gz 1998217 8b773b006db46f3ffcbabe065e927823d13bf5c0 10cf856d953d8152cdaf180623102a23
https://sqlite.org/2014/sqlite-amalgamation-3080701.zip 1546683 6a6885488f7bac94f66602e59ccfa3cfe99ba1c1 4f5fc6319c6666a4ffe63a05c91786da
https://sqlite.org/2014/sqlite-autoconf-3080701.tar.gz 1998389 5601be1263842209d7c5dbf6128f1cc0b6bbe2e5 8ee4541ebb3e5739e7ef5e9046e30063
https://sqlite.org/2014/sqlite-amalgamation-3080702.zip 1547239 c8c54bca3eba079b40acb7297c48142698118f4d 10587262e4381358b707df75392c895f
https://sqlite.org/2014/sqlite-autoconf-3080702.tar.gz 1998965 175283383d010709f76fdd13cb0c8c3c329a1c67 0f847048745ddbdf0c441c82d096fbb4
https://sqlite.org/2014/sqlite-amalgamation-3080703.zip 1547471 eccc4873e4af3f1ef117ff5b89362eb85c1eb594 17a26a8ee88a7b833d0b9b3b733866a7
https://sqlite.org/2014/sqlite-autoconf-3080703.tar.gz 1999200 5cbe9aa4af1b32787045a251dba6debee3f061c0 e738c376d8106d16d1d3e7974e7e15d8
https://sqlite.org/2014/sqlite-amalgamation-3080704.zip 1547492 c59060db42ab51e4a063f4f91ce4bc7aad5ab5fa 429777f515056bb2ddb96e3c0719616d
https://sqlite.org/2014/sqlite-autoconf-3080704.tar.gz 1999230 70ca0b8884a6b145b7f777724670566e2b4f3cde 33bb8db0038317ce1b0480ca1185c7ba
https://sqlite.org/2015/sqlite-amalgamation-3080800.zip 1568082 f16a0c38877e456e20967d198b1b33b658de0805 86afd36daa375c3a719cd1b699263d70
https://sqlite.org/2015/sqlite-autoconf-3080800.tar.gz 2020708 abb7570b7e331ffede7f40f0037be6e0c03b73af 3a3caebf93308ddf5d120c393d9ecb82
https://sqlite.org/2015/sqlite-amalgamation-3080801.zip 1568437 ea9540e278ed6bb563fe7e21ca77d1e352629f33 b1cbcbd710bdfd762dc169f1676053b5
https://sqlite.org/2015/sqlite-autoconf-3080801.tar.gz 2021072 a140e3ada8a91a7bacf7a58cfd8f5f0e76796258 a6381941ffe8817ba19669ec0c0ede6f
https://sqlite.org/2015/sqlite-amalgamation-3080802.zip 1568467 b920eb0f346b55048a92192ded28fe5a8a38b7ca 262ae171e8244ba5fb91455b8d239a84
https://sqlite.org/2015/sqlite-autoconf-3080802.tar.gz 2021091 1db237523419af7110e1d92c6b766e965f9322e4 3425fa580a56880f56bcb887dd26cc06
https://sqlite.org/2015/sqlite-amalgamation-3080803.zip 1568475 fdac82a811fb62a6e326808f0010b2cd8e9055d2 97604645c615d81194541e1398687b61
https://sqlite.org/2015/sqlite-autoconf-3080803.tar.gz 2021112 2fe3f6226a2a08a2e814b97cd53e36bb3c597112 51272e875879ee893e51070b07c33888
https://sqlite.org/2015/sqlite-amalgamation-3080900.zip 1584545 f70e2841e4f26fe45849c2b9f5683bbe502d7523 02e9c3a6daa8b8587cf6bef828c2e33f
https://sqlite.org/2015/sqlite-autoconf-3080900.tar.gz 2037200 db70dee268700b312cbaa5b3e5cf8454e1c8b7b9 6a18d4609852f4b63f812a1059df468f
https://sqlite.org/2015/sqlite-amalgamation-3081002.zip 1596554 9ad3b477de04cc480fc0f7f1829c322c525f642d cb79cf3d8bab37072209d30cf4d11350
https://sqlite.org/2015/sqlite-autoconf-3081002.tar.gz 2049387 c2f2c17d3dc4c4e179d35cc04e4420636d48a152 a18bfc015cd49a1e7a961b7b77bc3b37
https://sqlite.org/2015/sqlite-amalgamation-3081001.zip 1596541 6e371df68010b9521a8d5e0795ef8a4acb6c3ef8 913478c71e1b179f87772aba9474f683
https://sqlite.org/2015/sqlite-autoconf-3081001.tar.gz 2049377 86bfed5752783fb24c051f3efac5972ce11023f0 8bd9d7df3f67bc771f53c6dda42face2
https://sqlite.org/2015/sqlite-amalgamation-3081000.zip 1596326 a0b064f09ce78e6e38453a7057eb05c4dbf25657 9e944369c6c477b76790abc45ced3843
https://sqlite.org/2015/sqlite-autoconf-3081000.tar.gz 2049170 7e92b4f78d4648fb2a97a4dc721490cc08653a0b 04d0311ef70818e8d914c1dc383eddff
https://sqlite.org/2015/sqlite-amalgamation-3081100.zip 1648834 bdd01fa7167b697f30ab76f42f4200bf58b75d98 1867eaa496c9e3dcd77c117fd05a9c5d
https://sqlite.org/2015/sqlite-autoconf-3081100.tar.gz 2101901 da1322d883548bdb0f6312941b20ed497e3754fa 77b451925121028befbddbf45ea2bc49
https://sqlite.org/2015/sqlite-amalgamation-3081101.zip 1648868 0dbb29c71c4385d1000c091f14475106784daceb 94907e831502e2080b76e281cfa24dde
https://sqlite.org/2015/sqlite-autoconf-3081101.tar.gz 2101885 d0e22d7e361b6f50830a3cdeafe35311443f8f9a 298c8d6af7ca314f68de92bc7a356cbe
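# The checksums file above uses the format "URL length SHA1 MD5", one entry
# per line, and setup.py's verifyurl compares a download against the matching
# entry. The following is a minimal standalone sketch (not part of setup.py)
# of that comparison; the URL and data in the example are illustrative only.

```python
import hashlib

def matches(entry, url, data):
    """Return True if (url, data) agrees with one 'URL length SHA1 MD5' line."""
    u, length, sha1, md5 = entry.split()
    return (u == url
            and int(length) == len(data)
            and sha1 == hashlib.sha1(data).hexdigest()
            and md5 == hashlib.md5(data).hexdigest())

# Hypothetical entry for the five bytes b"hello"
entry = ("https://example.org/fake.zip 5 "
         "aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d "
         "5d41402abc4b2a76b9719d911017c592")

print(matches(entry, "https://example.org/fake.zip", b"hello"))  # True
```

As in verifyurl, all three values must agree: a mismatch in any one of length, SHA1, or MD5 means the download is rejected.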