sqlalchemy-migrate-0.13.0/0000775000175000017500000000000013553670602015414 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/bindep.txt0000664000175000017500000000112113553670475017421 0ustar zuulzuul00000000000000# This is a cross-platform list tracking distribution packages needed for install and tests; # see https://docs.openstack.org/infra/bindep/ for additional information. # NOTE(mriedem): This list is woefully incomplete but is just listing mysql # and postgresql binary dependencies to make tools/test-setup.sh work. libmysqlclient-dev [platform:dpkg] libpq-dev [platform:dpkg test] mysql [platform:rpm] mysql-client [platform:dpkg] mysql-devel [platform:rpm test] mysql-server postgresql postgresql-client [platform:dpkg] postgresql-devel [platform:rpm test] postgresql-server [platform:rpm] sqlalchemy-migrate-0.13.0/PKG-INFO0000664000175000017500000000547613553670602016525 0ustar zuulzuul00000000000000Metadata-Version: 1.1 Name: sqlalchemy-migrate Version: 0.13.0 Summary: Database schema migration for SQLAlchemy Home-page: http://www.openstack.org/ Author: OpenStack Author-email: openstack-discuss@lists.openstack.org License: UNKNOWN Description: SQLAlchemy Migrate ================== Fork from http://code.google.com/p/sqlalchemy-migrate/ to get it working with SQLAlchemy 0.8. Inspired by Ruby on Rails' migrations, Migrate provides a way to deal with database schema changes in `SQLAlchemy `_ projects. Migrate extends SQLAlchemy to have database changeset handling. It provides a database change repository mechanism which can be used from the command line as well as from inside python code. Help ---- Sphinx documentation is available at the project page `readthedocs.org `_. Users and developers can be found at #openstack-dev on Freenode IRC network and at the public users mailing list `migrate-users `_. New releases and major changes are announced at the public announce mailing list `openstack-dev `_ and at the Python package index `sqlalchemy-migrate `_. 
Homepage is located at `stackforge `_ You can also clone a current `development version `_ Tests and Bugs -------------- To run automated tests: * install tox: ``pip install -U tox`` * run tox: ``tox`` * to test only a specific Python version: ``tox -e py27`` (Python 2.7) Please report any issues with sqlalchemy-migrate to the issue tracker at `Launchpad issues `_ Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.3 Classifier: Programming Language :: Python :: 3.4 Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: 3.6 sqlalchemy-migrate-0.13.0/migrate/0000775000175000017500000000000013553670602017044 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/exceptions.py0000664000175000017500000000363013553670475021611 0ustar zuulzuul00000000000000""" Provide exception classes for :mod:`migrate` """ class Error(Exception): """Error base class.""" class ApiError(Error): """Base class for API errors.""" class KnownError(ApiError): """A known error condition.""" class UsageError(ApiError): """A known error condition where help should be displayed.""" class ControlledSchemaError(Error): """Base class for controlled schema errors.""" class InvalidVersionError(ControlledSchemaError): """Invalid version number.""" class VersionNotFoundError(KeyError): """Specified version is not present.""" class DatabaseNotControlledError(ControlledSchemaError): """Database should be under version control, but it's not.""" class DatabaseAlreadyControlledError(ControlledSchemaError): 
"""Database shouldn't be under version control, but it is""" class WrongRepositoryError(ControlledSchemaError): """This database is under version control by another repository.""" class NoSuchTableError(ControlledSchemaError): """The table does not exist.""" class PathError(Error): """Base class for path errors.""" class PathNotFoundError(PathError): """A path with no file was required; found a file.""" class PathFoundError(PathError): """A path with a file was required; found no file.""" class RepositoryError(Error): """Base class for repository errors.""" class InvalidRepositoryError(RepositoryError): """Invalid repository error.""" class ScriptError(Error): """Base class for script errors.""" class InvalidScriptError(ScriptError): """Invalid script error.""" class InvalidVersionError(Error): """Invalid version error.""" # migrate.changeset class NotSupportedError(Error): """Not supported error""" class InvalidConstraintError(Error): """Invalid constraint error""" class MigrateDeprecationWarning(DeprecationWarning): """Warning for deprecated features in Migrate""" sqlalchemy-migrate-0.13.0/migrate/versioning/0000775000175000017500000000000013553670602021227 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/versioning/script/0000775000175000017500000000000013553670602022533 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/versioning/script/base.py0000664000175000017500000000324413553670475024032 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- import logging from migrate import exceptions from migrate.versioning.config import operations from migrate.versioning import pathed log = logging.getLogger(__name__) class BaseScript(pathed.Pathed): """Base class for other types of scripts. 
    All scripts have the following properties:

    source (script.source())
      The source code of the script.
    version (script.version())
      The version number of the script.
    operations (script.operations())
      The operations defined by the script: upgrade(), downgrade() or both.
      Returns a tuple of operations. You can also check for a single
      operation with e.g. script.operation(Script.ops.up).
    """
    # TODO: sphinxfy this and implement it correctly

    def __init__(self, path):
        log.debug('Loading script %s...' % path)
        self.verify(path)
        super(BaseScript, self).__init__(path)
        log.debug('Script %s loaded successfully' % path)

    @classmethod
    def verify(cls, path):
        """Ensure this is a valid script.

        This version simply ensures the script file's existence.

        :raises: :exc:`InvalidScriptError `
        """
        try:
            cls.require_found(path)
        except exceptions.PathNotFoundError:
            raise exceptions.InvalidScriptError(path)

    def source(self):
        """:returns: source code of the script.
        :rtype: string
        """
        with open(self.path) as fd:
            return fd.read()

    def run(self, engine):
        """Core of each BaseScript subclass.

        This method executes the script.
        """
        raise NotImplementedError()

==> sqlalchemy-migrate-0.13.0/migrate/versioning/script/sql.py <==

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import re
import shutil

import sqlparse

from migrate.versioning.script import base
from migrate.versioning.template import Template

log = logging.getLogger(__name__)


class SqlScript(base.BaseScript):
    """A file containing plain SQL statements."""

    @classmethod
    def create(cls, path, **opts):
        """Create an empty migration script at specified path.

        :returns: :class:`SqlScript instance `
        """
        cls.require_notfound(path)
        src = Template(opts.pop('templates_path', None)).get_sql_script(
            theme=opts.pop('templates_theme', None))
        shutil.copy(src, path)
        return cls(path)

    # TODO: why is step parameter even here?
    def run(self, engine, step=None):
        """Runs SQL script through raw dbapi execute call"""
        text = self.source()

        # Don't rely on SA's autocommit here
        # (SA uses .startswith to check if a commit is needed. What if script
        # starts with a comment?)
        conn = engine.connect()
        try:
            trans = conn.begin()
            try:
                # Ignore transaction management statements that are redundant
                # in a SQL script context and would result in an operational
                # error being returned.
                #
                # Note: we don't ignore ROLLBACK in migration scripts since
                # its usage there would be insane anyway; it's better to fail
                # on its occurrence than to ignore it (and commit the
                # transaction, which contradicts the whole idea of ROLLBACK).
                ignored_statements = ('BEGIN', 'END', 'COMMIT')
                ignored_regex = re.compile(r'^\s*(%s).*;?$'
                                           % '|'.join(ignored_statements),
                                           re.IGNORECASE)

                # NOTE(ihrachys): script may contain multiple statements, and
                # not all drivers reliably handle multistatement queries or
                # commands passed to .execute(), so split them and execute one
                # by one
                text = sqlparse.format(text, strip_comments=True,
                                       strip_whitespace=True)
                for statement in sqlparse.split(text):
                    if statement:
                        if re.match(ignored_regex, statement):
                            log.warning('"%s" found in SQL script; ignoring'
                                        % statement)
                        else:
                            conn.execute(statement)
                trans.commit()
            except Exception as e:
                log.error("SQL script %s failed: %s", self.path, e)
                trans.rollback()
                raise
        finally:
            conn.close()

==> sqlalchemy-migrate-0.13.0/migrate/versioning/script/__init__.py <==

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from migrate.versioning.script.base import BaseScript
from migrate.versioning.script.py import PythonScript
from migrate.versioning.script.sql import SqlScript

==> sqlalchemy-migrate-0.13.0/migrate/versioning/script/py.py <==

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import shutil
import warnings
import logging
import inspect
import migrate
from migrate.versioning import genmodel, schemadiff
from migrate.versioning.config import operations
from migrate.versioning.template import Template
from migrate.versioning.script import base
from migrate.versioning.util import import_path, load_model, with_engine
from migrate.exceptions import MigrateDeprecationWarning, InvalidScriptError, ScriptError

import six
from six.moves import StringIO

log = logging.getLogger(__name__)
__all__ = ['PythonScript']


class PythonScript(base.BaseScript):
    """Base for Python scripts"""

    @classmethod
    def create(cls, path, **opts):
        """Create an empty migration script at specified path.

        :returns: :class:`PythonScript instance `
        """
        cls.require_notfound(path)
        src = Template(opts.pop('templates_path', None)).get_script(
            theme=opts.pop('templates_theme', None))
        shutil.copy(src, path)
        return cls(path)

    @classmethod
    def make_update_script_for_model(cls, engine, oldmodel, model,
                                     repository, **opts):
        """Create a migration script based on difference between two SA
        models.

        :param repository: path to migrate repository
        :param oldmodel: dotted.module.name:SAClass or SAClass object
        :param model: dotted.module.name:SAClass or SAClass object
        :param engine: SQLAlchemy engine
        :type repository: string or :class:`Repository instance `
        :type oldmodel: string or Class
        :type model: string or Class
        :type engine: Engine instance
        :returns: Upgrade / Downgrade script
        :rtype: string
        """
        if isinstance(repository, six.string_types):
            # oh dear, an import cycle!
            from migrate.versioning.repository import Repository
            repository = Repository(repository)

        oldmodel = load_model(oldmodel)
        model = load_model(model)

        # Compute differences.
        diff = schemadiff.getDiffOfModelAgainstModel(
            model,
            oldmodel,
            excludeTables=[repository.version_table])
        # TODO: diff can be False (there is no difference?)
        decls, upgradeCommands, downgradeCommands = \
            genmodel.ModelGenerator(diff, engine).genB2AMigration()

        # Store differences into file.
        src = Template(opts.pop('templates_path', None)).get_script(
            opts.pop('templates_theme', None))
        with open(src) as f:
            contents = f.read()

        # generate source
        search = 'def upgrade(migrate_engine):'
        contents = contents.replace(search, '\n\n'.join((decls, search)), 1)
        if upgradeCommands:
            contents = contents.replace('    pass', upgradeCommands, 1)
        if downgradeCommands:
            contents = contents.replace('    pass', downgradeCommands, 1)
        return contents

    @classmethod
    def verify_module(cls, path):
        """Ensure path is a valid script.

        :param path: Script location
        :type path: string
        :raises: :exc:`InvalidScriptError `
        :returns: Python module
        """
        # Try to import and get the upgrade() func
        module = import_path(path)
        try:
            assert callable(module.upgrade)
        except Exception as e:
            raise InvalidScriptError(path + ': %s' % str(e))
        return module

    def preview_sql(self, url, step, **args):
        """Mocks SQLAlchemy Engine to store all executed calls in a string
        and runs :meth:`PythonScript.run `

        :returns: SQL file
        """
        buf = StringIO()
        args['engine_arg_strategy'] = 'mock'
        args['engine_arg_executor'] = lambda s, p='': buf.write(str(s) + p)

        @with_engine
        def go(url, step, **kw):
            engine = kw.pop('engine')
            self.run(engine, step)
            return buf.getvalue()

        return go(url, step, **args)

    def run(self, engine, step):
        """Core method of Script file.
        Executes :func:`upgrade` or :func:`downgrade` functions

        :param engine: SQLAlchemy Engine
        :param step: Operation to run: 'upgrade'/'downgrade', or a
            positive/negative integer
        :type engine: Engine instance
        :type step: str or int
        """
        if step in ('downgrade', 'upgrade'):
            op = step
        elif step > 0:
            op = 'upgrade'
        elif step < 0:
            op = 'downgrade'
        else:
            raise ScriptError("%d is not a valid step" % step)

        funcname = base.operations[op]
        script_func = self._func(funcname)

        # check for old way of using engine
        arg_spec = None
        if six.PY2:
            arg_spec = inspect.getargspec(script_func)
        else:
            arg_spec = inspect.getfullargspec(script_func)
        if not arg_spec[0]:
            raise TypeError(
                "upgrade/downgrade functions must accept engine"
                " parameter (since version 0.5.4)")

        script_func(engine)

    @property
    def module(self):
        """Calls :meth:`migrate.versioning.script.py.verify_module`
        and returns it.
        """
        if not hasattr(self, '_module'):
            self._module = self.verify_module(self.path)
        return self._module

    def _func(self, funcname):
        if not hasattr(self.module, funcname):
            msg = "Function '%s' is not defined in this script"
            raise ScriptError(msg % funcname)
        return getattr(self.module, funcname)

==> sqlalchemy-migrate-0.13.0/migrate/versioning/templates/repository/default/versions/__init__.py <==

==> sqlalchemy-migrate-0.13.0/migrate/versioning/templates/repository/default/migrate.cfg <==

[db_settings]
# Used to identify which repository this database is versioned under.
# You can use the name of your project.
repository_id={{ locals().pop('repository_id') }}

# The name of the database table used to track the schema version.
# This name shouldn't already be used by your project.
# If this is changed once a database is under version control, you'll need to
# change the table name in each database too.
version_table={{ locals().pop('version_table') }}

# When committing a change script, Migrate will attempt to generate the
# sql for all supported databases; normally, if one of them fails - probably
# because you don't have that database installed - it is ignored and the
# commit continues, perhaps ending successfully.
# Databases in this list MUST compile successfully during a commit, or the
# entire commit will fail. List the databases your application will actually
# be using to ensure your updates to that database work properly.
# This must be a list; example: ['postgres', 'sqlite']
required_dbs={{ locals().pop('required_dbs') }}

# When creating new change scripts, Migrate will stamp the new script with
# a version number. By default this is latest_version + 1. You can set this
# to 'true' to tell Migrate to use the UTC timestamp instead.
use_timestamp_numbering={{ locals().pop('use_timestamp_numbering') }}

==> sqlalchemy-migrate-0.13.0/migrate/versioning/templates/repository/default/__init__.py <==

==> sqlalchemy-migrate-0.13.0/migrate/versioning/templates/repository/default/README <==

This is a database migration repository.
More information at http://code.google.com/p/sqlalchemy-migrate/ sqlalchemy-migrate-0.13.0/migrate/versioning/templates/repository/pylons/0000775000175000017500000000000013553670602026770 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/versioning/templates/repository/pylons/versions/0000775000175000017500000000000013553670602030640 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/versioning/templates/repository/pylons/versions/__init__.py0000664000175000017500000000000013553670475032747 0ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/versioning/templates/repository/pylons/migrate.cfg0000664000175000017500000000250313553670475031111 0ustar zuulzuul00000000000000[db_settings] # Used to identify which repository this database is versioned under. # You can use the name of your project. repository_id={{ locals().pop('repository_id') }} # The name of the database table used to track the schema version. # This name shouldn't already be used by your project. # If this is changed once a database is under version control, you'll need to # change the table name in each database too. version_table={{ locals().pop('version_table') }} # When committing a change script, Migrate will attempt to generate the # sql for all supported databases; normally, if one of them fails - probably # because you don't have that database installed - it is ignored and the # commit continues, perhaps ending successfully. # Databases in this list MUST compile successfully during a commit, or the # entire commit will fail. List the databases your application will actually # be using to ensure your updates to that database work properly. # This must be a list; example: ['postgres','sqlite'] required_dbs={{ locals().pop('required_dbs') }} # When creating new change scripts, Migrate will stamp the new script with # a version number. By default this is latest_version + 1. You can set this # to 'true' to tell Migrate to use the UTC timestamp instead. 
use_timestamp_numbering={{ locals().pop('use_timestamp_numbering') }} sqlalchemy-migrate-0.13.0/migrate/versioning/templates/repository/pylons/__init__.py0000664000175000017500000000000013553670475031077 0ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/versioning/templates/repository/pylons/README0000664000175000017500000000015313553670475027657 0ustar zuulzuul00000000000000This is a database migration repository. More information at http://code.google.com/p/sqlalchemy-migrate/ sqlalchemy-migrate-0.13.0/migrate/versioning/templates/repository/__init__.py0000664000175000017500000000000013553670475027553 0ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/versioning/templates/script/0000775000175000017500000000000013553670602024531 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/versioning/templates/script/default.py_tmpl0000664000175000017500000000044313553670475027574 0ustar zuulzuul00000000000000from sqlalchemy import * from migrate import * def upgrade(migrate_engine): # Upgrade operations go here. Don't create your own engine; bind # migrate_engine to your metadata pass def downgrade(migrate_engine): # Operations to reverse the above upgrade go here. pass sqlalchemy-migrate-0.13.0/migrate/versioning/templates/script/__init__.py0000664000175000017500000000000013553670475026640 0ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/versioning/templates/script/pylons.py_tmpl0000664000175000017500000000044313553670475027474 0ustar zuulzuul00000000000000from sqlalchemy import * from migrate import * def upgrade(migrate_engine): # Upgrade operations go here. Don't create your own engine; bind # migrate_engine to your metadata pass def downgrade(migrate_engine): # Operations to reverse the above upgrade go here. 
pass sqlalchemy-migrate-0.13.0/migrate/versioning/templates/sql_script/0000775000175000017500000000000013553670602025410 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/versioning/templates/sql_script/default.py_tmpl0000664000175000017500000000000013553670475030440 0ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/versioning/templates/sql_script/pylons.py_tmpl0000664000175000017500000000000013553670475030340 0ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/versioning/templates/__init__.py0000664000175000017500000000000013553670475025334 0ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/versioning/templates/manage/0000775000175000017500000000000013553670602024455 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/versioning/templates/manage/default.py_tmpl0000664000175000017500000000047513553670475027525 0ustar zuulzuul00000000000000#!/usr/bin/env python from migrate.versioning.shell import main {{py: import six _vars = locals().copy() del _vars['__template_name__'] del _vars['six'] _vars.pop('repository_name', None) defaults = ", ".join(["%s='%s'" % var for var in six.iteritems(_vars)]) }} if __name__ == '__main__': main({{ defaults }}) sqlalchemy-migrate-0.13.0/migrate/versioning/templates/manage/pylons.py_tmpl0000664000175000017500000000157013553670475027422 0ustar zuulzuul00000000000000#!/usr/bin/python # -*- coding: utf-8 -*- import sys from sqlalchemy import engine_from_config from paste.deploy.loadwsgi import ConfigLoader from migrate.versioning.shell import main from {{ locals().pop('repository_name') }}.model import migrations if '-c' in sys.argv: pos = sys.argv.index('-c') conf_path = sys.argv[pos + 1] del sys.argv[pos:pos + 2] else: conf_path = 'development.ini' {{py: import six _vars = locals().copy() del _vars['__template_name__'] del _vars['six'] defaults = ", ".join(["%s='%s'" % var for var in six.iteritems(_vars)]) }} conf_dict = ConfigLoader(conf_path).parser._sections['app:main'] # 
migrate supports passing url as an existing Engine instance (since 0.6.0) # usage: migrate -c path/to/config.ini COMMANDS if __name__ == '__main__': main(url=engine_from_config(conf_dict), repository=migrations.__path__[0],{{ defaults }}) sqlalchemy-migrate-0.13.0/migrate/versioning/repository.py0000664000175000017500000001736613553670475024045 0ustar zuulzuul00000000000000""" SQLAlchemy migrate repository management. """ import os import shutil import string import logging from pkg_resources import resource_filename from tempita import Template as TempitaTemplate from migrate import exceptions from migrate.versioning import version, pathed, cfgparse from migrate.versioning.template import Template from migrate.versioning.config import * log = logging.getLogger(__name__) class Changeset(dict): """A collection of changes to be applied to a database. Changesets are bound to a repository and manage a set of scripts from that repository. Behaves like a dict, for the most part. Keys are ordered based on step value. """ def __init__(self, start, *changes, **k): """ Give a start version; step must be explicitly stated. """ self.step = k.pop('step', 1) self.start = version.VerNum(start) self.end = self.start for change in changes: self.add(change) def __iter__(self): return iter(self.items()) def keys(self): """ In a series of upgrades x -> y, keys are version x. Sorted. 
""" ret = list(super(Changeset, self).keys()) # Reverse order if downgrading ret.sort(reverse=(self.step < 1)) return ret def values(self): return [self[k] for k in self.keys()] def items(self): return zip(self.keys(), self.values()) def add(self, change): """Add new change to changeset""" key = self.end self.end += self.step self[key] = change def run(self, *p, **k): """Run the changeset scripts""" for ver, script in self: script.run(*p, **k) class Repository(pathed.Pathed): """A project's change script repository""" _config = 'migrate.cfg' _versions = 'versions' def __init__(self, path): log.debug('Loading repository %s...' % path) self.verify(path) super(Repository, self).__init__(path) self.config = cfgparse.Config(os.path.join(self.path, self._config)) self.versions = version.Collection(os.path.join(self.path, self._versions)) log.debug('Repository %s loaded successfully' % path) log.debug('Config: %r' % self.config.to_dict()) @classmethod def verify(cls, path): """ Ensure the target path is a valid repository. :raises: :exc:`InvalidRepositoryError ` """ # Ensure the existence of required files try: cls.require_found(path) cls.require_found(os.path.join(path, cls._config)) cls.require_found(os.path.join(path, cls._versions)) except exceptions.PathNotFoundError: raise exceptions.InvalidRepositoryError(path) @classmethod def prepare_config(cls, tmpl_dir, name, options=None): """ Prepare a project configuration file for a new project. 
        :param tmpl_dir: Path to repository template
        :param name: Repository name
        :param options: Extra options used to populate the config file
        :type tmpl_dir: string
        :type name: string
        :type options: dict
        :returns: Populated config file
        """
        if options is None:
            options = {}
        options.setdefault('version_table', 'migrate_version')
        options.setdefault('repository_id', name)
        options.setdefault('required_dbs', [])
        options.setdefault('use_timestamp_numbering', False)

        tmpl = open(os.path.join(tmpl_dir, cls._config)).read()
        ret = TempitaTemplate(tmpl).substitute(options)

        # cleanup
        del options['__template_name__']

        return ret

    @classmethod
    def create(cls, path, name, **opts):
        """Create a repository at a specified path"""
        cls.require_notfound(path)
        theme = opts.pop('templates_theme', None)
        t_path = opts.pop('templates_path', None)

        # Create repository
        tmpl_dir = Template(t_path).get_repository(theme=theme)
        shutil.copytree(tmpl_dir, path)

        # Edit config defaults
        config_text = cls.prepare_config(tmpl_dir, name, options=opts)
        fd = open(os.path.join(path, cls._config), 'w')
        fd.write(config_text)
        fd.close()

        opts['repository_name'] = name

        # Create a management script
        manager = os.path.join(path, 'manage.py')
        Repository.create_manage_file(manager, templates_theme=theme,
                                      templates_path=t_path, **opts)

        return cls(path)

    def create_script(self, description, **k):
        """API to :meth:`migrate.versioning.version.Collection.create_new_python_version`"""
        k['use_timestamp_numbering'] = self.use_timestamp_numbering
        self.versions.create_new_python_version(description, **k)

    def create_script_sql(self, database, description, **k):
        """API to :meth:`migrate.versioning.version.Collection.create_new_sql_version`"""
        k['use_timestamp_numbering'] = self.use_timestamp_numbering
        self.versions.create_new_sql_version(database, description, **k)

    @property
    def latest(self):
        """API to :attr:`migrate.versioning.version.Collection.latest`"""
        return self.versions.latest

    @property
    def version_table(self):
        """Returns version_table name specified in config"""
        return self.config.get('db_settings', 'version_table')

    @property
    def id(self):
        """Returns repository id specified in config"""
        return self.config.get('db_settings', 'repository_id')

    @property
    def use_timestamp_numbering(self):
        """Returns use_timestamp_numbering specified in config"""
        if self.config.has_option('db_settings', 'use_timestamp_numbering'):
            return self.config.getboolean('db_settings',
                                          'use_timestamp_numbering')
        return False

    def version(self, *p, **k):
        """API to :attr:`migrate.versioning.version.Collection.version`"""
        return self.versions.version(*p, **k)

    @classmethod
    def clear(cls):
        # TODO: deletes repo
        super(Repository, cls).clear()
        version.Collection.clear()

    def changeset(self, database, start, end=None):
        """Create a changeset to migrate this database from ver. start to
        end/latest.

        :param database: name of database to generate changeset
        :param start: version to start at
        :param end: version to end at (latest if None given)
        :type database: string
        :type start: int
        :type end: int
        :returns: :class:`Changeset instance `
        """
        start = version.VerNum(start)

        if end is None:
            end = self.latest
        else:
            end = version.VerNum(end)

        if start <= end:
            step = 1
            range_mod = 1
            op = 'upgrade'
        else:
            step = -1
            range_mod = 0
            op = 'downgrade'

        versions = range(int(start) + range_mod, int(end) + range_mod, step)
        changes = [self.version(v).script(database, op) for v in versions]
        ret = Changeset(start, step=step, *changes)
        return ret

    @classmethod
    def create_manage_file(cls, file_, **opts):
        """Create a project management script (manage.py)

        :param file_: Destination file to be written
        :param opts: Options that are passed to
            :func:`migrate.versioning.shell.main`
        """
        mng_file = Template(opts.pop('templates_path', None))\
            .get_manage(theme=opts.pop('templates_theme', None))

        tmpl = open(mng_file).read()
        fd = open(file_, 'w')
        fd.write(TempitaTemplate(tmpl).substitute(opts))
        fd.close()
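The upgrade/downgrade range arithmetic in `Repository.changeset` can be sketched standalone; the function name below is illustrative and not part of the migrate API:

```python
def changeset_versions(start, end):
    """Version numbers whose scripts run when migrating start -> end.

    Mirrors the step/range_mod arithmetic in Repository.changeset:
    an upgrade visits start+1 .. end ascending, while a downgrade
    visits start .. end+1 descending.
    """
    if start <= end:
        step, range_mod = 1, 1    # upgrade
    else:
        step, range_mod = -1, 0   # downgrade
    return list(range(start + range_mod, end + range_mod, step))
```

For example, upgrading from version 1 to 3 visits the scripts numbered 2 and 3, downgrading from 3 to 1 visits 3 and 2, and equal start and end yield an empty changeset.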
sqlalchemy-migrate-0.13.0/migrate/versioning/genmodel.py0000664000175000017500000002613413553670475023411 0ustar zuulzuul00000000000000""" Code to generate a Python model from a database or differences between a model and database. Some of this is borrowed heavily from the AutoCode project at: http://code.google.com/p/sqlautocode/ """ import sys import logging import six import sqlalchemy import migrate import migrate.changeset log = logging.getLogger(__name__) HEADER = """ ## File autogenerated by genmodel.py from sqlalchemy import * """ META_DEFINITION = "meta = MetaData()" DECLARATIVE_DEFINITION = """ from sqlalchemy.ext import declarative Base = declarative.declarative_base() """ class ModelGenerator(object): """Various transformations from an A, B diff. In the implementation, A tends to be called the model and B the database (although this is not true of all diffs). The diff is directionless, but transformations apply the diff in a particular direction, described in the method name. """ def __init__(self, diff, engine, declarative=False): self.diff = diff self.engine = engine self.declarative = declarative def column_repr(self, col): kwarg = [] if col.key != col.name: kwarg.append('key') if col.primary_key: col.primary_key = True # otherwise it dumps it as 1 kwarg.append('primary_key') if not col.nullable: kwarg.append('nullable') if col.onupdate: kwarg.append('onupdate') if col.default: if col.primary_key: # I found that PostgreSQL automatically creates a # default value for the sequence, but let's not show # that. 
pass else: kwarg.append('default') args = ['%s=%r' % (k, getattr(col, k)) for k in kwarg] # crs: not sure if this is good idea, but it gets rid of extra # u'' if six.PY3: name = col.name else: name = col.name.encode('utf8') type_ = col.type for cls in col.type.__class__.__mro__: if cls.__module__ == 'sqlalchemy.types' and \ not cls.__name__.isupper(): if cls is not type_.__class__: type_ = cls() break type_repr = repr(type_) if type_repr.endswith('()'): type_repr = type_repr[:-2] constraints = [repr(cn) for cn in col.constraints] data = { 'name': name, 'commonStuff': ', '.join([type_repr] + constraints + args), } if self.declarative: return """%(name)s = Column(%(commonStuff)s)""" % data else: return """Column(%(name)r, %(commonStuff)s)""" % data def _getTableDefn(self, table, metaName='meta'): out = [] tableName = table.name if self.declarative: out.append("class %(table)s(Base):" % {'table': tableName}) out.append(" __tablename__ = '%(table)s'\n" % {'table': tableName}) for col in table.columns: out.append(" %s" % self.column_repr(col)) out.append('\n') else: out.append("%(table)s = Table('%(table)s', %(meta)s," % {'table': tableName, 'meta': metaName}) for col in table.columns: out.append(" %s," % self.column_repr(col)) out.append(")\n") return out def _get_tables(self,missingA=False,missingB=False,modified=False): to_process = [] for bool_,names,metadata in ( (missingA,self.diff.tables_missing_from_A,self.diff.metadataB), (missingB,self.diff.tables_missing_from_B,self.diff.metadataA), (modified,self.diff.tables_different,self.diff.metadataA), ): if bool_: for name in names: yield metadata.tables.get(name) def _genModelHeader(self, tables): out = [] import_index = [] out.append(HEADER) for table in tables: for col in table.columns: if "dialects" in col.type.__module__ and \ col.type.__class__ not in import_index: out.append("from " + col.type.__module__ + " import " + col.type.__class__.__name__) import_index.append(col.type.__class__) out.append("") if 
self.declarative: out.append(DECLARATIVE_DEFINITION) else: out.append(META_DEFINITION) out.append("") return out def genBDefinition(self): """Generates the source code for a definition of B. Assumes a diff where A is empty. Was: toPython. Assume database (B) is current and model (A) is empty. """ out = [] out.extend(self._genModelHeader(self._get_tables(missingA=True))) for table in self._get_tables(missingA=True): out.extend(self._getTableDefn(table)) return '\n'.join(out) def genB2AMigration(self, indent=' '): '''Generate a migration from B to A. Was: toUpgradeDowngradePython Assume model (A) is most current and database (B) is out-of-date. ''' decls = ['from migrate.changeset import schema', 'pre_meta = MetaData()', 'post_meta = MetaData()', ] upgradeCommands = ['pre_meta.bind = migrate_engine', 'post_meta.bind = migrate_engine'] downgradeCommands = list(upgradeCommands) for tn in self.diff.tables_missing_from_A: pre_table = self.diff.metadataB.tables[tn] decls.extend(self._getTableDefn(pre_table, metaName='pre_meta')) upgradeCommands.append( "pre_meta.tables[%(table)r].drop()" % {'table': tn}) downgradeCommands.append( "pre_meta.tables[%(table)r].create()" % {'table': tn}) for tn in self.diff.tables_missing_from_B: post_table = self.diff.metadataA.tables[tn] decls.extend(self._getTableDefn(post_table, metaName='post_meta')) upgradeCommands.append( "post_meta.tables[%(table)r].create()" % {'table': tn}) downgradeCommands.append( "post_meta.tables[%(table)r].drop()" % {'table': tn}) for (tn, td) in six.iteritems(self.diff.tables_different): if td.columns_missing_from_A or td.columns_different: pre_table = self.diff.metadataB.tables[tn] decls.extend(self._getTableDefn( pre_table, metaName='pre_meta')) if td.columns_missing_from_B or td.columns_different: post_table = self.diff.metadataA.tables[tn] decls.extend(self._getTableDefn( post_table, metaName='post_meta')) for col in td.columns_missing_from_A: upgradeCommands.append( 
'pre_meta.tables[%r].columns[%r].drop()' % (tn, col)) downgradeCommands.append( 'pre_meta.tables[%r].columns[%r].create()' % (tn, col)) for col in td.columns_missing_from_B: upgradeCommands.append( 'post_meta.tables[%r].columns[%r].create()' % (tn, col)) downgradeCommands.append( 'post_meta.tables[%r].columns[%r].drop()' % (tn, col)) for modelCol, databaseCol, modelDecl, databaseDecl in td.columns_different: upgradeCommands.append( 'assert False, "Can\'t alter columns: %s:%s=>%s"' % ( tn, modelCol.name, databaseCol.name)) downgradeCommands.append( 'assert False, "Can\'t alter columns: %s:%s=>%s"' % ( tn, modelCol.name, databaseCol.name)) return ( '\n'.join(decls), '\n'.join('%s%s' % (indent, line) for line in upgradeCommands), '\n'.join('%s%s' % (indent, line) for line in downgradeCommands)) def _db_can_handle_this_change(self,td): """Check if the database can handle going from B to A.""" if (td.columns_missing_from_B and not td.columns_missing_from_A and not td.columns_different): # Even sqlite can handle column additions. return True else: return not self.engine.url.drivername.startswith('sqlite') def runB2A(self): """Goes from B to A. Was: applyModel. Apply model (A) to current database (B). """ meta = sqlalchemy.MetaData(self.engine) for table in self._get_tables(missingA=True): table = table.tometadata(meta) table.drop() for table in self._get_tables(missingB=True): table = table.tometadata(meta) table.create() for modelTable in self._get_tables(modified=True): tableName = modelTable.name modelTable = modelTable.tometadata(meta) dbTable = self.diff.metadataB.tables[tableName] td = self.diff.tables_different[tableName] if self._db_can_handle_this_change(td): for col in td.columns_missing_from_B: modelTable.columns[col].create() for col in td.columns_missing_from_A: dbTable.columns[col].drop() # XXX handle column changes here. 
else: # Sqlite doesn't support drop column, so you have to # do more: create temp table, copy data to it, drop # old table, create new table, copy data back. # # I wonder if this is guaranteed to be unique? tempName = '_temp_%s' % modelTable.name def getCopyStatement(): preparer = self.engine.dialect.preparer commonCols = [] for modelCol in modelTable.columns: if modelCol.name in dbTable.columns: commonCols.append(modelCol.name) commonColsStr = ', '.join(commonCols) return 'INSERT INTO %s (%s) SELECT %s FROM %s' % \ (tableName, commonColsStr, commonColsStr, tempName) # Move the data in one transaction, so that we don't # leave the database in a nasty state. connection = self.engine.connect() trans = connection.begin() try: connection.execute( 'CREATE TEMPORARY TABLE %s as SELECT * from %s' % \ (tempName, modelTable.name)) # make sure the drop takes place inside our # transaction with the bind parameter modelTable.drop(bind=connection) modelTable.create(bind=connection) connection.execute(getCopyStatement()) connection.execute('DROP TABLE %s' % tempName) trans.commit() except: trans.rollback() raise sqlalchemy-migrate-0.13.0/migrate/versioning/schema.py0000664000175000017500000001701013553670475023050 0ustar zuulzuul00000000000000""" Database schema version management. 
""" import sys import logging import six from sqlalchemy import (Table, Column, MetaData, String, Text, Integer, create_engine) from sqlalchemy.sql import and_ from sqlalchemy import exc as sa_exceptions from sqlalchemy.sql import bindparam from migrate import exceptions from migrate.changeset import SQLA_07 from migrate.versioning import genmodel, schemadiff from migrate.versioning.repository import Repository from migrate.versioning.util import load_model from migrate.versioning.version import VerNum log = logging.getLogger(__name__) class ControlledSchema(object): """A database under version control""" def __init__(self, engine, repository): if isinstance(repository, six.string_types): repository = Repository(repository) self.engine = engine self.repository = repository self.meta = MetaData(engine) self.load() def __eq__(self, other): """Compare two schemas by repositories and versions""" return (self.repository is other.repository \ and self.version == other.version) def load(self): """Load controlled schema version info from DB""" tname = self.repository.version_table try: if not hasattr(self, 'table') or self.table is None: self.table = Table(tname, self.meta, autoload=True) result = self.engine.execute(self.table.select( self.table.c.repository_id == str(self.repository.id))) data = list(result)[0] except: cls, exc, tb = sys.exc_info() six.reraise(exceptions.DatabaseNotControlledError, exceptions.DatabaseNotControlledError(str(exc)), tb) self.version = data['version'] return data def drop(self): """ Remove version control from a database. """ if SQLA_07: try: self.table.drop() except sa_exceptions.DatabaseError: raise exceptions.DatabaseNotControlledError(str(self.table)) else: try: self.table.drop() except (sa_exceptions.SQLError): raise exceptions.DatabaseNotControlledError(str(self.table)) def changeset(self, version=None): """API to Changeset creation. Uses self.version for start version and engine.name to get database name. 
""" database = self.engine.name start_ver = self.version changeset = self.repository.changeset(database, start_ver, version) return changeset def runchange(self, ver, change, step): startver = ver endver = ver + step # Current database version must be correct! Don't run if corrupt! if self.version != startver: raise exceptions.InvalidVersionError("%s is not %s" % \ (self.version, startver)) # Run the change change.run(self.engine, step) # Update/refresh database version self.update_repository_table(startver, endver) self.load() def update_repository_table(self, startver, endver): """Update version_table with new information""" update = self.table.update(and_(self.table.c.version == int(startver), self.table.c.repository_id == str(self.repository.id))) self.engine.execute(update, version=int(endver)) def upgrade(self, version=None): """ Upgrade (or downgrade) to a specified version, or latest version. """ changeset = self.changeset(version) for ver, change in changeset: self.runchange(ver, change, changeset.step) def update_db_from_model(self, model): """ Modify the database to match the structure of the current Python model. """ model = load_model(model) diff = schemadiff.getDiffOfModelAgainstDatabase( model, self.engine, excludeTables=[self.repository.version_table] ) genmodel.ModelGenerator(diff,self.engine).runB2A() self.update_repository_table(self.version, int(self.repository.latest)) self.load() @classmethod def create(cls, engine, repository, version=None): """ Declare a database to be under a repository's version control. 
:raises: :exc:`DatabaseAlreadyControlledError` :returns: :class:`ControlledSchema` """ # Confirm that the version # is valid: positive, integer, # exists in repos if isinstance(repository, six.string_types): repository = Repository(repository) version = cls._validate_version(repository, version) table = cls._create_table_version(engine, repository, version) # TODO: history table # Load repository information and return return cls(engine, repository) @classmethod def _validate_version(cls, repository, version): """ Ensures this is a valid version number for this repository. :raises: :exc:`InvalidVersionError` if invalid :return: valid version number """ if version is None: version = 0 try: version = VerNum(version) # raises valueerror if version < 0 or version > repository.latest: raise ValueError() except ValueError: raise exceptions.InvalidVersionError(version) return version @classmethod def _create_table_version(cls, engine, repository, version): """ Creates the versioning table in a database. :raises: :exc:`DatabaseAlreadyControlledError` """ # Create tables tname = repository.version_table meta = MetaData(engine) table = Table( tname, meta, Column('repository_id', String(250), primary_key=True), Column('repository_path', Text), Column('version', Integer), ) # there can be multiple repositories/schemas in the same db if not table.exists(): table.create() # test for existing repository_id s = table.select(table.c.repository_id == bindparam("repository_id")) result = engine.execute(s, repository_id=repository.id) if result.fetchone(): raise exceptions.DatabaseAlreadyControlledError # Insert data engine.execute(table.insert().values( repository_id=repository.id, repository_path=repository.path, version=int(version))) return table @classmethod def compare_model_to_db(cls, engine, model, repository): """ Compare the current model against the current database. 
""" if isinstance(repository, six.string_types): repository = Repository(repository) model = load_model(model) diff = schemadiff.getDiffOfModelAgainstDatabase( model, engine, excludeTables=[repository.version_table]) return diff @classmethod def create_model(cls, engine, repository, declarative=False): """ Dump the current database as a Python model. """ if isinstance(repository, six.string_types): repository = Repository(repository) diff = schemadiff.getDiffOfModelAgainstDatabase( MetaData(), engine, excludeTables=[repository.version_table] ) return genmodel.ModelGenerator(diff, engine, declarative).genBDefinition() sqlalchemy-migrate-0.13.0/migrate/versioning/schemadiff.py0000664000175000017500000002117013553670475023703 0ustar zuulzuul00000000000000""" Schema differencing support. """ import logging import sqlalchemy from sqlalchemy.types import Float log = logging.getLogger(__name__) def getDiffOfModelAgainstDatabase(metadata, engine, excludeTables=None): """ Return differences of model against database. :return: object which will evaluate to :keyword:`True` if there \ are differences else :keyword:`False`. """ db_metadata = sqlalchemy.MetaData(engine) db_metadata.reflect() # sqlite will include a dynamically generated 'sqlite_sequence' table if # there are autoincrement sequences in the database; this should not be # compared. if engine.dialect.name == 'sqlite': if 'sqlite_sequence' in db_metadata.tables: db_metadata.remove(db_metadata.tables['sqlite_sequence']) return SchemaDiff(metadata, db_metadata, labelA='model', labelB='database', excludeTables=excludeTables) def getDiffOfModelAgainstModel(metadataA, metadataB, excludeTables=None): """ Return differences of model against another model. :return: object which will evaluate to :keyword:`True` if there \ are differences else :keyword:`False`. 
""" return SchemaDiff(metadataA, metadataB, excludeTables=excludeTables) class ColDiff(object): """ Container for differences in one :class:`~sqlalchemy.schema.Column` between two :class:`~sqlalchemy.schema.Table` instances, ``A`` and ``B``. .. attribute:: col_A The :class:`~sqlalchemy.schema.Column` object for A. .. attribute:: col_B The :class:`~sqlalchemy.schema.Column` object for B. .. attribute:: type_A The most generic type of the :class:`~sqlalchemy.schema.Column` object in A. .. attribute:: type_B The most generic type of the :class:`~sqlalchemy.schema.Column` object in A. """ diff = False def __init__(self,col_A,col_B): self.col_A = col_A self.col_B = col_B self.type_A = col_A.type self.type_B = col_B.type self.affinity_A = self.type_A._type_affinity self.affinity_B = self.type_B._type_affinity if self.affinity_A is not self.affinity_B: self.diff = True return if isinstance(self.type_A,Float) or isinstance(self.type_B,Float): if not (isinstance(self.type_A,Float) and isinstance(self.type_B,Float)): self.diff=True return for attr in ('precision','scale','length'): A = getattr(self.type_A,attr,None) B = getattr(self.type_B,attr,None) if not (A is None or B is None) and A!=B: self.diff=True return def __nonzero__(self): return self.diff __bool__ = __nonzero__ class TableDiff(object): """ Container for differences in one :class:`~sqlalchemy.schema.Table` between two :class:`~sqlalchemy.schema.MetaData` instances, ``A`` and ``B``. .. attribute:: columns_missing_from_A A sequence of column names that were found in B but weren't in A. .. attribute:: columns_missing_from_B A sequence of column names that were found in A but weren't in B. .. attribute:: columns_different A dictionary containing information about columns that were found to be different. It maps column names to a :class:`ColDiff` objects describing the differences found. 
""" __slots__ = ( 'columns_missing_from_A', 'columns_missing_from_B', 'columns_different', ) def __nonzero__(self): return bool( self.columns_missing_from_A or self.columns_missing_from_B or self.columns_different ) __bool__ = __nonzero__ class SchemaDiff(object): """ Compute the difference between two :class:`~sqlalchemy.schema.MetaData` objects. The string representation of a :class:`SchemaDiff` will summarise the changes found between the two :class:`~sqlalchemy.schema.MetaData` objects. The length of a :class:`SchemaDiff` will give the number of changes found, enabling it to be used much like a boolean in expressions. :param metadataA: First :class:`~sqlalchemy.schema.MetaData` to compare. :param metadataB: Second :class:`~sqlalchemy.schema.MetaData` to compare. :param labelA: The label to use in messages about the first :class:`~sqlalchemy.schema.MetaData`. :param labelB: The label to use in messages about the second :class:`~sqlalchemy.schema.MetaData`. :param excludeTables: A sequence of table names to exclude. .. attribute:: tables_missing_from_A A sequence of table names that were found in B but weren't in A. .. attribute:: tables_missing_from_B A sequence of table names that were found in A but weren't in B. .. attribute:: tables_different A dictionary containing information about tables that were found to be different. It maps table names to a :class:`TableDiff` objects describing the differences found. 
""" def __init__(self, metadataA, metadataB, labelA='metadataA', labelB='metadataB', excludeTables=None): self.metadataA, self.metadataB = metadataA, metadataB self.labelA, self.labelB = labelA, labelB self.label_width = max(len(labelA),len(labelB)) excludeTables = set(excludeTables or []) A_table_names = set(metadataA.tables.keys()) B_table_names = set(metadataB.tables.keys()) self.tables_missing_from_A = sorted( B_table_names - A_table_names - excludeTables ) self.tables_missing_from_B = sorted( A_table_names - B_table_names - excludeTables ) self.tables_different = {} for table_name in A_table_names.intersection(B_table_names): td = TableDiff() A_table = metadataA.tables[table_name] B_table = metadataB.tables[table_name] A_column_names = set(A_table.columns.keys()) B_column_names = set(B_table.columns.keys()) td.columns_missing_from_A = sorted( B_column_names - A_column_names ) td.columns_missing_from_B = sorted( A_column_names - B_column_names ) td.columns_different = {} for col_name in A_column_names.intersection(B_column_names): cd = ColDiff( A_table.columns.get(col_name), B_table.columns.get(col_name) ) if cd: td.columns_different[col_name]=cd # XXX - index and constraint differences should # be checked for here if td: self.tables_different[table_name]=td def __str__(self): ''' Summarize differences. 
''' out = [] column_template =' %%%is: %%r' % self.label_width for names,label in ( (self.tables_missing_from_A,self.labelA), (self.tables_missing_from_B,self.labelB), ): if names: out.append( ' tables missing from %s: %s' % ( label,', '.join(sorted(names)) ) ) for name,td in sorted(self.tables_different.items()): out.append( ' table with differences: %s' % name ) for names,label in ( (td.columns_missing_from_A,self.labelA), (td.columns_missing_from_B,self.labelB), ): if names: out.append( ' %s missing these columns: %s' % ( label,', '.join(sorted(names)) ) ) for name,cd in td.columns_different.items(): out.append(' column with differences: %s' % name) out.append(column_template % (self.labelA,cd.col_A)) out.append(column_template % (self.labelB,cd.col_B)) if out: out.insert(0, 'Schema diffs:') return '\n'.join(out) else: return 'No schema diffs' def __len__(self): """ Used in bool evaluation, return of 0 means no diffs. """ return ( len(self.tables_missing_from_A) + len(self.tables_missing_from_B) + len(self.tables_different) ) sqlalchemy-migrate-0.13.0/migrate/versioning/api.py0000664000175000017500000003154513553670475022372 0ustar zuulzuul00000000000000""" This module provides an external API to the versioning system. .. versionchanged:: 0.6.0 :func:`migrate.versioning.api.test` and schema diff functions changed order of positional arguments so all accept `url` and `repository` as first arguments. .. versionchanged:: 0.5.4 ``--preview_sql`` displays source file when using SQL scripts. If Python script is used, it runs the action with mocked engine and returns captured SQL statements. .. versionchanged:: 0.5.4 Deprecated ``--echo`` parameter in favour of new :func:`migrate.versioning.util.construct_engine` behavior. """ # Dear migrate developers, # # please do not comment this module using sphinx syntax because its # docstrings are presented as user help and most users cannot # interpret sphinx annotated ReStructuredText. 
# # Thanks, # Jan Dittberner import sys import inspect import logging from migrate import exceptions from migrate.versioning import (repository, schema, version, script as script_) # command name conflict from migrate.versioning.util import catch_known_errors, with_engine log = logging.getLogger(__name__) command_desc = { 'help': 'displays help on a given command', 'create': 'create an empty repository at the specified path', 'script': 'create an empty change Python script', 'script_sql': 'create empty change SQL scripts for given database', 'version': 'display the latest version available in a repository', 'db_version': 'show the current version of the repository under version control', 'source': 'display the Python code for a particular version in this repository', 'version_control': 'mark a database as under this repository\'s version control', 'upgrade': 'upgrade a database to a later version', 'downgrade': 'downgrade a database to an earlier version', 'drop_version_control': 'removes version control from a database', 'manage': 'creates a Python script that runs Migrate with a set of default values', 'test': 'performs the upgrade and downgrade command on the given database', 'compare_model_to_db': 'compare MetaData against the current database state', 'create_model': 'dump the current database as a Python model to stdout', 'make_update_script_for_model': 'create a script changing the old MetaData to the new (current) MetaData', 'update_db_from_model': 'modify the database to match the structure of the current MetaData', } __all__ = command_desc.keys() Repository = repository.Repository ControlledSchema = schema.ControlledSchema VerNum = version.VerNum PythonScript = script_.PythonScript SqlScript = script_.SqlScript # deprecated def help(cmd=None, **opts): """%prog help COMMAND Displays help on a given command. """ if cmd is None: raise exceptions.UsageError(None) try: func = globals()[cmd] except: raise exceptions.UsageError( "'%s' isn't a valid command. 
Try 'help COMMAND'" % cmd) ret = func.__doc__ if sys.argv[0]: ret = ret.replace('%prog', sys.argv[0]) return ret @catch_known_errors def create(repository, name, **opts): """%prog create REPOSITORY_PATH NAME [--table=TABLE] Create an empty repository at the specified path. You can specify the version_table to be used; by default, it is 'migrate_version'. This table is created in all version-controlled databases. """ repo_path = Repository.create(repository, name, **opts) @catch_known_errors def script(description, repository, **opts): """%prog script DESCRIPTION REPOSITORY_PATH Create an empty change script using the next unused version number appended with the given description. For instance, manage.py script "Add initial tables" creates: repository/versions/001_Add_initial_tables.py """ repo = Repository(repository) repo.create_script(description, **opts) @catch_known_errors def script_sql(database, description, repository, **opts): """%prog script_sql DATABASE DESCRIPTION REPOSITORY_PATH Create empty change SQL scripts for given DATABASE, where DATABASE is either specific ('postgresql', 'mysql', 'oracle', 'sqlite', etc.) or generic ('default'). For instance, manage.py script_sql postgresql description creates: repository/versions/001_description_postgresql_upgrade.sql and repository/versions/001_description_postgresql_downgrade.sql """ repo = Repository(repository) repo.create_script_sql(database, description, **opts) def version(repository, **opts): """%prog version REPOSITORY_PATH Display the latest version available in a repository. """ repo = Repository(repository) return repo.latest @with_engine def db_version(url, repository, **opts): """%prog db_version URL REPOSITORY_PATH Show the current version of the repository with the given connection string, under version control of the specified repository. The url should be any valid SQLAlchemy connection string. 
""" engine = opts.pop('engine') schema = ControlledSchema(engine, repository) return schema.version def source(version, dest=None, repository=None, **opts): """%prog source VERSION [DESTINATION] --repository=REPOSITORY_PATH Display the Python code for a particular version in this repository. Save it to the file at DESTINATION or, if omitted, send to stdout. """ if repository is None: raise exceptions.UsageError("A repository must be specified") repo = Repository(repository) ret = repo.version(version).script().source() if dest is not None: dest = open(dest, 'w') dest.write(ret) dest.close() ret = None return ret def upgrade(url, repository, version=None, **opts): """%prog upgrade URL REPOSITORY_PATH [VERSION] [--preview_py|--preview_sql] Upgrade a database to a later version. This runs the upgrade() function defined in your change scripts. By default, the database is updated to the latest available version. You may specify a version instead, if you wish. You may preview the Python or SQL code to be executed, rather than actually executing it, using the appropriate 'preview' option. """ err = "Cannot upgrade a database of version %s to version %s. "\ "Try 'downgrade' instead." return _migrate(url, repository, version, upgrade=True, err=err, **opts) def downgrade(url, repository, version, **opts): """%prog downgrade URL REPOSITORY_PATH VERSION [--preview_py|--preview_sql] Downgrade a database to an earlier version. This is the reverse of upgrade; this runs the downgrade() function defined in your change scripts. You may preview the Python or SQL code to be executed, rather than actually executing it, using the appropriate 'preview' option. """ err = "Cannot downgrade a database of version %s to version %s. "\ "Try 'upgrade' instead." return _migrate(url, repository, version, upgrade=False, err=err, **opts) @with_engine def test(url, repository, **opts): """%prog test URL REPOSITORY_PATH [VERSION] Performs the upgrade and downgrade option on the given database. 
This is not a real test and may leave the database in a bad state. You should therefore better run the test on a copy of your database. """ engine = opts.pop('engine') repos = Repository(repository) # Upgrade log.info("Upgrading...") script = repos.version(None).script(engine.name, 'upgrade') script.run(engine, 1) log.info("done") log.info("Downgrading...") script = repos.version(None).script(engine.name, 'downgrade') script.run(engine, -1) log.info("done") log.info("Success") @with_engine def version_control(url, repository, version=None, **opts): """%prog version_control URL REPOSITORY_PATH [VERSION] Mark a database as under this repository's version control. Once a database is under version control, schema changes should only be done via change scripts in this repository. This creates the table version_table in the database. The url should be any valid SQLAlchemy connection string. By default, the database begins at version 0 and is assumed to be empty. If the database is not empty, you may specify a version at which to begin instead. No attempt is made to verify this version's correctness - the database schema is expected to be identical to what it would be if the database were created from scratch. """ engine = opts.pop('engine') ControlledSchema.create(engine, repository, version) @with_engine def drop_version_control(url, repository, **opts): """%prog drop_version_control URL REPOSITORY_PATH Removes version control from a database. """ engine = opts.pop('engine') schema = ControlledSchema(engine, repository) schema.drop() def manage(file, **opts): """%prog manage FILENAME [VARIABLES...] Creates a script that runs Migrate with a set of default values. For example:: %prog manage manage.py --repository=/path/to/repository \ --url=sqlite:///project.db would create the script manage.py. 
The following two commands would then have exactly the same results:: python manage.py version %prog version --repository=/path/to/repository """ Repository.create_manage_file(file, **opts) @with_engine def compare_model_to_db(url, repository, model, **opts): """%prog compare_model_to_db URL REPOSITORY_PATH MODEL Compare the current model (assumed to be a module level variable of type sqlalchemy.MetaData) against the current database. NOTE: This is EXPERIMENTAL. """ # TODO: get rid of EXPERIMENTAL label engine = opts.pop('engine') return ControlledSchema.compare_model_to_db(engine, model, repository) @with_engine def create_model(url, repository, **opts): """%prog create_model URL REPOSITORY_PATH [DECLERATIVE=True] Dump the current database as a Python model to stdout. NOTE: This is EXPERIMENTAL. """ # TODO: get rid of EXPERIMENTAL label engine = opts.pop('engine') declarative = opts.get('declarative', False) return ControlledSchema.create_model(engine, repository, declarative) @catch_known_errors @with_engine def make_update_script_for_model(url, repository, oldmodel, model, **opts): """%prog make_update_script_for_model URL OLDMODEL MODEL REPOSITORY_PATH Create a script changing the old Python model to the new (current) Python model, sending to stdout. NOTE: This is EXPERIMENTAL. """ # TODO: get rid of EXPERIMENTAL label engine = opts.pop('engine') return PythonScript.make_update_script_for_model( engine, oldmodel, model, repository, **opts) @with_engine def update_db_from_model(url, repository, model, **opts): """%prog update_db_from_model URL REPOSITORY_PATH MODEL Modify the database to match the structure of the current Python model. This also sets the db_version number to the latest in the repository. NOTE: This is EXPERIMENTAL. 
""" # TODO: get rid of EXPERIMENTAL label engine = opts.pop('engine') schema = ControlledSchema(engine, repository) schema.update_db_from_model(model) @with_engine def _migrate(url, repository, version, upgrade, err, **opts): engine = opts.pop('engine') url = str(engine.url) schema = ControlledSchema(engine, repository) version = _migrate_version(schema, version, upgrade, err) changeset = schema.changeset(version) for ver, change in changeset: nextver = ver + changeset.step log.info('%s -> %s... ', ver, nextver) if opts.get('preview_sql'): if isinstance(change, PythonScript): log.info(change.preview_sql(url, changeset.step, **opts)) elif isinstance(change, SqlScript): log.info(change.source()) elif opts.get('preview_py'): if not isinstance(change, PythonScript): raise exceptions.UsageError("Python source can be only displayed" " for python migration files") source_ver = max(ver, nextver) module = schema.repository.version(source_ver).script().module funcname = upgrade and "upgrade" or "downgrade" func = getattr(module, funcname) log.info(inspect.getsource(func)) else: schema.runchange(ver, change, changeset.step) log.info('done') def _migrate_version(schema, version, upgrade, err): if version is None: return version # Version is specified: ensure we're upgrading in the right direction # (current version < target version for upgrading; reverse for down) version = VerNum(version) cur = schema.version if upgrade is not None: if upgrade: direction = cur <= version else: direction = cur >= version if not direction: raise exceptions.KnownError(err % (cur, version)) return version sqlalchemy-migrate-0.13.0/migrate/versioning/config.py0000664000175000017500000000052313553670475023056 0ustar zuulzuul00000000000000#!/usr/bin/python # -*- coding: utf-8 -*- from sqlalchemy.util import OrderedDict __all__ = ['databases', 'operations'] databases = ('sqlite', 'postgres', 'mysql', 'oracle', 'mssql', 'firebird') # Map operation names to function names operations = OrderedDict() 
operations['upgrade'] = 'upgrade' operations['downgrade'] = 'downgrade' sqlalchemy-migrate-0.13.0/migrate/versioning/shell.py0000664000175000017500000001455113553670475022726 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- """The migrate command-line tool.""" import sys import inspect import logging from optparse import OptionParser, BadOptionError from migrate import exceptions from migrate.versioning import api from migrate.versioning.config import * from migrate.versioning.util import asbool import six alias = dict( s=api.script, vc=api.version_control, dbv=api.db_version, v=api.version, ) def alias_setup(): global alias for key, val in six.iteritems(alias): setattr(api, key, val) alias_setup() class PassiveOptionParser(OptionParser): def _process_args(self, largs, rargs, values): """little hack to support all --some_option=value parameters""" while rargs: arg = rargs[0] if arg == "--": del rargs[0] return elif arg[0:2] == "--": # if parser does not know about the option # pass it along (make it anonymous) try: opt = arg.split('=', 1)[0] self._match_long_opt(opt) except BadOptionError: largs.append(arg) del rargs[0] else: self._process_long_opt(rargs, values) elif arg[:1] == "-" and len(arg) > 1: self._process_short_opts(rargs, values) elif self.allow_interspersed_args: largs.append(arg) del rargs[0] def main(argv=None, **kwargs): """Shell interface to :mod:`migrate.versioning.api`. kwargs are default options that can be overriden with passing --some_option as command line option :param disable_logging: Let migrate configure logging :type disable_logging: bool """ if argv is not None: argv = argv else: argv = list(sys.argv[1:]) commands = list(api.__all__) commands.sort() usage = """%%prog COMMAND ... Available commands: %s Enter "%%prog help COMMAND" for information on a particular command. 
""" % '\n\t'.join(["%s - %s" % (command.ljust(28), api.command_desc.get(command)) for command in commands]) parser = PassiveOptionParser(usage=usage) parser.add_option("-d", "--debug", action="store_true", dest="debug", default=False, help="Shortcut to turn on DEBUG mode for logging") parser.add_option("-q", "--disable_logging", action="store_true", dest="disable_logging", default=False, help="Use this option to disable logging configuration") help_commands = ['help', '-h', '--help'] HELP = False try: command = argv.pop(0) if command in help_commands: HELP = True command = argv.pop(0) except IndexError: parser.print_help() return command_func = getattr(api, command, None) if command_func is None or command.startswith('_'): parser.error("Invalid command %s" % command) parser.set_usage(inspect.getdoc(command_func)) f_args, f_varargs, f_kwargs, f_defaults = inspect.getargspec(command_func) for arg in f_args: parser.add_option( "--%s" % arg, dest=arg, action='store', type="string") # display help of the current command if HELP: parser.print_help() return options, args = parser.parse_args(argv) # override kwargs with anonymous parameters override_kwargs = dict() for arg in list(args): if arg.startswith('--'): args.remove(arg) if '=' in arg: opt, value = arg[2:].split('=', 1) else: opt = arg[2:] value = True override_kwargs[opt] = value # override kwargs with options if user is overwriting for key, value in six.iteritems(options.__dict__): if value is not None: override_kwargs[key] = value # arguments that function accepts without passed kwargs f_required = list(f_args) candidates = dict(kwargs) candidates.update(override_kwargs) for key, value in six.iteritems(candidates): if key in f_args: f_required.remove(key) # map function arguments to parsed arguments for arg in args: try: kw = f_required.pop(0) except IndexError: parser.error("Too many arguments for command %s: %s" % (command, arg)) kwargs[kw] = arg # apply overrides kwargs.update(override_kwargs) # configure 
options for key, value in six.iteritems(options.__dict__): kwargs.setdefault(key, value) # configure logging if not asbool(kwargs.pop('disable_logging', False)): # filter to log =< INFO into stdout and rest to stderr class SingleLevelFilter(logging.Filter): def __init__(self, min=None, max=None): self.min = min or 0 self.max = max or 100 def filter(self, record): return self.min <= record.levelno <= self.max logger = logging.getLogger() h1 = logging.StreamHandler(sys.stdout) f1 = SingleLevelFilter(max=logging.INFO) h1.addFilter(f1) h2 = logging.StreamHandler(sys.stderr) f2 = SingleLevelFilter(min=logging.WARN) h2.addFilter(f2) logger.addHandler(h1) logger.addHandler(h2) if options.debug: logger.setLevel(logging.DEBUG) else: logger.setLevel(logging.INFO) log = logging.getLogger(__name__) # check if all args are given try: num_defaults = len(f_defaults) except TypeError: num_defaults = 0 f_args_default = f_args[len(f_args) - num_defaults:] required = list(set(f_required) - set(f_args_default)) required.sort() if required: parser.error("Not enough arguments for command %s: %s not specified" \ % (command, ', '.join(required))) # handle command try: ret = command_func(**kwargs) if ret is not None: log.info(ret) except (exceptions.UsageError, exceptions.KnownError) as e: parser.error(e.args[0]) if __name__ == "__main__": main() sqlalchemy-migrate-0.13.0/migrate/versioning/version.py0000664000175000017500000002126713553670475023306 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- import os import re import shutil import logging from migrate import exceptions from migrate.versioning import pathed, script from datetime import datetime import six log = logging.getLogger(__name__) class VerNum(object): """A version number that behaves like a string and int at the same time""" _instances = dict() def __new__(cls, value): val = str(value) if val not in cls._instances: cls._instances[val] = super(VerNum, cls).__new__(cls) ret = cls._instances[val] 
        return ret

    def __init__(self, value):
        self.value = str(int(value))
        if self < 0:
            raise ValueError("Version number cannot be negative")

    def __add__(self, value):
        ret = int(self) + int(value)
        return VerNum(ret)

    def __sub__(self, value):
        return self + (int(value) * -1)

    def __eq__(self, value):
        return int(self) == int(value)

    def __ne__(self, value):
        return int(self) != int(value)

    def __lt__(self, value):
        return int(self) < int(value)

    def __gt__(self, value):
        return int(self) > int(value)

    def __ge__(self, value):
        return int(self) >= int(value)

    def __le__(self, value):
        return int(self) <= int(value)

    def __repr__(self):
        return "<VerNum(%s)>" % self.value

    def __str__(self):
        return str(self.value)

    def __int__(self):
        return int(self.value)

    def __index__(self):
        return int(self.value)

    if six.PY3:
        def __hash__(self):
            return hash(self.value)


class Collection(pathed.Pathed):
    """A collection of versioning scripts in a repository"""

    FILENAME_WITH_VERSION = re.compile(r'^(\d{3,}).*')

    def __init__(self, path):
        """Collect current version scripts in repository
        and store them in self.versions
        """
        super(Collection, self).__init__(path)

        # Create temporary list of files, allowing skipped version numbers.
        files = os.listdir(path)
        if '1' in files:
            # deprecation
            raise Exception('It looks like you have a repository in the old '
                            'format (with directories for each version). '
                            'Please convert repository before proceeding.')

        tempVersions = dict()
        for filename in files:
            match = self.FILENAME_WITH_VERSION.match(filename)
            if match:
                num = int(match.group(1))
                tempVersions.setdefault(num, []).append(filename)
            else:
                pass  # Must be a helper file or something, let's ignore it.

        # Create the versions member where the keys
        # are VerNum's and the values are Version's.
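`VerNum.__new__` above caches instances so that equal version numbers share one object. A simplified standalone sketch of that interning pattern (`InternedVer` is a hypothetical stand-in, not the library class):

```python
class InternedVer(object):
    """Simplified stand-in for VerNum's instance-caching __new__."""

    _instances = {}

    def __new__(cls, value):
        key = str(value)
        if key not in cls._instances:
            cls._instances[key] = super(InternedVer, cls).__new__(cls)
        return cls._instances[key]

    def __init__(self, value):
        # Note: __init__ re-runs on every construction, even when __new__
        # returns a cached instance -- the same trade-off VerNum makes.
        self.value = str(int(value))

# Constructing the same version twice yields the very same object.
assert InternedVer(3) is InternedVer('3')
assert InternedVer('007').value == '7'
```

This is why `VerNum` keeps `__init__` idempotent: it may run repeatedly against the same cached object.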
self.versions = dict() for num, files in tempVersions.items(): self.versions[VerNum(num)] = Version(num, path, files) @property def latest(self): """:returns: Latest version in Collection""" return max([VerNum(0)] + list(self.versions.keys())) def _next_ver_num(self, use_timestamp_numbering): if use_timestamp_numbering == True: return VerNum(int(datetime.utcnow().strftime('%Y%m%d%H%M%S'))) else: return self.latest + 1 def create_new_python_version(self, description, **k): """Create Python files for new version""" ver = self._next_ver_num(k.pop('use_timestamp_numbering', False)) extra = str_to_filename(description) if extra: if extra == '_': extra = '' elif not extra.startswith('_'): extra = '_%s' % extra filename = '%03d%s.py' % (ver, extra) filepath = self._version_path(filename) script.PythonScript.create(filepath, **k) self.versions[ver] = Version(ver, self.path, [filename]) def create_new_sql_version(self, database, description, **k): """Create SQL files for new version""" ver = self._next_ver_num(k.pop('use_timestamp_numbering', False)) self.versions[ver] = Version(ver, self.path, []) extra = str_to_filename(description) if extra: if extra == '_': extra = '' elif not extra.startswith('_'): extra = '_%s' % extra # Create new files. for op in ('upgrade', 'downgrade'): filename = '%03d%s_%s_%s.sql' % (ver, extra, database, op) filepath = self._version_path(filename) script.SqlScript.create(filepath, **k) self.versions[ver].add_script(filepath) def version(self, vernum=None): """Returns required version. If vernum is not given latest version will be returned otherwise required version will be returned. :raises: : exceptions.VersionNotFoundError if respective migration script file of version is not present in the migration repository. 
""" if vernum is None: vernum = self.latest try: return self.versions[VerNum(vernum)] except KeyError: raise exceptions.VersionNotFoundError( ("Database schema file with version %(args)s doesn't " "exist.") % {'args': VerNum(vernum)}) @classmethod def clear(cls): super(Collection, cls).clear() def _version_path(self, ver): """Returns path of file in versions repository""" return os.path.join(self.path, str(ver)) class Version(object): """A single version in a collection :param vernum: Version Number :param path: Path to script files :param filelist: List of scripts :type vernum: int, VerNum :type path: string :type filelist: list """ def __init__(self, vernum, path, filelist): self.version = VerNum(vernum) # Collect scripts in this folder self.sql = dict() self.python = None for script in filelist: self.add_script(os.path.join(path, script)) def script(self, database=None, operation=None): """Returns SQL or Python Script""" for db in (database, 'default'): # Try to return a .sql script first try: return self.sql[db][operation] except KeyError: continue # No .sql script exists # TODO: maybe add force Python parameter? 
ret = self.python assert ret is not None, \ "There is no script for %d version" % self.version return ret def add_script(self, path): """Add script to Collection/Version""" if path.endswith(Extensions.py): self._add_script_py(path) elif path.endswith(Extensions.sql): self._add_script_sql(path) SQL_FILENAME = re.compile(r'^.*\.sql') def _add_script_sql(self, path): basename = os.path.basename(path) match = self.SQL_FILENAME.match(basename) if match: basename = basename.replace('.sql', '') parts = basename.split('_') if len(parts) < 3: raise exceptions.ScriptError( "Invalid SQL script name %s " % basename + \ "(needs to be ###_description_database_operation.sql)") version = parts[0] op = parts[-1] # NOTE(mriedem): check for ibm_db_sa as the database in the name if 'ibm_db_sa' in basename: if len(parts) == 6: dbms = '_'.join(parts[-4: -1]) else: raise exceptions.ScriptError( "Invalid ibm_db_sa SQL script name '%s'; " "(needs to be " "###_description_ibm_db_sa_operation.sql)" % basename) else: dbms = parts[-2] else: raise exceptions.ScriptError( "Invalid SQL script name %s " % basename + \ "(needs to be ###_description_database_operation.sql)") # File the script into a dictionary self.sql.setdefault(dbms, {})[op] = script.SqlScript(path) def _add_script_py(self, path): if self.python is not None: raise exceptions.ScriptError('You can only have one Python script ' 'per version, but you have: %s and %s' % (self.python, path)) self.python = script.PythonScript(path) class Extensions(object): """A namespace for file extensions""" py = 'py' sql = 'sql' def str_to_filename(s): """Replaces spaces, (double and single) quotes and double underscores to underscores """ s = s.replace(' ', '_').replace('"', '_').replace("'", '_').replace(".", "_") while '__' in s: s = s.replace('__', '_') return s sqlalchemy-migrate-0.13.0/migrate/versioning/pathed.py0000664000175000017500000000401313553670475023054 0ustar zuulzuul00000000000000""" A path/directory class. 
""" import os import shutil import logging from migrate import exceptions from migrate.versioning.config import * from migrate.versioning.util import KeyedInstance log = logging.getLogger(__name__) class Pathed(KeyedInstance): """ A class associated with a path/directory tree. Only one instance of this class may exist for a particular file; __new__ will return an existing instance if possible """ parent = None @classmethod def _key(cls, path): return str(path) def __init__(self, path): self.path = path if self.__class__.parent is not None: self._init_parent(path) def _init_parent(self, path): """Try to initialize this object's parent, if it has one""" parent_path = self.__class__._parent_path(path) self.parent = self.__class__.parent(parent_path) log.debug("Getting parent %r:%r" % (self.__class__.parent, parent_path)) self.parent._init_child(path, self) def _init_child(self, child, path): """Run when a child of this object is initialized. Parameters: the child object; the path to this object (its parent) """ @classmethod def _parent_path(cls, path): """ Fetch the path of this object's parent from this object's path. """ # os.path.dirname(), but strip directories like files (like # unix basename) # # Treat directories like files... 
if path[-1] == '/': path = path[:-1] ret = os.path.dirname(path) return ret @classmethod def require_notfound(cls, path): """Ensures a given path does not already exist""" if os.path.exists(path): raise exceptions.PathFoundError(path) @classmethod def require_found(cls, path): """Ensures a given path already exists""" if not os.path.exists(path): raise exceptions.PathNotFoundError(path) def __str__(self): return self.path sqlalchemy-migrate-0.13.0/migrate/versioning/template.py0000664000175000017500000000545213553670475023432 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- import os import shutil import sys from pkg_resources import resource_filename from migrate.versioning.config import * from migrate.versioning import pathed class Collection(pathed.Pathed): """A collection of templates of a specific type""" _mask = None def get_path(self, file): return os.path.join(self.path, str(file)) class RepositoryCollection(Collection): _mask = '%s' class ScriptCollection(Collection): _mask = '%s.py_tmpl' class ManageCollection(Collection): _mask = '%s.py_tmpl' class SQLScriptCollection(Collection): _mask = '%s.py_tmpl' class Template(pathed.Pathed): """Finds the paths/packages of various Migrate templates. :param path: Templates are loaded from migrate package if `path` is not provided. 
""" pkg = 'migrate.versioning.templates' def __new__(cls, path=None): if path is None: path = cls._find_path(cls.pkg) return super(Template, cls).__new__(cls, path) def __init__(self, path=None): if path is None: path = Template._find_path(self.pkg) super(Template, self).__init__(path) self.repository = RepositoryCollection(os.path.join(path, 'repository')) self.script = ScriptCollection(os.path.join(path, 'script')) self.manage = ManageCollection(os.path.join(path, 'manage')) self.sql_script = SQLScriptCollection(os.path.join(path, 'sql_script')) @classmethod def _find_path(cls, pkg): """Returns absolute path to dotted python package.""" tmp_pkg = pkg.rsplit('.', 1) if len(tmp_pkg) != 1: return resource_filename(tmp_pkg[0], tmp_pkg[1]) else: return resource_filename(tmp_pkg[0], '') def _get_item(self, collection, theme=None): """Locates and returns collection. :param collection: name of collection to locate :param type_: type of subfolder in collection (defaults to "_default") :returns: (package, source) :rtype: str, str """ item = getattr(self, collection) theme_mask = getattr(item, '_mask') theme = theme_mask % (theme or 'default') return item.get_path(theme) def get_repository(self, *a, **kw): """Calls self._get_item('repository', *a, **kw)""" return self._get_item('repository', *a, **kw) def get_script(self, *a, **kw): """Calls self._get_item('script', *a, **kw)""" return self._get_item('script', *a, **kw) def get_sql_script(self, *a, **kw): """Calls self._get_item('sql_script', *a, **kw)""" return self._get_item('sql_script', *a, **kw) def get_manage(self, *a, **kw): """Calls self._get_item('manage', *a, **kw)""" return self._get_item('manage', *a, **kw) sqlalchemy-migrate-0.13.0/migrate/versioning/__init__.py0000664000175000017500000000024113553670475023345 0ustar zuulzuul00000000000000""" This package provides functionality to create and manage repositories of database schema changesets and to apply these changesets to databases. 
""" sqlalchemy-migrate-0.13.0/migrate/versioning/util/0000775000175000017500000000000013553670602022204 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/versioning/util/keyedinstance.py0000664000175000017500000000217413553670475025420 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- class KeyedInstance(object): """A class whose instances have a unique identifier of some sort No two instances with the same unique ID should exist - if we try to create a second instance, the first should be returned. """ _instances = dict() def __new__(cls, *p, **k): instances = cls._instances clskey = str(cls) if clskey not in instances: instances[clskey] = dict() instances = instances[clskey] key = cls._key(*p, **k) if key not in instances: instances[key] = super(KeyedInstance, cls).__new__(cls) return instances[key] @classmethod def _key(cls, *p, **k): """Given a unique identifier, return a dictionary key This should be overridden by child classes, to specify which parameters should determine an object's uniqueness """ raise NotImplementedError() @classmethod def clear(cls): # Allow cls.clear() as well as uniqueInstance.clear(cls) if str(cls) in cls._instances: del cls._instances[str(cls)] sqlalchemy-migrate-0.13.0/migrate/versioning/util/__init__.py0000664000175000017500000001270413553670475024331 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- """.. currentmodule:: migrate.versioning.util""" import warnings import logging from decorator import decorator from pkg_resources import EntryPoint import six from sqlalchemy import create_engine from sqlalchemy.engine import Engine from sqlalchemy.pool import StaticPool from migrate import exceptions from migrate.versioning.util.keyedinstance import KeyedInstance from migrate.versioning.util.importpath import import_path log = logging.getLogger(__name__) def load_model(dotted_name): """Import module and use module-level variable". 
:param dotted_name: path to model in form of string: ``some.python.module:Class`` .. versionchanged:: 0.5.4 """ if isinstance(dotted_name, six.string_types): if ':' not in dotted_name: # backwards compatibility warnings.warn('model should be in form of module.model:User ' 'and not module.model.User', exceptions.MigrateDeprecationWarning) dotted_name = ':'.join(dotted_name.rsplit('.', 1)) ep = EntryPoint.parse('x=%s' % dotted_name) if hasattr(ep, 'resolve'): # this is available on setuptools >= 10.2 return ep.resolve() else: # this causes a DeprecationWarning on setuptools >= 11.3 return ep.load(False) else: # Assume it's already loaded. return dotted_name def asbool(obj): """Do everything to use object as bool""" if isinstance(obj, six.string_types): obj = obj.strip().lower() if obj in ['true', 'yes', 'on', 'y', 't', '1']: return True elif obj in ['false', 'no', 'off', 'n', 'f', '0']: return False else: raise ValueError("String is not true/false: %r" % obj) if obj in (True, False): return bool(obj) else: raise ValueError("String is not true/false: %r" % obj) def guess_obj_type(obj): """Do everything to guess object type from string Tries to convert to `int`, `bool` and finally returns if not succeded. .. versionadded: 0.5.4 """ result = None try: result = int(obj) except: pass if result is None: try: result = asbool(obj) except: pass if result is not None: return result else: return obj @decorator def catch_known_errors(f, *a, **kw): """Decorator that catches known api errors .. versionadded: 0.5.4 """ try: return f(*a, **kw) except exceptions.PathFoundError as e: raise exceptions.KnownError("The path %s already exists" % e.args[0]) def construct_engine(engine, **opts): """.. versionadded:: 0.5.4 Constructs and returns SQLAlchemy engine. 
Currently, there are 2 ways to pass create_engine options to :mod:`migrate.versioning.api` functions: :param engine: connection string or a existing engine :param engine_dict: python dictionary of options to pass to `create_engine` :param engine_arg_*: keyword parameters to pass to `create_engine` (evaluated with :func:`migrate.versioning.util.guess_obj_type`) :type engine_dict: dict :type engine: string or Engine instance :type engine_arg_*: string :returns: SQLAlchemy Engine .. note:: keyword parameters override ``engine_dict`` values. """ if isinstance(engine, Engine): return engine elif not isinstance(engine, six.string_types): raise ValueError("you need to pass either an existing engine or a database uri") # get options for create_engine if opts.get('engine_dict') and isinstance(opts['engine_dict'], dict): kwargs = opts['engine_dict'] else: kwargs = dict() # DEPRECATED: handle echo the old way echo = asbool(opts.get('echo', False)) if echo: warnings.warn('echo=True parameter is deprecated, pass ' 'engine_arg_echo=True or engine_dict={"echo": True}', exceptions.MigrateDeprecationWarning) kwargs['echo'] = echo # parse keyword arguments for key, value in six.iteritems(opts): if key.startswith('engine_arg_'): kwargs[key[11:]] = guess_obj_type(value) log.debug('Constructing engine') # TODO: return create_engine(engine, poolclass=StaticPool, **kwargs) # seems like 0.5.x branch does not work with engine.dispose and staticpool return create_engine(engine, **kwargs) @decorator def with_engine(f, *a, **kw): """Decorator for :mod:`migrate.versioning.api` functions to safely close resources after function usage. Passes engine parameters to :func:`construct_engine` and resulting parameter is available as kw['engine']. Engine is disposed after wrapped function is executed. .. 
versionadded: 0.6.0 """ url = a[0] engine = construct_engine(url, **kw) try: kw['engine'] = engine return f(*a, **kw) finally: if isinstance(engine, Engine) and engine is not url: log.debug('Disposing SQLAlchemy engine %s', engine) engine.dispose() class Memoize(object): """Memoize(fn) - an instance which acts like fn but memoizes its arguments Will only work on functions with non-mutable arguments ActiveState Code 52201 """ def __init__(self, fn): self.fn = fn self.memo = {} def __call__(self, *args): if args not in self.memo: self.memo[args] = self.fn(*args) return self.memo[args] sqlalchemy-migrate-0.13.0/migrate/versioning/util/importpath.py0000664000175000017500000000156613553670475024765 0ustar zuulzuul00000000000000import os import sys PY33 = sys.version_info >= (3, 3) if PY33: from importlib import machinery else: from six.moves import reload_module as reload def import_path(fullpath): """ Import a file with full path specification. Allows one to import from anywhere, something __import__ does not do. """ if PY33: name = os.path.splitext(os.path.basename(fullpath))[0] return machinery.SourceFileLoader( name, fullpath).load_module(name) else: # http://zephyrfalcon.org/weblog/arch_d7_2002_08_31.html path, filename = os.path.split(fullpath) filename, ext = os.path.splitext(filename) sys.path.append(path) try: module = __import__(filename) reload(module) # Might be out of date during tests return module finally: del sys.path[-1] sqlalchemy-migrate-0.13.0/migrate/versioning/cfgparse.py0000664000175000017500000000124713553670475023407 0ustar zuulzuul00000000000000""" Configuration parser module. 
""" from six.moves.configparser import ConfigParser from migrate.versioning.config import * from migrate.versioning import pathed class Parser(ConfigParser): """A project configuration file.""" def to_dict(self, sections=None): """It's easier to access config values like dictionaries""" return self._sections class Config(pathed.Pathed, Parser): """Configuration class.""" def __init__(self, path, *p, **k): """Confirm the config file exists; read it.""" self.require_found(path) pathed.Pathed.__init__(self, path) Parser.__init__(self, *p, **k) self.read(path) sqlalchemy-migrate-0.13.0/migrate/versioning/migrate_repository.py0000664000175000017500000000605213553670475025543 0ustar zuulzuul00000000000000""" Script to migrate repository from sqlalchemy <= 0.4.4 to the new repository schema. This shouldn't use any other migrate modules, so that it can work in any version. """ import os import sys import logging log = logging.getLogger(__name__) def usage(): """Gives usage information.""" print("Usage: %s repository-to-migrate" % sys.argv[0]) print("Upgrade your repository to the new flat format.") print("NOTE: You should probably make a backup before running this.") sys.exit(1) def delete_file(filepath): """Deletes a file and prints a message.""" log.info('Deleting file: %s' % filepath) os.remove(filepath) def move_file(src, tgt): """Moves a file and prints a message.""" log.info('Moving file %s to %s' % (src, tgt)) if os.path.exists(tgt): raise Exception( 'Cannot move file %s because target %s already exists' % \ (src, tgt)) os.rename(src, tgt) def delete_directory(dirpath): """Delete a directory and print a message.""" log.info('Deleting directory: %s' % dirpath) os.rmdir(dirpath) def migrate_repository(repos): """Does the actual migration to the new repository format.""" log.info('Migrating repository at: %s to new format' % repos) versions = '%s/versions' % repos dirs = os.listdir(versions) # Only use int's in list. 
numdirs = [int(dirname) for dirname in dirs if dirname.isdigit()] numdirs.sort() # Sort list. for dirname in numdirs: origdir = '%s/%s' % (versions, dirname) log.info('Working on directory: %s' % origdir) files = os.listdir(origdir) files.sort() for filename in files: # Delete compiled Python files. if filename.endswith('.pyc') or filename.endswith('.pyo'): delete_file('%s/%s' % (origdir, filename)) # Delete empty __init__.py files. origfile = '%s/__init__.py' % origdir if os.path.exists(origfile) and len(open(origfile).read()) == 0: delete_file(origfile) # Move sql upgrade scripts. if filename.endswith('.sql'): version, dbms, operation = filename.split('.', 3)[0:3] origfile = '%s/%s' % (origdir, filename) # For instance: 2.postgres.upgrade.sql -> # 002_postgres_upgrade.sql tgtfile = '%s/%03d_%s_%s.sql' % ( versions, int(version), dbms, operation) move_file(origfile, tgtfile) # Move Python upgrade script. pyfile = '%s.py' % dirname pyfilepath = '%s/%s' % (origdir, pyfile) if os.path.exists(pyfilepath): tgtfile = '%s/%03d.py' % (versions, int(dirname)) move_file(pyfilepath, tgtfile) # Try to remove directory. Will fail if it's not empty. delete_directory(origdir) def main(): """Main function to be called when using this script.""" if len(sys.argv) != 2: usage() migrate_repository(sys.argv[1]) if __name__ == '__main__': main() sqlalchemy-migrate-0.13.0/migrate/__init__.py0000664000175000017500000000065013553670475021166 0ustar zuulzuul00000000000000""" SQLAlchemy migrate provides two APIs :mod:`migrate.versioning` for database schema version and repository management and :mod:`migrate.changeset` that allows to define database schema changes using Python. 
""" import pkg_resources from migrate.versioning import * from migrate.changeset import * __version__ = pkg_resources.get_provider( pkg_resources.Requirement.parse('sqlalchemy-migrate')).version sqlalchemy-migrate-0.13.0/migrate/changeset/0000775000175000017500000000000013553670602021005 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/changeset/databases/0000775000175000017500000000000013553670602022734 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/changeset/databases/mysql.py0000664000175000017500000000411613553670475024465 0ustar zuulzuul00000000000000""" MySQL database specific implementations of changeset classes. """ import sqlalchemy from sqlalchemy.databases import mysql as sa_base from sqlalchemy import types as sqltypes from migrate import exceptions from migrate.changeset import ansisql from migrate.changeset import util MySQLSchemaGenerator = sa_base.MySQLDDLCompiler class MySQLColumnGenerator(MySQLSchemaGenerator, ansisql.ANSIColumnGenerator): pass class MySQLColumnDropper(ansisql.ANSIColumnDropper): pass class MySQLSchemaChanger(MySQLSchemaGenerator, ansisql.ANSISchemaChanger): def visit_column(self, delta): table = delta.table colspec = self.get_column_specification(delta.result_column) if delta.result_column.autoincrement: primary_keys = [c for c in table.primary_key.columns if (c.autoincrement and isinstance(c.type, sqltypes.Integer) and not c.foreign_keys)] if primary_keys: first = primary_keys.pop(0) if first.name == delta.current_name: colspec += " AUTO_INCREMENT" old_col_name = self.preparer.quote(delta.current_name) self.start_alter_table(table) self.append("CHANGE COLUMN %s " % old_col_name) self.append(colspec) self.execute() def visit_index(self, param): # If MySQL can do this, I can't find how raise exceptions.NotSupportedError("MySQL cannot rename indexes") class MySQLConstraintGenerator(ansisql.ANSIConstraintGenerator): pass class MySQLConstraintDropper(MySQLSchemaGenerator, ansisql.ANSIConstraintDropper): 
def visit_migrate_check_constraint(self, *p, **k): raise exceptions.NotSupportedError("MySQL does not support CHECK" " constraints, use triggers instead.") class MySQLDialect(ansisql.ANSIDialect): columngenerator = MySQLColumnGenerator columndropper = MySQLColumnDropper schemachanger = MySQLSchemaChanger constraintgenerator = MySQLConstraintGenerator constraintdropper = MySQLConstraintDropper sqlalchemy-migrate-0.13.0/migrate/changeset/databases/oracle.py0000664000175000017500000000710713553670475024570 0ustar zuulzuul00000000000000""" Oracle database specific implementations of changeset classes. """ import sqlalchemy as sa from sqlalchemy.databases import oracle as sa_base from migrate import exceptions from migrate.changeset import ansisql OracleSchemaGenerator = sa_base.OracleDDLCompiler class OracleColumnGenerator(OracleSchemaGenerator, ansisql.ANSIColumnGenerator): pass class OracleColumnDropper(ansisql.ANSIColumnDropper): pass class OracleSchemaChanger(OracleSchemaGenerator, ansisql.ANSISchemaChanger): def get_column_specification(self, column, **kwargs): # Ignore the NOT NULL generated override_nullable = kwargs.pop('override_nullable', None) if override_nullable: orig = column.nullable column.nullable = True ret = super(OracleSchemaChanger, self).get_column_specification( column, **kwargs) if override_nullable: column.nullable = orig return ret def visit_column(self, delta): keys = delta.keys() if 'name' in keys: self._run_subvisit(delta, self._visit_column_name, start_alter=False) if len(set(('type', 'nullable', 'server_default')).intersection(keys)): self._run_subvisit(delta, self._visit_column_change, start_alter=False) def _visit_column_change(self, table, column, delta): # Oracle cannot drop a default once created, but it can set it # to null. 
We'll do that if default=None # http://forums.oracle.com/forums/message.jspa?messageID=1273234#1273234 dropdefault_hack = (column.server_default is None \ and 'server_default' in delta.keys()) # Oracle apparently doesn't like it when we say "not null" if # the column's already not null. Fudge it, so we don't need a # new function notnull_hack = ((not column.nullable) \ and ('nullable' not in delta.keys())) # We need to specify NULL if we're removing a NOT NULL # constraint null_hack = (column.nullable and ('nullable' in delta.keys())) if dropdefault_hack: column.server_default = sa.PassiveDefault(sa.sql.null()) if notnull_hack: column.nullable = True colspec = self.get_column_specification(column, override_nullable=null_hack) if null_hack: colspec += ' NULL' if notnull_hack: column.nullable = False if dropdefault_hack: column.server_default = None self.start_alter_table(table) self.append("MODIFY (") self.append(colspec) self.append(")") class OracleConstraintCommon(object): def get_constraint_name(self, cons): # Oracle constraints can't guess their name like other DBs if not cons.name: raise exceptions.NotSupportedError( "Oracle constraint names must be explicitly stated") return cons.name class OracleConstraintGenerator(OracleConstraintCommon, ansisql.ANSIConstraintGenerator): pass class OracleConstraintDropper(OracleConstraintCommon, ansisql.ANSIConstraintDropper): pass class OracleDialect(ansisql.ANSIDialect): columngenerator = OracleColumnGenerator columndropper = OracleColumnDropper schemachanger = OracleSchemaChanger constraintgenerator = OracleConstraintGenerator constraintdropper = OracleConstraintDropper sqlalchemy-migrate-0.13.0/migrate/changeset/databases/ibmdb2.py0000664000175000017500000003003013553670475024451 0ustar zuulzuul00000000000000""" DB2 database specific implementations of changeset classes. 
""" import logging from ibm_db_sa import base from sqlalchemy.schema import (AddConstraint, CreateIndex, DropConstraint) from sqlalchemy.schema import (Index, PrimaryKeyConstraint, UniqueConstraint) from migrate.changeset import ansisql from migrate.changeset import constraint from migrate.changeset import util from migrate import exceptions LOG = logging.getLogger(__name__) IBMDBSchemaGenerator = base.IBM_DBDDLCompiler def get_server_version_info(dialect): """Returns the DB2 server major and minor version as a list of ints.""" return [int(ver_token) for ver_token in dialect.dbms_ver.split('.')[0:2]] def is_unique_constraint_with_null_columns_supported(dialect): """Checks to see if the DB2 version is at least 10.5. This is needed for checking if unique constraints with null columns are supported. """ return get_server_version_info(dialect) >= [10, 5] class IBMDBColumnGenerator(IBMDBSchemaGenerator, ansisql.ANSIColumnGenerator): def visit_column(self, column): nullable = True if not column.nullable: nullable = False column.nullable = True table = self.start_alter_table(column) self.append("ADD COLUMN ") self.append(self.get_column_specification(column)) for cons in column.constraints: self.traverse_single(cons) if column.default is not None: self.traverse_single(column.default) self.execute() #ALTER TABLE STATEMENTS if not nullable: self.start_alter_table(column) self.append("ALTER COLUMN %s SET NOT NULL" % self.preparer.format_column(column)) self.execute() self.append("CALL SYSPROC.ADMIN_CMD('REORG TABLE %s')" % self.preparer.format_table(table)) self.execute() # add indexes and unique constraints if column.index_name: Index(column.index_name, column).create() elif column.unique_name: constraint.UniqueConstraint(column, name=column.unique_name).create() # SA bounds FK constraints to table, add manually for fk in column.foreign_keys: self.add_foreignkey(fk.constraint) # add primary key constraint if needed if column.primary_key_name: pk = 
constraint.PrimaryKeyConstraint( column, name=column.primary_key_name) pk.create() self.append("COMMIT") self.execute() self.append("CALL SYSPROC.ADMIN_CMD('REORG TABLE %s')" % self.preparer.format_table(table)) self.execute() class IBMDBColumnDropper(ansisql.ANSIColumnDropper): def visit_column(self, column): """Drop a column from its table. :param column: the column object :type column: :class:`sqlalchemy.Column` """ #table = self.start_alter_table(column) super(IBMDBColumnDropper, self).visit_column(column) self.append("CALL SYSPROC.ADMIN_CMD('REORG TABLE %s')" % self.preparer.format_table(column.table)) self.execute() class IBMDBSchemaChanger(IBMDBSchemaGenerator, ansisql.ANSISchemaChanger): def visit_table(self, table): """Rename a table; #38. Other ops aren't supported.""" self._rename_table(table) self.append("TO %s" % self.preparer.quote(table.new_name)) self.execute() self.append("COMMIT") self.execute() def _rename_table(self, table): self.append("RENAME TABLE %s " % self.preparer.format_table(table)) def visit_index(self, index): if hasattr(self, '_index_identifier'): # SA >= 0.6.5, < 0.8 old_name = self.preparer.quote( self._index_identifier(index.name)) new_name = self.preparer.quote( self._index_identifier(index.new_name)) else: # SA >= 0.8 class NewName(object): """Map obj.name -> obj.new_name""" def __init__(self, index): self.name = index.new_name self._obj = index def __getattr__(self, attr): if attr == 'name': return getattr(self, attr) return getattr(self._obj, attr) old_name = self._prepared_index_name(index) new_name = self._prepared_index_name(NewName(index)) self.append("RENAME INDEX %s TO %s" % (old_name, new_name)) self.execute() self.append("COMMIT") self.execute() def _run_subvisit(self, delta, func, start_alter=True): """Runs visit method based on what needs to be changed on column""" table = delta.table if start_alter: self.start_alter_table(table) ret = func(table, self.preparer.quote(delta.current_name), delta) self.execute() 
self._reorg_table(self.preparer.format_table(delta.table)) def _reorg_table(self, delta): self.append("CALL SYSPROC.ADMIN_CMD('REORG TABLE %s')" % delta) self.execute() def visit_column(self, delta): keys = delta.keys() tr = self.connection.begin() column = delta.result_column.copy() if 'type' in keys: try: self._run_subvisit(delta, self._visit_column_change, False) except Exception as e: LOG.warn("Unable to change the column type. Error: %s" % e) if column.primary_key and 'primary_key' not in keys: try: self._run_subvisit(delta, self._visit_primary_key) except Exception as e: LOG.warn("Unable to add primary key. Error: %s" % e) if 'nullable' in keys: self._run_subvisit(delta, self._visit_column_nullable) if 'server_default' in keys: self._run_subvisit(delta, self._visit_column_default) if 'primary_key' in keys: self._run_subvisit(delta, self._visit_primary_key) self._run_subvisit(delta, self._visit_unique_constraint) if 'name' in keys: try: self._run_subvisit(delta, self._visit_column_name, False) except Exception as e: LOG.warn("Unable to change column %(name)s. 
Error: %(error)s" % {'name': delta.current_name, 'error': e}) self._reorg_table(self.preparer.format_table(delta.table)) self.append("COMMIT") self.execute() tr.commit() def _visit_unique_constraint(self, table, col_name, delta): # Add primary key to the current column self.append("ADD CONSTRAINT %s " % col_name) self.append("UNIQUE (%s)" % col_name) def _visit_primary_key(self, table, col_name, delta): # Add primary key to the current column self.append("ADD PRIMARY KEY (%s)" % col_name) def _visit_column_name(self, table, col_name, delta): column = delta.result_column.copy() # Delete the primary key before renaming the column if column.primary_key: try: self.start_alter_table(table) self.append("DROP PRIMARY KEY") self.execute() except Exception: LOG.debug("Continue since Primary key does not exist.") self.start_alter_table(table) new_name = self.preparer.format_column(delta.result_column) self.append("RENAME COLUMN %s TO %s" % (col_name, new_name)) if column.primary_key: # execute the rename before adding primary key back self.execute() self.start_alter_table(table) self.append("ADD PRIMARY KEY (%s)" % new_name) def _visit_column_nullable(self, table, col_name, delta): self.append("ALTER COLUMN %s " % col_name) nullable = delta['nullable'] if nullable: self.append("DROP NOT NULL") else: self.append("SET NOT NULL") def _visit_column_default(self, table, col_name, delta): default_text = self.get_column_default_string(delta.result_column) self.append("ALTER COLUMN %s " % col_name) if default_text is None: self.append("DROP DEFAULT") else: self.append("SET WITH DEFAULT %s" % default_text) def _visit_column_change(self, table, col_name, delta): column = delta.result_column.copy() # Delete the primary key before if column.primary_key: try: self.start_alter_table(table) self.append("DROP PRIMARY KEY") self.execute() except Exception: LOG.debug("Continue since Primary key does not exist.") # Delete the identity before try: self.start_alter_table(table) 
self.append("ALTER COLUMN %s DROP IDENTITY" % col_name) self.execute() except Exception: LOG.debug("Continue since identity does not exist.") column.default = None if not column.table: column.table = delta.table self.start_alter_table(table) self.append("ALTER COLUMN %s " % col_name) self.append("SET DATA TYPE ") type_text = self.dialect.type_compiler.process( delta.result_column.type) self.append(type_text) class IBMDBConstraintGenerator(ansisql.ANSIConstraintGenerator): def _visit_constraint(self, constraint): constraint.name = self.get_constraint_name(constraint) if (isinstance(constraint, UniqueConstraint) and is_unique_constraint_with_null_columns_supported( self.dialect)): for column in constraint: if column.nullable: constraint.exclude_nulls = True break if getattr(constraint, 'exclude_nulls', None): index = Index(constraint.name, *(column for column in constraint), unique=True) sql = self.process(CreateIndex(index)) sql += ' EXCLUDE NULL KEYS' else: sql = self.process(AddConstraint(constraint)) self.append(sql) self.execute() class IBMDBConstraintDropper(ansisql.ANSIConstraintDropper, ansisql.ANSIConstraintCommon): def _visit_constraint(self, constraint): constraint.name = self.get_constraint_name(constraint) if (isinstance(constraint, UniqueConstraint) and is_unique_constraint_with_null_columns_supported( self.dialect)): for column in constraint: if column.nullable: constraint.exclude_nulls = True break if getattr(constraint, 'exclude_nulls', None): if hasattr(self, '_index_identifier'): # SA >= 0.6.5, < 0.8 index_name = self.preparer.quote( self._index_identifier(constraint.name)) else: # SA >= 0.8 index_name = self._prepared_index_name(constraint) sql = 'DROP INDEX %s ' % index_name else: sql = self.process(DropConstraint(constraint, cascade=constraint.cascade)) self.append(sql) self.execute() def visit_migrate_primary_key_constraint(self, constraint): self.start_alter_table(constraint.table) self.append("DROP PRIMARY KEY") self.execute() class 
IBMDBDialect(ansisql.ANSIDialect): columngenerator = IBMDBColumnGenerator columndropper = IBMDBColumnDropper schemachanger = IBMDBSchemaChanger constraintgenerator = IBMDBConstraintGenerator constraintdropper = IBMDBConstraintDropper sqlalchemy-migrate-0.13.0/migrate/changeset/databases/postgres.py0000664000175000017500000000215113553670475025163 0ustar zuulzuul00000000000000""" `PostgreSQL`_ database specific implementations of changeset classes. .. _`PostgreSQL`: http://www.postgresql.org/ """ from migrate.changeset import ansisql from sqlalchemy.databases import postgresql as sa_base PGSchemaGenerator = sa_base.PGDDLCompiler class PGColumnGenerator(PGSchemaGenerator, ansisql.ANSIColumnGenerator): """PostgreSQL column generator implementation.""" pass class PGColumnDropper(ansisql.ANSIColumnDropper): """PostgreSQL column dropper implementation.""" pass class PGSchemaChanger(ansisql.ANSISchemaChanger): """PostgreSQL schema changer implementation.""" pass class PGConstraintGenerator(ansisql.ANSIConstraintGenerator): """PostgreSQL constraint generator implementation.""" pass class PGConstraintDropper(ansisql.ANSIConstraintDropper): """PostgreSQL constraint dropper implementation.""" pass class PGDialect(ansisql.ANSIDialect): columngenerator = PGColumnGenerator columndropper = PGColumnDropper schemachanger = PGSchemaChanger constraintgenerator = PGConstraintGenerator constraintdropper = PGConstraintDropper sqlalchemy-migrate-0.13.0/migrate/changeset/databases/firebird.py0000664000175000017500000000673013553670475025102 0ustar zuulzuul00000000000000""" Firebird database specific implementations of changeset classes. 
""" from sqlalchemy.databases import firebird as sa_base from sqlalchemy.schema import PrimaryKeyConstraint from migrate import exceptions from migrate.changeset import ansisql FBSchemaGenerator = sa_base.FBDDLCompiler class FBColumnGenerator(FBSchemaGenerator, ansisql.ANSIColumnGenerator): """Firebird column generator implementation.""" class FBColumnDropper(ansisql.ANSIColumnDropper): """Firebird column dropper implementation.""" def visit_column(self, column): """Firebird supports 'DROP col' instead of 'DROP COLUMN col' syntax Drop primary key and unique constraints if dropped column is referencing it.""" if column.primary_key: if column.table.primary_key.columns.contains_column(column): column.table.primary_key.drop() # TODO: recreate primary key if it references more than this column for index in column.table.indexes: # "column in index.columns" causes problems as all # column objects compare equal and return a SQL expression if column.name in [col.name for col in index.columns]: index.drop() # TODO: recreate index if it references more than this column for cons in column.table.constraints: if isinstance(cons,PrimaryKeyConstraint): # will be deleted only when the column its on # is deleted! 
continue should_drop = column.name in cons.columns if should_drop: self.start_alter_table(column) self.append("DROP CONSTRAINT ") self.append(self.preparer.format_constraint(cons)) self.execute() # TODO: recreate unique constraint if it references more than this column self.start_alter_table(column) self.append('DROP %s' % self.preparer.format_column(column)) self.execute() class FBSchemaChanger(ansisql.ANSISchemaChanger): """Firebird schema changer implementation.""" def visit_table(self, table): """Rename table not supported""" raise exceptions.NotSupportedError( "Firebird does not support renaming tables.") def _visit_column_name(self, table, column, delta): self.start_alter_table(table) col_name = self.preparer.quote(delta.current_name) new_name = self.preparer.format_column(delta.result_column) self.append('ALTER COLUMN %s TO %s' % (col_name, new_name)) def _visit_column_nullable(self, table, column, delta): """Changing NULL is not supported""" # TODO: http://www.firebirdfaq.org/faq103/ raise exceptions.NotSupportedError( "Firebird does not support altering NULL behavior.") class FBConstraintGenerator(ansisql.ANSIConstraintGenerator): """Firebird constraint generator implementation.""" class FBConstraintDropper(ansisql.ANSIConstraintDropper): """Firebird constraint dropper implementation.""" def cascade_constraint(self, constraint): """Cascading constraints is not supported""" raise exceptions.NotSupportedError( "Firebird does not support cascading constraints") class FBDialect(ansisql.ANSIDialect): columngenerator = FBColumnGenerator columndropper = FBColumnDropper schemachanger = FBSchemaChanger constraintgenerator = FBConstraintGenerator constraintdropper = FBConstraintDropper sqlalchemy-migrate-0.13.0/migrate/changeset/databases/__init__.py0000664000175000017500000000025513553670475025057 0ustar zuulzuul00000000000000""" This module contains database dialect specific changeset implementations. 
""" __all__ = [ 'postgres', 'sqlite', 'mysql', 'oracle', 'ibmdb2', ] sqlalchemy-migrate-0.13.0/migrate/changeset/databases/visitor.py0000664000175000017500000000511513553670475025017 0ustar zuulzuul00000000000000""" Module for visitor class mapping. """ import sqlalchemy as sa from migrate.changeset import ansisql from migrate.changeset.databases import (sqlite, postgres, mysql, oracle, firebird) # Map SA dialects to the corresponding Migrate extensions DIALECTS = { "default": ansisql.ANSIDialect, "sqlite": sqlite.SQLiteDialect, "postgres": postgres.PGDialect, "postgresql": postgres.PGDialect, "mysql": mysql.MySQLDialect, "oracle": oracle.OracleDialect, "firebird": firebird.FBDialect, } # NOTE(mriedem): We have to conditionally check for DB2 in case ibm_db_sa # isn't available since ibm_db_sa is not packaged in sqlalchemy like the # other dialects. try: from migrate.changeset.databases import ibmdb2 DIALECTS["ibm_db_sa"] = ibmdb2.IBMDBDialect except ImportError: pass def get_engine_visitor(engine, name): """ Get the visitor implementation for the given database engine. :param engine: SQLAlchemy Engine :param name: Name of the visitor :type name: string :type engine: Engine :returns: visitor """ # TODO: link to supported visitors return get_dialect_visitor(engine.dialect, name) def get_dialect_visitor(sa_dialect, name): """ Get the visitor implementation for the given dialect. Finds the visitor implementation based on the dialect class and returns and instance initialized with the given name. Binds dialect specific preparer to visitor. 
""" # map sa dialect to migrate dialect and return visitor sa_dialect_name = getattr(sa_dialect, 'name', 'default') migrate_dialect_cls = DIALECTS[sa_dialect_name] visitor = getattr(migrate_dialect_cls, name) # bind preparer visitor.preparer = sa_dialect.preparer(sa_dialect) return visitor def run_single_visitor(engine, visitorcallable, element, connection=None, **kwargs): """Taken from :meth:`sqlalchemy.engine.base.Engine._run_single_visitor` with support for migrate visitors. """ if connection is None: conn = engine.connect() else: conn = connection visitor = visitorcallable(engine.dialect, conn) try: if hasattr(element, '__migrate_visit_name__'): fn = getattr(visitor, 'visit_' + element.__migrate_visit_name__) else: fn = getattr(visitor, 'visit_' + element.__visit_name__) fn(element, **kwargs) finally: if connection is None: conn.close() sqlalchemy-migrate-0.13.0/migrate/changeset/databases/sqlite.py0000664000175000017500000002047513553670475024627 0ustar zuulzuul00000000000000""" `SQLite`_ database specific implementations of changeset classes. .. _`SQLite`: http://www.sqlite.org/ """ try: # Python 3 from collections.abc import MutableMapping as DictMixin except ImportError: # Python 2 from UserDict import DictMixin from copy import copy import re from sqlalchemy.databases import sqlite as sa_base from sqlalchemy.schema import ForeignKeyConstraint from sqlalchemy.schema import UniqueConstraint from migrate import exceptions from migrate.changeset import ansisql SQLiteSchemaGenerator = sa_base.SQLiteDDLCompiler class SQLiteCommon(object): def _not_supported(self, op): raise exceptions.NotSupportedError("SQLite does not support " "%s; see http://www.sqlite.org/lang_altertable.html" % op) class SQLiteHelper(SQLiteCommon): def _filter_columns(self, cols, table): """Splits the string of columns and returns those only in the table. 
:param cols: comma-delimited string of table columns :param table: the table to check :return: list of columns in the table """ columns = [] for c in cols.split(","): if c in table.columns: # There was a bug in reflection of SQLite columns with # reserved identifiers as names (SQLite can return them # wrapped with double quotes), so strip double quotes. # Use append, not extend: extending a list with a string # would add the name's individual characters. columns.append(c.strip(' "')) return columns def _get_constraints(self, table): """Retrieve information about existing constraints of the table This feature is needed for recreate_table() to work properly. """ data = table.metadata.bind.execute( """SELECT sql FROM sqlite_master WHERE type='table' AND name=:table_name""", table_name=table.name ).fetchone()[0] UNIQUE_PATTERN = r"CONSTRAINT (\w+) UNIQUE \(([^\)]+)\)" constraints = [] for name, cols in re.findall(UNIQUE_PATTERN, data): # Filter out any columns that were dropped from the table. columns = self._filter_columns(cols, table) if columns: constraints.append(UniqueConstraint(*columns, name=name)) FKEY_PATTERN = r"CONSTRAINT (\w+) FOREIGN KEY \(([^\)]+)\)" for name, cols in re.findall(FKEY_PATTERN, data): # Filter out any columns that were dropped from the table. 
columns = self._filter_columns(cols, table) if columns: constraints.append(ForeignKeyConstraint(*columns, name=name)) return constraints def recreate_table(self, table, column=None, delta=None, omit_constraints=None): table_name = self.preparer.format_table(table) # we remove all indexes so as not to have # problems during copy and re-create for index in table.indexes: index.drop() # reflect existing constraints for constraint in self._get_constraints(table): table.append_constraint(constraint) # omit given constraints when creating a new table if required table.constraints = set([ cons for cons in table.constraints if omit_constraints is None or cons.name not in omit_constraints ]) # Use "PRAGMA legacy_alter_table = ON" with sqlite >= 3.26 when # using "ALTER TABLE RENAME TO migration_tmp" to maintain legacy # behavior. See: https://www.sqlite.org/src/info/ae9638e9c0ad0c36 if self.connection.engine.dialect.server_version_info >= (3, 26): self.append('PRAGMA legacy_alter_table = ON') self.execute() self.append('ALTER TABLE %s RENAME TO migration_tmp' % table_name) self.execute() if self.connection.engine.dialect.server_version_info >= (3, 26): self.append('PRAGMA legacy_alter_table = OFF') self.execute() insertion_string = self._modify_table(table, column, delta) table.create(bind=self.connection) self.append(insertion_string % {'table_name': table_name}) self.execute() self.append('DROP TABLE migration_tmp') self.execute() def visit_column(self, delta): if isinstance(delta, DictMixin): column = delta.result_column table = self._to_table(delta.table) else: column = delta table = self._to_table(column.table) self.recreate_table(table, column, delta) class SQLiteColumnGenerator(SQLiteSchemaGenerator, ansisql.ANSIColumnGenerator, # at the end so we get the normal # visit_column by default SQLiteHelper, SQLiteCommon ): """SQLite ColumnGenerator""" def _modify_table(self, table, column, delta): columns = ' ,'.join(map( self.preparer.format_column, [c for c in 
table.columns if c.name!=column.name])) return ('INSERT INTO %%(table_name)s (%(cols)s) ' 'SELECT %(cols)s from migration_tmp')%{'cols':columns} def visit_column(self,column): if column.foreign_keys: SQLiteHelper.visit_column(self,column) else: super(SQLiteColumnGenerator,self).visit_column(column) class SQLiteColumnDropper(SQLiteHelper, ansisql.ANSIColumnDropper): """SQLite ColumnDropper""" def _modify_table(self, table, column, delta): columns = ' ,'.join(map(self.preparer.format_column, table.columns)) return 'INSERT INTO %(table_name)s SELECT ' + columns + \ ' from migration_tmp' def visit_column(self,column): # For SQLite, we *have* to remove the column here so the table # is re-created properly. column.remove_from_table(column.table,unset_table=False) super(SQLiteColumnDropper,self).visit_column(column) class SQLiteSchemaChanger(SQLiteHelper, ansisql.ANSISchemaChanger): """SQLite SchemaChanger""" def _modify_table(self, table, column, delta): return 'INSERT INTO %(table_name)s SELECT * from migration_tmp' def visit_index(self, index): """Does not support ALTER INDEX""" self._not_supported('ALTER INDEX') class SQLiteConstraintGenerator(ansisql.ANSIConstraintGenerator, SQLiteHelper, SQLiteCommon): def visit_migrate_primary_key_constraint(self, constraint): tmpl = "CREATE UNIQUE INDEX %s ON %s ( %s )" cols = ', '.join(map(self.preparer.format_column, constraint.columns)) tname = self.preparer.format_table(constraint.table) name = self.get_constraint_name(constraint) msg = tmpl % (name, tname, cols) self.append(msg) self.execute() def _modify_table(self, table, column, delta): return 'INSERT INTO %(table_name)s SELECT * from migration_tmp' def visit_migrate_foreign_key_constraint(self, *p, **k): self.recreate_table(p[0].table) def visit_migrate_unique_constraint(self, *p, **k): self.recreate_table(p[0].table) class SQLiteConstraintDropper(ansisql.ANSIColumnDropper, SQLiteHelper, ansisql.ANSIConstraintCommon): def _modify_table(self, table, column, delta): return 
'INSERT INTO %(table_name)s SELECT * from migration_tmp' def visit_migrate_primary_key_constraint(self, constraint): tmpl = "DROP INDEX %s " name = self.get_constraint_name(constraint) msg = tmpl % (name) self.append(msg) self.execute() def visit_migrate_foreign_key_constraint(self, *p, **k): self.recreate_table(p[0].table, omit_constraints=[p[0].name]) def visit_migrate_check_constraint(self, *p, **k): self._not_supported('ALTER TABLE DROP CONSTRAINT') def visit_migrate_unique_constraint(self, *p, **k): self.recreate_table(p[0].table, omit_constraints=[p[0].name]) # TODO: technically primary key is a NOT NULL + UNIQUE constraint, should add NOT NULL to index class SQLiteDialect(ansisql.ANSIDialect): columngenerator = SQLiteColumnGenerator columndropper = SQLiteColumnDropper schemachanger = SQLiteSchemaChanger constraintgenerator = SQLiteConstraintGenerator constraintdropper = SQLiteConstraintDropper sqlalchemy-migrate-0.13.0/migrate/changeset/util.py0000664000175000017500000000042513553670475022345 0ustar zuulzuul00000000000000from migrate.changeset import SQLA_10 def fk_column_names(constraint): if SQLA_10: return [ constraint.columns[key].name for key in constraint.column_keys] else: return [ element.parent.name for element in constraint.elements] sqlalchemy-migrate-0.13.0/migrate/changeset/schema.py0000664000175000017500000006047213553670475022640 0ustar zuulzuul00000000000000""" Schema module providing common schema operations. 
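The recreate-table dance implemented by the SQLite helpers above (rename the table aside, create the new schema, copy the surviving rows, drop the copy) can be sketched with the standard-library sqlite3 module; this is a simplified illustration, not migrate's API:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, a TEXT, b TEXT)")
cur.execute("INSERT INTO t (a, b) VALUES ('keep', 'drop-me')")

# "Drop column b" by recreating the table without it -- the only way to
# perform this kind of ALTER on older SQLite versions.
cur.execute("ALTER TABLE t RENAME TO migration_tmp")
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, a TEXT)")
cur.execute("INSERT INTO t (id, a) SELECT id, a FROM migration_tmp")
cur.execute("DROP TABLE migration_tmp")

print(cur.execute("SELECT id, a FROM t").fetchall())  # [(1, 'keep')]
```

Indexes and constraints do not survive the rename/copy automatically, which is why the helpers above reflect them from sqlite_master and re-apply them on the new table.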
""" import abc try: # Python 3 from collections.abc import MutableMapping as DictMixin except ImportError: # Python 2 from UserDict import DictMixin import warnings import six import sqlalchemy from sqlalchemy.schema import ForeignKeyConstraint from sqlalchemy.schema import UniqueConstraint from migrate.exceptions import * from migrate.changeset import SQLA_07, SQLA_08 from migrate.changeset import util from migrate.changeset.databases.visitor import (get_engine_visitor, run_single_visitor) __all__ = [ 'create_column', 'drop_column', 'alter_column', 'rename_table', 'rename_index', 'ChangesetTable', 'ChangesetColumn', 'ChangesetIndex', 'ChangesetDefaultClause', 'ColumnDelta', ] def create_column(column, table=None, *p, **kw): """Create a column, given the table. API to :meth:`ChangesetColumn.create`. """ if table is not None: return table.create_column(column, *p, **kw) return column.create(*p, **kw) def drop_column(column, table=None, *p, **kw): """Drop a column, given the table. API to :meth:`ChangesetColumn.drop`. """ if table is not None: return table.drop_column(column, *p, **kw) return column.drop(*p, **kw) def rename_table(table, name, engine=None, **kw): """Rename a table. If Table instance is given, engine is not used. API to :meth:`ChangesetTable.rename`. :param table: Table to be renamed. :param name: New name for Table. :param engine: Engine instance. :type table: string or Table instance :type name: string :type engine: obj """ table = _to_table(table, engine) table.rename(name, **kw) def rename_index(index, name, table=None, engine=None, **kw): """Rename an index. If Index instance is given, table and engine are not used. API to :meth:`ChangesetIndex.rename`. :param index: Index to be renamed. :param name: New name for index. :param table: Table to which Index is reffered. :param engine: Engine instance. 
:type index: string or Index instance :type name: string :type table: string or Table instance :type engine: obj """ index = _to_index(index, table, engine) index.rename(name, **kw) def alter_column(*p, **k): """Alter a column. This is a helper function that creates a :class:`ColumnDelta` and runs it. :argument column: The name of the column to be altered or a :class:`ChangesetColumn` column representing it. :param table: A :class:`~sqlalchemy.schema.Table` or table name to for the table where the column will be changed. :param engine: The :class:`~sqlalchemy.engine.base.Engine` to use for table reflection and schema alterations. :returns: A :class:`ColumnDelta` instance representing the change. """ if 'table' not in k and isinstance(p[0], sqlalchemy.Column): k['table'] = p[0].table if 'engine' not in k: k['engine'] = k['table'].bind # deprecation if len(p) >= 2 and isinstance(p[1], sqlalchemy.Column): warnings.warn( "Passing a Column object to alter_column is deprecated." " Just pass in keyword parameters instead.", MigrateDeprecationWarning ) engine = k['engine'] # enough tests seem to break when metadata is always altered # that this crutch has to be left in until they can be sorted # out k['alter_metadata']=True delta = ColumnDelta(*p, **k) visitorcallable = get_engine_visitor(engine, 'schemachanger') _run_visitor(engine, visitorcallable, delta) return delta def _to_table(table, engine=None): """Return if instance of Table, else construct new with metadata""" if isinstance(table, sqlalchemy.Table): return table # Given: table name, maybe an engine meta = sqlalchemy.MetaData() if engine is not None: meta.bind = engine return sqlalchemy.Table(table, meta) def _to_index(index, table=None, engine=None): """Return if instance of Index, else construct new with metadata""" if isinstance(index, sqlalchemy.Index): return index # Given: index name; table name required table = _to_table(table, engine) ret = sqlalchemy.Index(index) ret.table = table return ret def 
_run_visitor( connectable, visitorcallable, element, connection=None, **kwargs ): if connection is not None: visitorcallable( connection.dialect, connection, **kwargs).traverse_single(element) else: conn = connectable.connect() try: visitorcallable( conn.dialect, conn, **kwargs).traverse_single(element) finally: conn.close() # Python3: if we just use: # # class ColumnDelta(DictMixin, sqlalchemy.schema.SchemaItem): # ... # # We get the following error: # TypeError: metaclass conflict: the metaclass of a derived class must be a # (non-strict) subclass of the metaclasses of all its bases. # # The complete inheritance/metaclass relationship list of ColumnDelta can be # summarized by this following dot file: # # digraph test123 { # ColumnDelta -> MutableMapping; # MutableMapping -> Mapping; # Mapping -> {Sized Iterable Container}; # {Sized Iterable Container} -> ABCMeta[style=dashed]; # # ColumnDelta -> SchemaItem; # SchemaItem -> {SchemaEventTarget Visitable}; # SchemaEventTarget -> object; # Visitable -> {VisitableType object} [style=dashed]; # VisitableType -> type; # } # # We need to use a metaclass that inherits from all the metaclasses of # DictMixin and sqlalchemy.schema.SchemaItem. Let's call it "MyMeta". class MyMeta(sqlalchemy.sql.visitors.VisitableType, abc.ABCMeta, object): pass class ColumnDelta(six.with_metaclass(MyMeta, DictMixin, sqlalchemy.schema.SchemaItem)): """Extracts the differences between two columns/column-parameters May receive parameters arranged in several different ways: * **current_column, new_column, \*p, \*\*kw** Additional parameters can be specified to override column differences. * **current_column, \*p, \*\*kw** Additional parameters alter current_column. Table name is extracted from current_column object. Name is changed to current_column.name from current_name, if current_name is specified. * **current_col_name, \*p, \*\*kw** Table kw must specified. 
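The diff extraction ColumnDelta performs can be illustrated with a plain-dict sketch (a hypothetical helper, not this class's API):

```python
# Compare two column descriptions and keep only the attributes that
# changed, the way ColumnDelta exposes altered attributes through a
# dict-like interface.
DIFF_KEYS = ('name', 'type', 'nullable', 'server_default')

def column_diff(current, new):
    return {key: new[key] for key in DIFF_KEYS
            if key in new and new[key] != current.get(key)}

old = {'name': 'login', 'type': 'VARCHAR(40)', 'nullable': True}
new = {'name': 'username', 'type': 'VARCHAR(40)', 'nullable': False}
print(column_diff(old, new))  # {'name': 'username', 'nullable': False}
```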
:param table: Table at which current Column should be bound to.\ If table name is given, reflection will be used. :type table: string or Table instance :param metadata: A :class:`MetaData` instance to store reflected table names :param engine: When reflecting tables, either engine or metadata must \ be specified to acquire engine object. :type engine: :class:`Engine` instance :returns: :class:`ColumnDelta` instance provides interface for altered attributes to \ `result_column` through :func:`dict` alike object. * :class:`ColumnDelta`.result_column is altered column with new attributes * :class:`ColumnDelta`.current_name is current name of column in db """ # Column attributes that can be altered diff_keys = ('name', 'type', 'primary_key', 'nullable', 'server_onupdate', 'server_default', 'autoincrement') diffs = dict() __visit_name__ = 'column' def __init__(self, *p, **kw): # 'alter_metadata' is not a public api. It exists purely # as a crutch until the tests that fail when 'alter_metadata' # behaviour always happens can be sorted out self.alter_metadata = kw.pop("alter_metadata", False) self.meta = kw.pop("metadata", None) self.engine = kw.pop("engine", None) # Things are initialized differently depending on how many column # parameters are given. Figure out how many and call the appropriate # method. 
if len(p) >= 1 and isinstance(p[0], sqlalchemy.Column): # At least one column specified if len(p) >= 2 and isinstance(p[1], sqlalchemy.Column): # Two columns specified diffs = self.compare_2_columns(*p, **kw) else: # Exactly one column specified diffs = self.compare_1_column(*p, **kw) else: # Zero columns specified if not len(p) or not isinstance(p[0], six.string_types): raise ValueError("First argument must be column name") diffs = self.compare_parameters(*p, **kw) self.apply_diffs(diffs) def __repr__(self): return '<ColumnDelta altermetadata=%r, %s>' % ( self.alter_metadata, super(ColumnDelta, self).__repr__() ) def __getitem__(self, key): if key not in self.keys(): raise KeyError("No such diff key, available: %s" % self.diffs ) return getattr(self.result_column, key) def __setitem__(self, key, value): if key not in self.keys(): raise KeyError("No such diff key, available: %s" % self.diffs ) setattr(self.result_column, key, value) def __delitem__(self, key): raise NotImplementedError def __len__(self): raise NotImplementedError def __iter__(self): raise NotImplementedError def keys(self): return self.diffs.keys() def compare_parameters(self, current_name, *p, **k): """Compares Column objects with reflection""" self.table = k.pop('table') self.result_column = self._table.c.get(current_name) if len(p): k = self._extract_parameters(p, k, self.result_column) return k def compare_1_column(self, col, *p, **k): """Compares one Column object""" self.table = k.pop('table', None) if self.table is None: self.table = col.table self.result_column = col if len(p): k = self._extract_parameters(p, k, self.result_column) return k def compare_2_columns(self, old_col, new_col, *p, **k): """Compares two Column objects""" self.process_column(new_col) self.table = k.pop('table', None) # we cannot use bool() on table in SA06 if self.table is None: self.table = old_col.table if self.table is None: self.table = new_col.table self.result_column = old_col # set differences # leave out some stuff for later comp for key in 
(set(self.diff_keys) - set(('type',))): val = getattr(new_col, key, None) if getattr(self.result_column, key, None) != val: k.setdefault(key, val) # inspect types if not self.are_column_types_eq(self.result_column.type, new_col.type): k.setdefault('type', new_col.type) if len(p): k = self._extract_parameters(p, k, self.result_column) return k def apply_diffs(self, diffs): """Populate dict and column object with new values""" self.diffs = diffs for key in self.diff_keys: if key in diffs: setattr(self.result_column, key, diffs[key]) self.process_column(self.result_column) # create an instance of class type if not yet if 'type' in diffs: if callable(self.result_column.type): self.result_column.type = self.result_column.type() if self.result_column.autoincrement and \ not issubclass( self.result_column.type._type_affinity, sqlalchemy.Integer): self.result_column.autoincrement = False # add column to the table if self.table is not None and self.alter_metadata: self.result_column.add_to_table(self.table) def are_column_types_eq(self, old_type, new_type): """Compares two types to be equal""" ret = old_type.__class__ == new_type.__class__ # String length is a special case if ret and isinstance(new_type, sqlalchemy.types.String): ret = (getattr(old_type, 'length', None) == \ getattr(new_type, 'length', None)) return ret def _extract_parameters(self, p, k, column): """Extracts data from p and modifies diffs""" p = list(p) while len(p): if isinstance(p[0], six.string_types): k.setdefault('name', p.pop(0)) elif isinstance(p[0], sqlalchemy.types.TypeEngine): k.setdefault('type', p.pop(0)) elif callable(p[0]): p[0] = p[0]() else: break if len(p): new_col = column.copy_fixed() new_col._init_items(*p) k = self.compare_2_columns(column, new_col, **k) return k def process_column(self, column): """Processes default values for column""" # XXX: this is a snippet from SA processing of positional parameters toinit = list() if column.server_default is not None: if 
isinstance(column.server_default, sqlalchemy.FetchedValue): toinit.append(column.server_default) else: toinit.append(sqlalchemy.DefaultClause(column.server_default)) if column.server_onupdate is not None: if isinstance(column.server_onupdate, FetchedValue): toinit.append(column.server_default) else: toinit.append(sqlalchemy.DefaultClause(column.server_onupdate, for_update=True)) if toinit: column._init_items(*toinit) def _get_table(self): return getattr(self, '_table', None) def _set_table(self, table): if isinstance(table, six.string_types): if self.alter_metadata: if not self.meta: raise ValueError("metadata must be specified for table" " reflection when using alter_metadata") meta = self.meta if self.engine: meta.bind = self.engine else: if not self.engine and not self.meta: raise ValueError("engine or metadata must be specified" " to reflect tables") if not self.engine: self.engine = self.meta.bind meta = sqlalchemy.MetaData(bind=self.engine) self._table = sqlalchemy.Table(table, meta, autoload=True) elif isinstance(table, sqlalchemy.Table): self._table = table if not self.alter_metadata: self._table.meta = sqlalchemy.MetaData(bind=self._table.bind) def _get_result_column(self): return getattr(self, '_result_column', None) def _set_result_column(self, column): """Set Column to Table based on alter_metadata evaluation.""" self.process_column(column) if not hasattr(self, 'current_name'): self.current_name = column.name if self.alter_metadata: self._result_column = column else: self._result_column = column.copy_fixed() table = property(_get_table, _set_table) result_column = property(_get_result_column, _set_result_column) class ChangesetTable(object): """Changeset extensions to SQLAlchemy tables.""" def create_column(self, column, *p, **kw): """Creates a column. The column parameter may be a column definition or the name of a column in this table. 
API to :meth:`ChangesetColumn.create` :param column: Column to be created :type column: Column instance or string """ if not isinstance(column, sqlalchemy.Column): # It's a column name column = getattr(self.c, str(column)) column.create(table=self, *p, **kw) def drop_column(self, column, *p, **kw): """Drop a column, given its name or definition. API to :meth:`ChangesetColumn.drop` :param column: Column to be droped :type column: Column instance or string """ if not isinstance(column, sqlalchemy.Column): # It's a column name try: column = getattr(self.c, str(column)) except AttributeError: # That column isn't part of the table. We don't need # its entire definition to drop the column, just its # name, so create a dummy column with the same name. column = sqlalchemy.Column(str(column), sqlalchemy.Integer()) column.drop(table=self, *p, **kw) def rename(self, name, connection=None, **kwargs): """Rename this table. :param name: New name of the table. :type name: string :param connection: reuse connection istead of creating new one. :type connection: :class:`sqlalchemy.engine.base.Connection` instance """ engine = self.bind self.new_name = name visitorcallable = get_engine_visitor(engine, 'schemachanger') run_single_visitor(engine, visitorcallable, self, connection, **kwargs) # Fix metadata registration self.name = name self.deregister() self._set_parent(self.metadata) def _meta_key(self): """Get the meta key for this table.""" return sqlalchemy.schema._get_table_key(self.name, self.schema) def deregister(self): """Remove this table from its metadata""" if SQLA_07: self.metadata._remove_table(self.name, self.schema) else: key = self._meta_key() meta = self.metadata if key in meta.tables: del meta.tables[key] class ChangesetColumn(object): """Changeset extensions to SQLAlchemy columns.""" def alter(self, *p, **k): """Makes a call to :func:`alter_column` for the column this method is called on. 
""" if 'table' not in k: k['table'] = self.table if 'engine' not in k: k['engine'] = k['table'].bind return alter_column(self, *p, **k) def create(self, table=None, index_name=None, unique_name=None, primary_key_name=None, populate_default=True, connection=None, **kwargs): """Create this column in the database. Assumes the given table exists. ``ALTER TABLE ADD COLUMN``, for most databases. :param table: Table instance to create on. :param index_name: Creates :class:`ChangesetIndex` on this column. :param unique_name: Creates :class:\ `~migrate.changeset.constraint.UniqueConstraint` on this column. :param primary_key_name: Creates :class:\ `~migrate.changeset.constraint.PrimaryKeyConstraint` on this column. :param populate_default: If True, created column will be \ populated with defaults :param connection: reuse connection istead of creating new one. :type table: Table instance :type index_name: string :type unique_name: string :type primary_key_name: string :type populate_default: bool :type connection: :class:`sqlalchemy.engine.base.Connection` instance :returns: self """ self.populate_default = populate_default self.index_name = index_name self.unique_name = unique_name self.primary_key_name = primary_key_name for cons in ('index_name', 'unique_name', 'primary_key_name'): self._check_sanity_constraints(cons) self.add_to_table(table) engine = self.table.bind visitorcallable = get_engine_visitor(engine, 'columngenerator') _run_visitor(engine, visitorcallable, self, connection, **kwargs) # TODO: reuse existing connection if self.populate_default and self.default is not None: stmt = table.update().values({self: engine._execute_default(self.default)}) engine.execute(stmt) return self def drop(self, table=None, connection=None, **kwargs): """Drop this column from the database, leaving its table intact. ``ALTER TABLE DROP COLUMN``, for most databases. :param connection: reuse connection istead of creating new one. 
:type connection: :class:`sqlalchemy.engine.base.Connection` instance """ if table is not None: self.table = table engine = self.table.bind visitorcallable = get_engine_visitor(engine, 'columndropper') _run_visitor(engine, visitorcallable, self, connection, **kwargs) self.remove_from_table(self.table, unset_table=False) self.table = None return self def add_to_table(self, table): if table is not None and self.table is None: if SQLA_07: table.append_column(self) else: self._set_parent(table) def _col_name_in_constraint(self,cons,name): return False def remove_from_table(self, table, unset_table=True): # TODO: remove primary keys, constraints, etc if unset_table: self.table = None to_drop = set() for index in table.indexes: columns = [] for col in index.columns: if col.name!=self.name: columns.append(col) if columns: index.columns = columns if SQLA_08: index.expressions = columns else: to_drop.add(index) table.indexes = table.indexes - to_drop to_drop = set() for cons in table.constraints: # TODO: deal with other types of constraint if isinstance(cons,(ForeignKeyConstraint, UniqueConstraint)): for col_name in cons.columns: if not isinstance(col_name,six.string_types): col_name = col_name.name if self.name==col_name: to_drop.add(cons) table.constraints = table.constraints - to_drop if table.c.contains_column(self): if SQLA_07: table._columns.remove(self) else: table.c.remove(self) # TODO: this is fixed in 0.6 def copy_fixed(self, **kw): """Create a copy of this ``Column``, with all attributes.""" return sqlalchemy.Column(self.name, self.type, self.default, key=self.key, primary_key=self.primary_key, nullable=self.nullable, index=self.index, unique=self.unique, onupdate=self.onupdate, autoincrement=self.autoincrement, server_default=self.server_default, server_onupdate=self.server_onupdate, *[c.copy(**kw) for c in self.constraints]) def _check_sanity_constraints(self, name): """Check if constraints names are correct""" obj = getattr(self, name) if (getattr(self, 
name[:-5]) and not obj): raise InvalidConstraintError("Column.create() accepts index_name," " primary_key_name and unique_name to generate constraints") if not isinstance(obj, six.string_types) and obj is not None: raise InvalidConstraintError( "%s argument for column must be constraint name" % name) class ChangesetIndex(object): """Changeset extensions to SQLAlchemy Indexes.""" __visit_name__ = 'index' def rename(self, name, connection=None, **kwargs): """Change the name of an index. :param name: New name of the Index. :type name: string :param connection: reuse connection istead of creating new one. :type connection: :class:`sqlalchemy.engine.base.Connection` instance """ engine = self.table.bind self.new_name = name visitorcallable = get_engine_visitor(engine, 'schemachanger') _run_visitor(engine, visitorcallable, self, connection, **kwargs) self.name = name class ChangesetDefaultClause(object): """Implements comparison between :class:`DefaultClause` instances""" def __eq__(self, other): if isinstance(other, self.__class__): if self.arg == other.arg: return True def __ne__(self, other): return not self.__eq__(other) sqlalchemy-migrate-0.13.0/migrate/changeset/constraint.py0000664000175000017500000001622613553670475023562 0ustar zuulzuul00000000000000""" This module defines standalone schema constraint classes. 
""" from sqlalchemy import schema from migrate.exceptions import * class ConstraintChangeset(object): """Base class for Constraint classes.""" def _normalize_columns(self, cols, table_name=False): """Given: column objects or names; return col names and (maybe) a table""" colnames = [] table = None for col in cols: if isinstance(col, schema.Column): if col.table is not None and table is None: table = col.table if table_name: col = '.'.join((col.table.name, col.name)) else: col = col.name colnames.append(col) return colnames, table def __do_imports(self, visitor_name, *a, **kw): engine = kw.pop('engine', self.table.bind) from migrate.changeset.databases.visitor import (get_engine_visitor, run_single_visitor) visitorcallable = get_engine_visitor(engine, visitor_name) run_single_visitor(engine, visitorcallable, self, *a, **kw) def create(self, *a, **kw): """Create the constraint in the database. :param engine: the database engine to use. If this is \ :keyword:`None` the instance's engine will be used :type engine: :class:`sqlalchemy.engine.base.Engine` :param connection: reuse connection istead of creating new one. :type connection: :class:`sqlalchemy.engine.base.Connection` instance """ # TODO: set the parent here instead of in __init__ self.__do_imports('constraintgenerator', *a, **kw) def drop(self, *a, **kw): """Drop the constraint from the database. :param engine: the database engine to use. If this is :keyword:`None` the instance's engine will be used :param cascade: Issue CASCADE drop if database supports it :type engine: :class:`sqlalchemy.engine.base.Engine` :type cascade: bool :param connection: reuse connection istead of creating new one. :type connection: :class:`sqlalchemy.engine.base.Connection` instance :returns: Instance with cleared columns """ self.cascade = kw.pop('cascade', False) self.__do_imports('constraintdropper', *a, **kw) # the spirit of Constraint objects is that they # are immutable (just like in a DB. they're only ADDed # or DROPped). 
#self.columns.clear() return self class PrimaryKeyConstraint(ConstraintChangeset, schema.PrimaryKeyConstraint): """Construct PrimaryKeyConstraint Migrate's additional parameters: :param cols: Columns in constraint. :param table: If columns are passed as strings, this kw is required :type table: Table instance :type cols: strings or Column instances """ __migrate_visit_name__ = 'migrate_primary_key_constraint' def __init__(self, *cols, **kwargs): colnames, table = self._normalize_columns(cols) table = kwargs.pop('table', table) super(PrimaryKeyConstraint, self).__init__(*colnames, **kwargs) if table is not None: self._set_parent(table) def autoname(self): """Mimic the database's automatic constraint names""" return "%s_pkey" % self.table.name class ForeignKeyConstraint(ConstraintChangeset, schema.ForeignKeyConstraint): """Construct ForeignKeyConstraint Migrate's additional parameters: :param columns: Columns in constraint :param refcolumns: Columns that this FK reffers to in another table. 
:param table: If columns are passed as strings, this kw is required :type table: Table instance :type columns: list of strings or Column instances :type refcolumns: list of strings or Column instances """ __migrate_visit_name__ = 'migrate_foreign_key_constraint' def __init__(self, columns, refcolumns, *args, **kwargs): colnames, table = self._normalize_columns(columns) table = kwargs.pop('table', table) refcolnames, reftable = self._normalize_columns(refcolumns, table_name=True) super(ForeignKeyConstraint, self).__init__(colnames, refcolnames, *args, **kwargs) if table is not None: self._set_parent(table) @property def referenced(self): return [e.column for e in self.elements] @property def reftable(self): return self.referenced[0].table def autoname(self): """Mimic the database's automatic constraint names""" if hasattr(self.columns, 'keys'): # SA <= 0.5 firstcol = self.columns[self.columns.keys()[0]] ret = "%(table)s_%(firstcolumn)s_fkey" % dict( table=firstcol.table.name, firstcolumn=firstcol.name,) else: # SA >= 0.6 ret = "%(table)s_%(firstcolumn)s_fkey" % dict( table=self.table.name, firstcolumn=self.columns[0],) return ret class CheckConstraint(ConstraintChangeset, schema.CheckConstraint): """Construct CheckConstraint Migrate's additional parameters: :param sqltext: Plain SQL text to check condition :param columns: If not name is applied, you must supply this kw\ to autoname constraint :param table: If columns are passed as strings, this kw is required :type table: Table instance :type columns: list of Columns instances :type sqltext: string """ __migrate_visit_name__ = 'migrate_check_constraint' def __init__(self, sqltext, *args, **kwargs): cols = kwargs.pop('columns', []) if not cols and not kwargs.get('name', False): raise InvalidConstraintError('You must either set "name"' 'parameter or "columns" to autogenarate it.') colnames, table = self._normalize_columns(cols) table = kwargs.pop('table', table) schema.CheckConstraint.__init__(self, sqltext, *args, 
**kwargs) if table is not None: self._set_parent(table) self.colnames = colnames def autoname(self): return "%(table)s_%(cols)s_check" % \ dict(table=self.table.name, cols="_".join(self.colnames)) class UniqueConstraint(ConstraintChangeset, schema.UniqueConstraint): """Construct UniqueConstraint Migrate's additional parameters: :param cols: Columns in constraint. :param table: If columns are passed as strings, this kw is required :type table: Table instance :type cols: strings or Column instances .. versionadded:: 0.6.0 """ __migrate_visit_name__ = 'migrate_unique_constraint' def __init__(self, *cols, **kwargs): self.colnames, table = self._normalize_columns(cols) table = kwargs.pop('table', table) super(UniqueConstraint, self).__init__(*self.colnames, **kwargs) if table is not None: self._set_parent(table) def autoname(self): """Mimic the database's automatic constraint names""" return "%s_%s_key" % (self.table.name, self.colnames[0]) sqlalchemy-migrate-0.13.0/migrate/changeset/__init__.py0000664000175000017500000000140413553670475023125 0ustar zuulzuul00000000000000""" This module extends SQLAlchemy and provides additional DDL [#]_ support. .. 
[#] SQL Data Definition Language
"""
import re

import sqlalchemy
from sqlalchemy import __version__ as _sa_version

_sa_version = tuple(int(re.match("\d+", x).group(0))
                    for x in _sa_version.split("."))
SQLA_07 = _sa_version >= (0, 7)
SQLA_08 = _sa_version >= (0, 8)
SQLA_09 = _sa_version >= (0, 9)
SQLA_10 = _sa_version >= (1, 0)

del re
del _sa_version

from migrate.changeset.schema import *
from migrate.changeset.constraint import *

sqlalchemy.schema.Table.__bases__ += (ChangesetTable, )
sqlalchemy.schema.Column.__bases__ += (ChangesetColumn, )
sqlalchemy.schema.Index.__bases__ += (ChangesetIndex, )
sqlalchemy.schema.DefaultClause.__bases__ += (ChangesetDefaultClause, )
sqlalchemy-migrate-0.13.0/migrate/changeset/ansisql.py0000664000175000017500000002577013553670475023044 0ustar zuulzuul00000000000000"""
   Extensions to SQLAlchemy for altering existing tables.

   At the moment, this isn't so much based off of ANSI as much as
   things that just happen to work with multiple databases.
"""
import sqlalchemy as sa
from sqlalchemy.schema import SchemaVisitor
from sqlalchemy.engine.default import DefaultDialect
from sqlalchemy.sql import ClauseElement
from sqlalchemy.schema import (ForeignKeyConstraint,
                               PrimaryKeyConstraint,
                               CheckConstraint,
                               UniqueConstraint,
                               Index)

from migrate import exceptions
import sqlalchemy.sql.compiler
from migrate.changeset import constraint
from migrate.changeset import util

from six.moves import StringIO

from sqlalchemy.schema import AddConstraint, DropConstraint
from sqlalchemy.sql.compiler import DDLCompiler
SchemaGenerator = SchemaDropper = DDLCompiler


class AlterTableVisitor(SchemaVisitor):
    """Common operations for ``ALTER TABLE`` statements."""

    # engine.Compiler looks for .statement
    # when it spawns off a new compiler
    statement = ClauseElement()

    def append(self, s):
        """Append content to the SchemaIterator's query buffer."""
        self.buffer.write(s)

    def execute(self):
        """Execute the contents of the SchemaIterator's buffer."""
        try:
            return self.connection.execute(self.buffer.getvalue())
        finally:
            self.buffer.seek(0)
            self.buffer.truncate()

    def __init__(self, dialect, connection, **kw):
        self.connection = connection
        self.buffer = StringIO()
        self.preparer = dialect.identifier_preparer
        self.dialect = dialect

    def traverse_single(self, elem):
        ret = super(AlterTableVisitor, self).traverse_single(elem)
        if ret:
            # adapt to 0.6 which uses a string-returning
            # object
            self.append(" %s" % ret)

    def _to_table(self, param):
        """Returns the table object for the given param object."""
        if isinstance(param, (sa.Column, sa.Index, sa.schema.Constraint)):
            ret = param.table
        else:
            ret = param
        return ret

    def start_alter_table(self, param):
        """Returns the start of an ``ALTER TABLE`` SQL-Statement.

        Use the param object to determine the table name and use it
        for building the SQL statement.

        :param param: object to determine the table from
        :type param: :class:`sqlalchemy.Column`, :class:`sqlalchemy.Index`,
          :class:`sqlalchemy.schema.Constraint`, :class:`sqlalchemy.Table`,
          or string (table name)
        """
        table = self._to_table(param)
        self.append('\nALTER TABLE %s ' % self.preparer.format_table(table))
        return table


class ANSIColumnGenerator(AlterTableVisitor, SchemaGenerator):
    """Extends ansisql generator for column creation (alter table add col)"""

    def visit_column(self, column):
        """Create a column (table already exists).

        :param column: column object
        :type column: :class:`sqlalchemy.Column` instance
        """
        if column.default is not None:
            self.traverse_single(column.default)

        table = self.start_alter_table(column)
        self.append("ADD ")
        self.append(self.get_column_specification(column))

        for cons in column.constraints:
            self.traverse_single(cons)
        self.execute()

        # ALTER TABLE STATEMENTS

        # add indexes and unique constraints
        if column.index_name:
            Index(column.index_name, column).create()
        elif column.unique_name:
            constraint.UniqueConstraint(column,
                                        name=column.unique_name).create()

        # SA bounds FK constraints to table, add manually
        for fk in column.foreign_keys:
            self.add_foreignkey(fk.constraint)

        # add primary key constraint if needed
        if column.primary_key_name:
            cons = constraint.PrimaryKeyConstraint(
                column, name=column.primary_key_name)
            cons.create()

    def add_foreignkey(self, fk):
        self.connection.execute(AddConstraint(fk))


class ANSIColumnDropper(AlterTableVisitor, SchemaDropper):
    """Extends ANSI SQL dropper for column dropping (``ALTER TABLE
    DROP COLUMN``).
    """

    def visit_column(self, column):
        """Drop a column from its table.

        :param column: the column object
        :type column: :class:`sqlalchemy.Column`
        """
        table = self.start_alter_table(column)
        self.append('DROP COLUMN %s' % self.preparer.format_column(column))
        self.execute()


class ANSISchemaChanger(AlterTableVisitor, SchemaGenerator):
    """Manages changes to existing schema elements.

    Note that columns are schema elements; ``ALTER TABLE ADD COLUMN``
    is in SchemaGenerator.

    All items may be renamed. Columns can also have many of their properties -
    type, for example - changed.

    Each function is passed a tuple, containing (object, name); where
    object is a type of object you'd expect for that function
    (ie. table for visit_table) and name is the object's new
    name. NONE means the name is unchanged.
    """

    def visit_table(self, table):
        """Rename a table. Other ops aren't supported."""
        self.start_alter_table(table)
        self.append("RENAME TO %s" % self.preparer.quote(table.new_name))
        self.execute()

    def visit_index(self, index):
        """Rename an index"""
        if hasattr(self, '_validate_identifier'):
            # SA <= 0.6.3
            self.append("ALTER INDEX %s RENAME TO %s" % (
                self.preparer.quote(
                    self._validate_identifier(index.name, True)),
                self.preparer.quote(
                    self._validate_identifier(index.new_name, True))))
        elif hasattr(self, '_index_identifier'):
            # SA >= 0.6.5, < 0.8
            self.append("ALTER INDEX %s RENAME TO %s" % (
                self.preparer.quote(self._index_identifier(index.name)),
                self.preparer.quote(self._index_identifier(index.new_name))))
        else:
            # SA >= 0.8
            class NewName(object):
                """Map obj.name -> obj.new_name"""
                def __init__(self, index):
                    self.name = index.new_name
                    self._obj = index

                def __getattr__(self, attr):
                    if attr == 'name':
                        return getattr(self, attr)
                    return getattr(self._obj, attr)

            self.append("ALTER INDEX %s RENAME TO %s" % (
                self._prepared_index_name(index),
                self._prepared_index_name(NewName(index))))

        self.execute()

    def visit_column(self, delta):
        """Rename/change a column."""
        # ALTER COLUMN is implemented as several ALTER statements
        keys = delta.keys()
        if 'type' in keys:
            self._run_subvisit(delta, self._visit_column_type)
        if 'nullable' in keys:
            self._run_subvisit(delta, self._visit_column_nullable)
        if 'server_default' in keys:
            # Skip 'default': only handle server-side defaults, others
            # are managed by the app, not the db.
            self._run_subvisit(delta, self._visit_column_default)
        if 'name' in keys:
            self._run_subvisit(delta, self._visit_column_name,
                               start_alter=False)

    def _run_subvisit(self, delta, func, start_alter=True):
        """Runs visit method based on what needs to be changed on column"""
        table = self._to_table(delta.table)
        col_name = delta.current_name
        if start_alter:
            self.start_alter_column(table, col_name)
        ret = func(table, delta.result_column, delta)
        self.execute()

    def start_alter_column(self, table, col_name):
        """Starts ALTER COLUMN"""
        self.start_alter_table(table)
        self.append("ALTER COLUMN %s " % self.preparer.quote(col_name))

    def _visit_column_nullable(self, table, column, delta):
        nullable = delta['nullable']
        if nullable:
            self.append("DROP NOT NULL")
        else:
            self.append("SET NOT NULL")

    def _visit_column_default(self, table, column, delta):
        default_text = self.get_column_default_string(column)
        if default_text is not None:
            self.append("SET DEFAULT %s" % default_text)
        else:
            self.append("DROP DEFAULT")

    def _visit_column_type(self, table, column, delta):
        type_ = delta['type']
        type_text = str(type_.compile(dialect=self.dialect))
        self.append("TYPE %s" % type_text)

    def _visit_column_name(self, table, column, delta):
        self.start_alter_table(table)
        col_name = self.preparer.quote(delta.current_name)
        new_name = self.preparer.format_column(delta.result_column)
        self.append('RENAME COLUMN %s TO %s' % (col_name, new_name))


class ANSIConstraintCommon(AlterTableVisitor):
    """
    Migrate's constraints require a separate creation function from
    SA's: Migrate's constraints are created independently of a table;
    SA's are created at the same time as the table.
    """

    def get_constraint_name(self, cons):
        """Gets a name for the given constraint.

        If the name is already set it will be used otherwise the
        constraint's :meth:`autoname` method is used.

        :param cons: constraint object
        """
        if cons.name is not None:
            ret = cons.name
        else:
            ret = cons.name = cons.autoname()
        return ret

    def visit_migrate_primary_key_constraint(self, *p, **k):
        self._visit_constraint(*p, **k)

    def visit_migrate_foreign_key_constraint(self, *p, **k):
        self._visit_constraint(*p, **k)

    def visit_migrate_check_constraint(self, *p, **k):
        self._visit_constraint(*p, **k)

    def visit_migrate_unique_constraint(self, *p, **k):
        self._visit_constraint(*p, **k)


class ANSIConstraintGenerator(ANSIConstraintCommon, SchemaGenerator):
    def _visit_constraint(self, constraint):
        constraint.name = self.get_constraint_name(constraint)
        self.append(self.process(AddConstraint(constraint)))
        self.execute()


class ANSIConstraintDropper(ANSIConstraintCommon, SchemaDropper):
    def _visit_constraint(self, constraint):
        constraint.name = self.get_constraint_name(constraint)
        self.append(self.process(DropConstraint(constraint,
                                                cascade=constraint.cascade)))
        self.execute()


class ANSIDialect(DefaultDialect):
    columngenerator = ANSIColumnGenerator
    columndropper = ANSIColumnDropper
    schemachanger = ANSISchemaChanger
    constraintgenerator = ANSIConstraintGenerator
    constraintdropper = ANSIConstraintDropper
sqlalchemy-migrate-0.13.0/migrate/tests/0000775000175000017500000000000013553670602020206 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/tests/integrated/0000775000175000017500000000000013553670602022334 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/tests/integrated/test_docs.py0000664000175000017500000000070513553670475024707 0ustar zuulzuul00000000000000import doctest
import os

from migrate.tests import fixture

# Collect tests for all handwritten docs: doc/*.rst

dir = ('..', '..', '..', 'doc', 'source')
absdir = (os.path.dirname(os.path.abspath(__file__)),) + dir
dirpath = os.path.join(*absdir)
files = [f for f in os.listdir(dirpath) if f.endswith('.rst')]
paths = [os.path.join(*(dir + (f,))) for f in files]
assert len(paths) > 0
suite = doctest.DocFileSuite(*paths)
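# The suite above globs every .rst file under doc/source into one doctest
# suite. A self-contained sketch of the same pattern against a throwaway
# temporary directory (the file name and its doctest content are invented
# for illustration; this is not part of migrate's test suite):

```python
import doctest
import os
import shutil
import tempfile
import unittest

# build a throwaway directory containing one reST file with a doctest in it
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'example.rst'), 'w') as f:
    f.write("A doctest embedded in reST::\n\n    >>> 1 + 1\n    2\n")

# same shape as test_docs.py: glob the .rst files, build one suite
files = [f for f in os.listdir(tmpdir) if f.endswith('.rst')]
paths = [os.path.join(tmpdir, f) for f in files]
suite = doctest.DocFileSuite(*paths, module_relative=False)

result = unittest.TextTestRunner(verbosity=0).run(suite)
shutil.rmtree(tmpdir)
```

# Note: module_relative=False is needed here because the sketch passes
# absolute paths, while test_docs.py passes package-relative ones.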
def test_docs():
    suite.debug()
sqlalchemy-migrate-0.13.0/migrate/tests/integrated/__init__.py0000664000175000017500000000000013553670475024443 0ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/tests/versioning/0000775000175000017500000000000013553670602022371 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/tests/versioning/test_runchangeset.py0000664000175000017500000000325413553670475026504 0ustar zuulzuul00000000000000#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import shutil

from migrate.tests import fixture
from migrate.versioning.schema import *
from migrate.versioning import script


class TestRunChangeset(fixture.Pathed, fixture.DB):
    level = fixture.DB.CONNECT

    def _setup(self, url):
        super(TestRunChangeset, self)._setup(url)
        Repository.clear()
        self.path_repos = self.tmp_repos()
        # Create repository, script
        Repository.create(self.path_repos, 'repository_name')

    @fixture.usedb()
    def test_changeset_run(self):
        """Running a changeset against a repository gives expected results"""
        repos = Repository(self.path_repos)
        for i in range(10):
            repos.create_script('')
        try:
            ControlledSchema(self.engine, repos).drop()
        except:
            pass
        db = ControlledSchema.create(self.engine, repos)

        # Scripts are empty; we'll check version
        # correctness.
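        # The upgrade()/downgrade bounds checking asserted in this test can
        # be modelled with a small in-memory stand-in (FakeControlledSchema
        # is invented for illustration; it is not migrate's implementation):

```python
class FakeControlledSchema(object):
    """In-memory stand-in mimicking the upgrade() contract the test asserts."""

    def __init__(self, latest):
        self.latest = latest
        self.version = 0

    def upgrade(self, target):
        if target is None:
            # db.upgrade(None) means "latest is implied"
            target = self.latest
        if not 0 <= target <= self.latest:
            raise ValueError("no script for version %r" % target)
        # the real ControlledSchema would run each changeset script in between
        self.version = target

db = FakeControlledSchema(latest=10)
db.upgrade(1)
assert db.version == 1
db.upgrade(None)
assert db.version == 10
try:
    db.upgrade(11)
except ValueError:
    pass
assert db.version == 10  # a failed upgrade leaves the version untouched
```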
        # (Correct application of their content is checked elsewhere)
        self.assertEqual(db.version, 0)
        db.upgrade(1)
        self.assertEqual(db.version, 1)
        db.upgrade(5)
        self.assertEqual(db.version, 5)
        db.upgrade(5)
        self.assertEqual(db.version, 5)
        db.upgrade(None)  # Latest is implied
        self.assertEqual(db.version, 10)
        self.assertRaises(Exception, db.upgrade, 11)
        self.assertEqual(db.version, 10)
        db.upgrade(9)
        self.assertEqual(db.version, 9)
        db.upgrade(0)
        self.assertEqual(db.version, 0)
        self.assertRaises(Exception, db.upgrade, -1)
        self.assertEqual(db.version, 0)
        #changeset = repos.changeset(self.url,0)
        db.drop()
sqlalchemy-migrate-0.13.0/migrate/tests/versioning/test_genmodel.py0000664000175000017500000002100413553670475025601 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*-

import os

import six
import sqlalchemy
from sqlalchemy import *

from migrate.versioning import genmodel, schemadiff
from migrate.changeset import schema

from migrate.tests import fixture


class TestSchemaDiff(fixture.DB):
    table_name = 'tmp_schemadiff'
    level = fixture.DB.CONNECT

    def _setup(self, url):
        super(TestSchemaDiff, self)._setup(url)
        self.meta = MetaData(self.engine)
        self.meta.reflect()
        self.meta.drop_all()  # in case junk tables are lying around in the test database
        self.meta = MetaData(self.engine)
        self.meta.reflect()  # needed if we just deleted some tables
        self.table = Table(self.table_name, self.meta,
            Column('id', Integer(), primary_key=True),
            Column('name', UnicodeText()),
            Column('data', UnicodeText()),
        )

    def _teardown(self):
        if self.table.exists():
            self.meta = MetaData(self.engine)
            self.meta.reflect()
            self.meta.drop_all()
        super(TestSchemaDiff, self)._teardown()

    def _applyLatestModel(self):
        diff = schemadiff.getDiffOfModelAgainstDatabase(
            self.meta, self.engine, excludeTables=['migrate_version'])
        genmodel.ModelGenerator(diff, self.engine).runB2A()

    # NOTE(mriedem): DB2 handles UnicodeText as LONG VARGRAPHIC
    # so the schema diffs on the columns don't work with this test.
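    # getDiffOfModelAgainstDatabase, used throughout the test below, reduces
    # model and database to a three-part diff: tables missing from each side
    # plus per-table differences. A dictionary-based sketch of that comparison
    # (diff_tables and the table mappings are invented for illustration; this
    # is not migrate's schemadiff implementation):

```python
def diff_tables(model, database):
    """Compare {table: [columns]} mappings the way the test reads the diff:
    (missing_from_database, missing_from_model, tables_with_differences)."""
    missing_from_db = [t for t in model if t not in database]
    missing_from_model = [t for t in database if t not in model]
    different = [t for t in model
                 if t in database and model[t] != database[t]]
    return missing_from_db, missing_from_model, different

model = {'tmp_schemadiff': ['id', 'name', 'data2']}
database = {'tmp_schemadiff': ['id', 'name', 'data'],
            'migrate_version': ['repository_id', 'repository_path', 'version']}

# excludeTables=['migrate_version'] in the real call; mimic it here
database = dict((t, c) for t, c in database.items() if t != 'migrate_version')
assert diff_tables(model, database) == ([], [], ['tmp_schemadiff'])
```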
    @fixture.usedb(not_supported='ibm_db_sa')
    def test_functional(self):

        def assertDiff(isDiff, tablesMissingInDatabase, tablesMissingInModel,
                       tablesWithDiff):
            diff = schemadiff.getDiffOfModelAgainstDatabase(
                self.meta, self.engine, excludeTables=['migrate_version'])
            self.assertEqual(
                (diff.tables_missing_from_B,
                 diff.tables_missing_from_A,
                 list(diff.tables_different.keys()),
                 bool(diff)),
                (tablesMissingInDatabase,
                 tablesMissingInModel,
                 tablesWithDiff,
                 isDiff)
            )

        # Model is defined but database is empty.
        assertDiff(True, [self.table_name], [], [])

        # Check Python upgrade and downgrade of database from updated model.
        diff = schemadiff.getDiffOfModelAgainstDatabase(
            self.meta, self.engine, excludeTables=['migrate_version'])
        decls, upgradeCommands, downgradeCommands = \
            genmodel.ModelGenerator(diff, self.engine).genB2AMigration()

        # Feature test for a recent SQLa feature;
        # expect different output in that case.
        if repr(String()) == 'String()':
            self.assertEqualIgnoreWhitespace(decls, '''
            from migrate.changeset import schema
            pre_meta = MetaData()
            post_meta = MetaData()
            tmp_schemadiff = Table('tmp_schemadiff', post_meta,
                Column('id', Integer, primary_key=True, nullable=False),
                Column('name', UnicodeText),
                Column('data', UnicodeText),
            )
            ''')
        else:
            self.assertEqualIgnoreWhitespace(decls, '''
            from migrate.changeset import schema
            pre_meta = MetaData()
            post_meta = MetaData()
            tmp_schemadiff = Table('tmp_schemadiff', post_meta,
                Column('id', Integer, primary_key=True, nullable=False),
                Column('name', UnicodeText(length=None)),
                Column('data', UnicodeText(length=None)),
            )
            ''')

        # Create table in database, now model should match database.
        self._applyLatestModel()
        assertDiff(False, [], [], [])

        # Check Python code gen from database.
        diff = schemadiff.getDiffOfModelAgainstDatabase(
            MetaData(), self.engine, excludeTables=['migrate_version'])
        src = genmodel.ModelGenerator(diff, self.engine).genBDefinition()

        namespace = {}
        six.exec_(src, namespace)

        c1 = Table('tmp_schemadiff', self.meta, autoload=True).c
        c2 = namespace['tmp_schemadiff'].c
        self.compare_columns_equal(c1, c2, ['type'])
        # TODO: get rid of ignoring type

        if not self.engine.name == 'oracle':
            # Add data, later we'll make sure it's still present.
            result = self.engine.execute(self.table.insert(),
                                         id=1, name=u'mydata')
            dataId = result.inserted_primary_key[0]

        # Modify table in model (by removing it and adding it back to model)
        # Drop column data, add columns data2 and data3.
        self.meta.remove(self.table)
        self.table = Table(self.table_name, self.meta,
            Column('id', Integer(), primary_key=True),
            Column('name', UnicodeText(length=None)),
            Column('data2', Integer(), nullable=True),
            Column('data3', Integer(), nullable=True),
        )
        assertDiff(True, [], [], [self.table_name])

        # Apply latest model changes and find no more diffs.
        self._applyLatestModel()
        assertDiff(False, [], [], [])

        # Drop column data3, add data4
        self.meta.remove(self.table)
        self.table = Table(self.table_name, self.meta,
            Column('id', Integer(), primary_key=True),
            Column('name', UnicodeText(length=None)),
            Column('data2', Integer(), nullable=True),
            Column('data4', Float(), nullable=True),
        )
        assertDiff(True, [], [], [self.table_name])

        diff = schemadiff.getDiffOfModelAgainstDatabase(
            self.meta, self.engine, excludeTables=['migrate_version'])
        decls, upgradeCommands, downgradeCommands = \
            genmodel.ModelGenerator(diff, self.engine).genB2AMigration(indent='')

        # decls have changed since genBDefinition
        six.exec_(decls, namespace)
        # migration commands expect a namespace containing migrate_engine
        namespace['migrate_engine'] = self.engine
        # run the migration up and down
        six.exec_(upgradeCommands, namespace)
        assertDiff(False, [], [], [])

        six.exec_(decls, namespace)
        six.exec_(downgradeCommands, namespace)
        assertDiff(True, [], [], [self.table_name])

        six.exec_(decls, namespace)
        six.exec_(upgradeCommands, namespace)
        assertDiff(False, [], [], [])

        if not self.engine.name == 'oracle':
            # Make sure data is still present.
            result = self.engine.execute(
                self.table.select(self.table.c.id == dataId))
            rows = result.fetchall()
            self.assertEqual(len(rows), 1)
            self.assertEqual(rows[0].name, 'mydata')

            # Add data, later we'll make sure it's still present.
            result = self.engine.execute(self.table.insert(),
                                         id=2, name=u'mydata2', data2=123)
            dataId2 = result.inserted_primary_key[0]

        # Change column type in model.
        self.meta.remove(self.table)
        self.table = Table(self.table_name, self.meta,
            Column('id', Integer(), primary_key=True),
            Column('name', UnicodeText(length=None)),
            Column('data2', String(255), nullable=True),
        )

        # XXX test type diff
        return

        assertDiff(True, [], [], [self.table_name])

        # Apply latest model changes and find no more diffs.
        self._applyLatestModel()
        assertDiff(False, [], [], [])

        if not self.engine.name == 'oracle':
            # Make sure data is still present.
            result = self.engine.execute(
                self.table.select(self.table.c.id == dataId2))
            rows = result.fetchall()
            self.assertEqual(len(rows), 1)
            self.assertEqual(rows[0].name, 'mydata2')
            self.assertEqual(rows[0].data2, '123')

            # Delete data, since we're about to make a required column.
            # Not even using sqlalchemy.PassiveDefault helps because we're
            # doing explicit column select.
            self.engine.execute(self.table.delete(), id=dataId)

        if not self.engine.name == 'firebird':
            # Change column nullable in model.
            self.meta.remove(self.table)
            self.table = Table(self.table_name, self.meta,
                Column('id', Integer(), primary_key=True),
                Column('name', UnicodeText(length=None)),
                Column('data2', String(255), nullable=False),
            )
            assertDiff(True, [], [], [self.table_name])
            # TODO test nullable diff

            # Apply latest model changes and find no more diffs.
            self._applyLatestModel()
            assertDiff(False, [], [], [])

        # Remove table from model.
        self.meta.remove(self.table)
        assertDiff(True, [], [self.table_name], [])
sqlalchemy-migrate-0.13.0/migrate/tests/versioning/test_cfgparse.py0000664000175000017500000000213013553670475025600 0ustar zuulzuul00000000000000#!/usr/bin/python
# -*- coding: utf-8 -*-

from migrate.versioning import cfgparse
from migrate.versioning.repository import *
from migrate.versioning.template import Template

from migrate.tests import fixture


class TestConfigParser(fixture.Base):

    def test_to_dict(self):
        """Correctly interpret config results as dictionaries"""
        parser = cfgparse.Parser(dict(default_value=42))
        self.assertTrue(len(parser.sections()) == 0)
        parser.add_section('section')
        parser.set('section', 'option', 'value')
        self.assertEqual(parser.get('section', 'option'), 'value')
        self.assertEqual(parser.to_dict()['section']['option'], 'value')

    def test_table_config(self):
        """We should be able to specify the table to be used with a repository"""
        default_text = Repository.prepare_config(Template().get_repository(),
            'repository_name', {})
        specified_text = Repository.prepare_config(Template().get_repository(),
'repository_name', {'version_table': '_other_table'}) self.assertNotEqual(default_text, specified_text) sqlalchemy-migrate-0.13.0/migrate/tests/versioning/test_api.py0000664000175000017500000001022413553670475024562 0ustar zuulzuul00000000000000#!/usr/bin/python # -*- coding: utf-8 -*- import six from migrate.exceptions import * from migrate.versioning import api from migrate.tests.fixture.pathed import * from migrate.tests.fixture import models from migrate.tests import fixture class TestAPI(Pathed): def test_help(self): self.assertTrue(isinstance(api.help('help'), six.string_types)) self.assertRaises(UsageError, api.help) self.assertRaises(UsageError, api.help, 'foobar') self.assertTrue(isinstance(api.help('create'), str)) # test that all commands return some text for cmd in api.__all__: content = api.help(cmd) self.assertTrue(content) def test_create(self): tmprepo = self.tmp_repos() api.create(tmprepo, 'temp') # repository already exists self.assertRaises(KnownError, api.create, tmprepo, 'temp') def test_script(self): repo = self.tmp_repos() api.create(repo, 'temp') api.script('first version', repo) def test_script_sql(self): repo = self.tmp_repos() api.create(repo, 'temp') api.script_sql('postgres', 'desc', repo) def test_version(self): repo = self.tmp_repos() api.create(repo, 'temp') api.version(repo) def test_version_control(self): repo = self.tmp_repos() api.create(repo, 'temp') api.version_control('sqlite:///', repo) api.version_control('sqlite:///', six.text_type(repo)) def test_source(self): repo = self.tmp_repos() api.create(repo, 'temp') api.script('first version', repo) api.script_sql('default', 'desc', repo) # no repository self.assertRaises(UsageError, api.source, 1) # stdout out = api.source(1, dest=None, repository=repo) self.assertTrue(out) # file out = api.source(1, dest=self.tmp_repos(), repository=repo) self.assertFalse(out) def test_manage(self): output = api.manage(os.path.join(self.temp_usable_dir, 'manage.py')) class 
TestSchemaAPI(fixture.DB, Pathed): def _setup(self, url): super(TestSchemaAPI, self)._setup(url) self.repo = self.tmp_repos() api.create(self.repo, 'temp') self.schema = api.version_control(url, self.repo) def _teardown(self): self.schema = api.drop_version_control(self.url, self.repo) super(TestSchemaAPI, self)._teardown() @fixture.usedb() def test_workflow(self): self.assertEqual(api.db_version(self.url, self.repo), 0) api.script('First Version', self.repo) self.assertEqual(api.db_version(self.url, self.repo), 0) api.upgrade(self.url, self.repo, 1) self.assertEqual(api.db_version(self.url, self.repo), 1) api.downgrade(self.url, self.repo, 0) self.assertEqual(api.db_version(self.url, self.repo), 0) api.test(self.url, self.repo) self.assertEqual(api.db_version(self.url, self.repo), 0) # preview # TODO: test output out = api.upgrade(self.url, self.repo, preview_py=True) out = api.upgrade(self.url, self.repo, preview_sql=True) api.upgrade(self.url, self.repo, 1) api.script_sql('default', 'desc', self.repo) self.assertRaises(UsageError, api.upgrade, self.url, self.repo, 2, preview_py=True) out = api.upgrade(self.url, self.repo, 2, preview_sql=True) # cant upgrade to version 1, already at version 1 self.assertEqual(api.db_version(self.url, self.repo), 1) self.assertRaises(KnownError, api.upgrade, self.url, self.repo, 0) @fixture.usedb() def test_compare_model_to_db(self): diff = api.compare_model_to_db(self.url, self.repo, models.meta) @fixture.usedb() def test_create_model(self): model = api.create_model(self.url, self.repo) @fixture.usedb() def test_make_update_script_for_model(self): model = api.make_update_script_for_model(self.url, self.repo, models.meta_old_rundiffs, models.meta_rundiffs) @fixture.usedb() def test_update_db_from_model(self): model = api.update_db_from_model(self.url, self.repo, models.meta_rundiffs) sqlalchemy-migrate-0.13.0/migrate/tests/versioning/test_schema.py0000664000175000017500000001513613553670475025260 0ustar 
zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- import os import shutil import six from migrate import exceptions from migrate.versioning.schema import * from migrate.versioning import script, schemadiff from sqlalchemy import * from migrate.tests import fixture class TestControlledSchema(fixture.Pathed, fixture.DB): # Transactions break postgres in this test; we'll clean up after ourselves level = fixture.DB.CONNECT def setUp(self): super(TestControlledSchema, self).setUp() self.path_repos = self.temp_usable_dir + '/repo/' self.repos = Repository.create(self.path_repos, 'repo_name') def _setup(self, url): self.setUp() super(TestControlledSchema, self)._setup(url) self.cleanup() def _teardown(self): super(TestControlledSchema, self)._teardown() self.cleanup() self.tearDown() def cleanup(self): # drop existing version table if necessary try: ControlledSchema(self.engine, self.repos).drop() except: # No table to drop; that's fine, be silent pass def tearDown(self): self.cleanup() super(TestControlledSchema, self).tearDown() @fixture.usedb() def test_version_control(self): """Establish version control on a particular database""" # Establish version control on this database dbcontrol = ControlledSchema.create(self.engine, self.repos) # Trying to create another DB this way fails: table exists self.assertRaises(exceptions.DatabaseAlreadyControlledError, ControlledSchema.create, self.engine, self.repos) # We can load a controlled DB this way, too dbcontrol0 = ControlledSchema(self.engine, self.repos) self.assertEqual(dbcontrol, dbcontrol0) # We can also use a repository path, instead of a repository dbcontrol0 = ControlledSchema(self.engine, self.repos.path) self.assertEqual(dbcontrol, dbcontrol0) # We don't have to use the same connection engine = create_engine(self.url) dbcontrol0 = ControlledSchema(engine, self.repos.path) self.assertEqual(dbcontrol, dbcontrol0) # Clean up: dbcontrol.drop() # Attempting to drop vc from a db without it should fail 
self.assertRaises(exceptions.DatabaseNotControlledError, dbcontrol.drop) # No table defined should raise error self.assertRaises(exceptions.DatabaseNotControlledError, ControlledSchema, self.engine, self.repos) @fixture.usedb() def test_version_control_specified(self): """Establish version control with a specified version""" # Establish version control on this database version = 0 dbcontrol = ControlledSchema.create(self.engine, self.repos, version) self.assertEqual(dbcontrol.version, version) # Correct when we load it, too dbcontrol = ControlledSchema(self.engine, self.repos) self.assertEqual(dbcontrol.version, version) dbcontrol.drop() # Now try it with a nonzero value version = 10 for i in range(version): self.repos.create_script('') self.assertEqual(self.repos.latest, version) # Test with some mid-range value dbcontrol = ControlledSchema.create(self.engine,self.repos, 5) self.assertEqual(dbcontrol.version, 5) dbcontrol.drop() # Test with max value dbcontrol = ControlledSchema.create(self.engine, self.repos, version) self.assertEqual(dbcontrol.version, version) dbcontrol.drop() @fixture.usedb() def test_version_control_invalid(self): """Try to establish version control with an invalid version""" versions = ('Thirteen', '-1', -1, '' , 13) # A fresh repository doesn't go up to version 13 yet for version in versions: #self.assertRaises(ControlledSchema.InvalidVersionError, # Can't have custom errors with assertRaises... 
try: ControlledSchema.create(self.engine, self.repos, version) self.assertTrue(False, repr(version)) except exceptions.InvalidVersionError: pass @fixture.usedb() def test_changeset(self): """Create changeset from controlled schema""" dbschema = ControlledSchema.create(self.engine, self.repos) # empty schema doesn't have changesets cs = dbschema.changeset() self.assertEqual(cs, {}) for i in range(5): self.repos.create_script('') self.assertEqual(self.repos.latest, 5) cs = dbschema.changeset(5) self.assertEqual(len(cs), 5) # cleanup dbschema.drop() @fixture.usedb() def test_upgrade_runchange(self): dbschema = ControlledSchema.create(self.engine, self.repos) for i in range(10): self.repos.create_script('') self.assertEqual(self.repos.latest, 10) dbschema.upgrade(10) self.assertRaises(ValueError, dbschema.upgrade, 'a') self.assertRaises(exceptions.InvalidVersionError, dbschema.runchange, 20, '', 1) # TODO: test for table version in db # cleanup dbschema.drop() @fixture.usedb() def test_create_model(self): """Test workflow to generate create_model""" model = ControlledSchema.create_model(self.engine, self.repos, declarative=False) self.assertTrue(isinstance(model, six.string_types)) model = ControlledSchema.create_model(self.engine, self.repos.path, declarative=True) self.assertTrue(isinstance(model, six.string_types)) @fixture.usedb() def test_compare_model_to_db(self): meta = self.construct_model() diff = ControlledSchema.compare_model_to_db(self.engine, meta, self.repos) self.assertTrue(isinstance(diff, schemadiff.SchemaDiff)) diff = ControlledSchema.compare_model_to_db(self.engine, meta, self.repos.path) self.assertTrue(isinstance(diff, schemadiff.SchemaDiff)) meta.drop_all(self.engine) @fixture.usedb() def test_update_db_from_model(self): dbschema = ControlledSchema.create(self.engine, self.repos) meta = self.construct_model() dbschema.update_db_from_model(meta) # TODO: test for table version in db # cleanup dbschema.drop() meta.drop_all(self.engine) def 
construct_model(self): meta = MetaData() user = Table('temp_model_schema', meta, Column('id', Integer), Column('user', String(245))) return meta # TODO: test how are tables populated in db sqlalchemy-migrate-0.13.0/migrate/tests/versioning/test_template.py0000664000175000017500000000611013553670475025623 0ustar zuulzuul00000000000000#!/usr/bin/python # -*- coding: utf-8 -*- import os import shutil import migrate.versioning.templates from migrate.versioning.template import * from migrate.versioning import api from migrate.tests import fixture class TestTemplate(fixture.Pathed): def test_templates(self): """We can find the path to all repository templates""" path = str(Template()) self.assertTrue(os.path.exists(path)) def test_repository(self): """We can find the path to the default repository""" path = Template().get_repository() self.assertTrue(os.path.exists(path)) def test_script(self): """We can find the path to the default migration script""" path = Template().get_script() self.assertTrue(os.path.exists(path)) def test_custom_templates_and_themes(self): """Users can define their own templates with themes""" new_templates_dir = os.path.join(self.temp_usable_dir, 'templates') manage_tmpl_file = os.path.join(new_templates_dir, 'manage/custom.py_tmpl') repository_tmpl_file = os.path.join(new_templates_dir, 'repository/custom/README') script_tmpl_file = os.path.join(new_templates_dir, 'script/custom.py_tmpl') sql_script_tmpl_file = os.path.join(new_templates_dir, 'sql_script/custom.py_tmpl') MANAGE_CONTENTS = 'print "manage.py"' README_CONTENTS = 'MIGRATE README!' 
SCRIPT_FILE_CONTENTS = 'print "script.py"' new_repo_dest = self.tmp_repos() new_manage_dest = self.tmp_py() # make new templates dir shutil.copytree(migrate.versioning.templates.__path__[0], new_templates_dir) shutil.copytree(os.path.join(new_templates_dir, 'repository/default'), os.path.join(new_templates_dir, 'repository/custom')) # edit templates f = open(manage_tmpl_file, 'w').write(MANAGE_CONTENTS) f = open(repository_tmpl_file, 'w').write(README_CONTENTS) f = open(script_tmpl_file, 'w').write(SCRIPT_FILE_CONTENTS) f = open(sql_script_tmpl_file, 'w').write(SCRIPT_FILE_CONTENTS) # create repository, manage file and python script kw = {} kw['templates_path'] = new_templates_dir kw['templates_theme'] = 'custom' api.create(new_repo_dest, 'repo_name', **kw) api.script('test', new_repo_dest, **kw) api.script_sql('postgres', 'foo', new_repo_dest, **kw) api.manage(new_manage_dest, **kw) # assert changes self.assertEqual(open(new_manage_dest).read(), MANAGE_CONTENTS) self.assertEqual(open(os.path.join(new_repo_dest, 'manage.py')).read(), MANAGE_CONTENTS) self.assertEqual(open(os.path.join(new_repo_dest, 'README')).read(), README_CONTENTS) self.assertEqual(open(os.path.join(new_repo_dest, 'versions/001_test.py')).read(), SCRIPT_FILE_CONTENTS) self.assertEqual(open(os.path.join(new_repo_dest, 'versions/002_foo_postgres_downgrade.sql')).read(), SCRIPT_FILE_CONTENTS) self.assertEqual(open(os.path.join(new_repo_dest, 'versions/002_foo_postgres_upgrade.sql')).read(), SCRIPT_FILE_CONTENTS) sqlalchemy-migrate-0.13.0/migrate/tests/versioning/test_shell.py0000664000175000017500000006133113553670475025125 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- import os import sys import tempfile import six from six.moves import cStringIO from sqlalchemy import MetaData, Table from migrate.exceptions import * from migrate.versioning.repository import Repository from migrate.versioning import genmodel, shell, api from migrate.tests.fixture import Shell, DB, 
usedb from migrate.tests.fixture import models class TestShellCommands(Shell): """Tests migrate.py commands""" def test_help(self): """Displays default help dialog""" self.assertEqual(self.env.run('migrate -h').returncode, 0) self.assertEqual(self.env.run('migrate --help').returncode, 0) self.assertEqual(self.env.run('migrate help').returncode, 0) def test_help_commands(self): """Display help on a specific command""" # we can only test that we get some output for cmd in api.__all__: result = self.env.run('migrate help %s' % cmd) self.assertTrue(isinstance(result.stdout, six.string_types)) self.assertTrue(result.stdout) self.assertFalse(result.stderr) def test_shutdown_logging(self): """Try to shutdown logging output""" repos = self.tmp_repos() result = self.env.run('migrate create %s repository_name' % repos) result = self.env.run('migrate version %s --disable_logging' % repos) self.assertEqual(result.stdout, '') result = self.env.run('migrate version %s -q' % repos) self.assertEqual(result.stdout, '') # TODO: assert logging messages to 0 shell.main(['version', repos], logging=False) def test_main_with_runpy(self): if sys.version_info[:2] == (2, 4): self.skipTest("runpy is not part of python2.4") from runpy import run_module try: original = sys.argv sys.argv=['X','--help'] run_module('migrate.versioning.shell', run_name='__main__') finally: sys.argv = original def _check_error(self,args,code,expected,**kw): original = sys.stderr try: actual = cStringIO() sys.stderr = actual try: shell.main(args,**kw) except SystemExit as e: self.assertEqual(code,e.args[0]) else: self.fail('No exception raised') finally: sys.stderr = original actual = actual.getvalue() self.assertTrue(expected in actual,'%r not in:\n"""\n%s\n"""'%(expected,actual)) def test_main(self): """Test main() function""" repos = self.tmp_repos() shell.main(['help']) shell.main(['help', 'create']) shell.main(['create', 'repo_name', '--preview_sql'], repository=repos) shell.main(['version', '--', 
'--repository=%s' % repos]) shell.main(['version', '-d', '--repository=%s' % repos, '--version=2']) self._check_error(['foobar'],2,'error: Invalid command foobar') self._check_error(['create', 'f', 'o', 'o'],2,'error: Too many arguments for command create: o') self._check_error(['create'],2,'error: Not enough arguments for command create: name, repository not specified') self._check_error(['create', 'repo_name'],2,'already exists', repository=repos) def test_create(self): """Repositories are created successfully""" repos = self.tmp_repos() # Creating a file that doesn't exist should succeed result = self.env.run('migrate create %s repository_name' % repos) # Files should actually be created self.assertTrue(os.path.exists(repos)) # The default table should not be None repos_ = Repository(repos) self.assertNotEqual(repos_.config.get('db_settings', 'version_table'), 'None') # Can't create it again: it already exists result = self.env.run('migrate create %s repository_name' % repos, expect_error=True) self.assertEqual(result.returncode, 2) def test_script(self): """We can create a migration script via the command line""" repos = self.tmp_repos() result = self.env.run('migrate create %s repository_name' % repos) result = self.env.run('migrate script --repository=%s Desc' % repos) self.assertTrue(os.path.exists('%s/versions/001_Desc.py' % repos)) result = self.env.run('migrate script More %s' % repos) self.assertTrue(os.path.exists('%s/versions/002_More.py' % repos)) result = self.env.run('migrate script "Some Random name" %s' % repos) self.assertTrue(os.path.exists('%s/versions/003_Some_Random_name.py' % repos)) def test_script_sql(self): """We can create a migration sql script via the command line""" repos = self.tmp_repos() result = self.env.run('migrate create %s repository_name' % repos) result = self.env.run('migrate script_sql mydb foo %s' % repos) self.assertTrue(os.path.exists('%s/versions/001_foo_mydb_upgrade.sql' % repos)) 
self.assertTrue(os.path.exists('%s/versions/001_foo_mydb_downgrade.sql' % repos)) # Test creating a second result = self.env.run('migrate script_sql postgres foo --repository=%s' % repos) self.assertTrue(os.path.exists('%s/versions/002_foo_postgres_upgrade.sql' % repos)) self.assertTrue(os.path.exists('%s/versions/002_foo_postgres_downgrade.sql' % repos)) # TODO: test --previews def test_manage(self): """Create a project management script""" script = self.tmp_py() self.assertTrue(not os.path.exists(script)) # No attempt is made to verify correctness of the repository path here result = self.env.run('migrate manage %s --repository=/bla/' % script) self.assertTrue(os.path.exists(script)) class TestShellRepository(Shell): """Shell commands on an existing repository/python script""" def setUp(self): """Create repository, python change script""" super(TestShellRepository, self).setUp() self.path_repos = self.tmp_repos() result = self.env.run('migrate create %s repository_name' % self.path_repos) def test_version(self): """Correctly detect repository version""" # Version: 0 (no scripts yet); successful execution result = self.env.run('migrate version --repository=%s' % self.path_repos) self.assertEqual(result.stdout.strip(), "0") # Also works as a positional param result = self.env.run('migrate version %s' % self.path_repos) self.assertEqual(result.stdout.strip(), "0") # Create a script and version should increment result = self.env.run('migrate script Desc %s' % self.path_repos) result = self.env.run('migrate version %s' % self.path_repos) self.assertEqual(result.stdout.strip(), "1") def test_source(self): """Correctly fetch a script's source""" result = self.env.run('migrate script Desc --repository=%s' % self.path_repos) filename = '%s/versions/001_Desc.py' % self.path_repos source = open(filename).read() self.assertTrue(source.find('def upgrade') >= 0) # Version is now 1 result = self.env.run('migrate version %s' % self.path_repos) 
self.assertEqual(result.stdout.strip(), "1") # Output/verify the source of version 1 result = self.env.run('migrate source 1 --repository=%s' % self.path_repos) self.assertEqual(result.stdout.strip(), source.strip()) # We can also send the source to a file... test that too result = self.env.run('migrate source 1 %s --repository=%s' % (filename, self.path_repos)) self.assertTrue(os.path.exists(filename)) fd = open(filename) result = fd.read() self.assertTrue(result.strip() == source.strip()) class TestShellDatabase(Shell, DB): """Commands associated with a particular database""" # We'll need to clean up after ourself, since the shell creates its own txn; # we need to connect to the DB to see if things worked level = DB.CONNECT @usedb() def test_version_control(self): """Ensure we can set version control on a database""" path_repos = repos = self.tmp_repos() url = self.url result = self.env.run('migrate create %s repository_name' % repos) result = self.env.run('migrate drop_version_control %(url)s %(repos)s'\ % locals(), expect_error=True) self.assertEqual(result.returncode, 1) result = self.env.run('migrate version_control %(url)s %(repos)s' % locals()) # Clean up result = self.env.run('migrate drop_version_control %(url)s %(repos)s' % locals()) # Attempting to drop vc from a database without it should fail result = self.env.run('migrate drop_version_control %(url)s %(repos)s'\ % locals(), expect_error=True) self.assertEqual(result.returncode, 1) @usedb() def test_wrapped_kwargs(self): """Commands with default arguments set by manage.py""" path_repos = repos = self.tmp_repos() url = self.url result = self.env.run('migrate create --name=repository_name %s' % repos) result = self.env.run('migrate drop_version_control %(url)s %(repos)s' % locals(), expect_error=True) self.assertEqual(result.returncode, 1) result = self.env.run('migrate version_control %(url)s %(repos)s' % locals()) result = self.env.run('migrate drop_version_control %(url)s %(repos)s' % locals()) 
@usedb() def test_version_control_specified(self): """Ensure we can set version control to a particular version""" path_repos = self.tmp_repos() url = self.url result = self.env.run('migrate create --name=repository_name %s' % path_repos) result = self.env.run('migrate drop_version_control %(url)s %(path_repos)s' % locals(), expect_error=True) self.assertEqual(result.returncode, 1) # Fill the repository path_script = self.tmp_py() version = 2 for i in range(version): result = self.env.run('migrate script Desc --repository=%s' % path_repos) # Repository version is correct result = self.env.run('migrate version %s' % path_repos) self.assertEqual(result.stdout.strip(), str(version)) # Apply versioning to DB result = self.env.run('migrate version_control %(url)s %(path_repos)s %(version)s' % locals()) # Test db version number (should start at 2) result = self.env.run('migrate db_version %(url)s %(path_repos)s' % locals()) self.assertEqual(result.stdout.strip(), str(version)) # Clean up result = self.env.run('migrate drop_version_control %(url)s %(path_repos)s' % locals()) @usedb() def test_upgrade(self): """Can upgrade a versioned database""" # Create a repository repos_name = 'repos_name' repos_path = self.tmp() result = self.env.run('migrate create %(repos_path)s %(repos_name)s' % locals()) self.assertEqual(self.run_version(repos_path), 0) # Version the DB result = self.env.run('migrate drop_version_control %s %s' % (self.url, repos_path), expect_error=True) result = self.env.run('migrate version_control %s %s' % (self.url, repos_path)) # Upgrades with latest version == 0 self.assertEqual(self.run_db_version(self.url, repos_path), 0) result = self.env.run('migrate upgrade %s %s' % (self.url, repos_path)) self.assertEqual(self.run_db_version(self.url, repos_path), 0) result = self.env.run('migrate upgrade %s %s' % (self.url, repos_path)) self.assertEqual(self.run_db_version(self.url, repos_path), 0) result = self.env.run('migrate upgrade %s %s 1' % (self.url, 
repos_path), expect_error=True) self.assertEqual(result.returncode, 1) result = self.env.run('migrate upgrade %s %s -1' % (self.url, repos_path), expect_error=True) self.assertEqual(result.returncode, 2) # Add a script to the repository; upgrade the db result = self.env.run('migrate script Desc --repository=%s' % (repos_path)) self.assertEqual(self.run_version(repos_path), 1) self.assertEqual(self.run_db_version(self.url, repos_path), 0) # Test preview result = self.env.run('migrate upgrade %s %s 0 --preview_sql' % (self.url, repos_path)) result = self.env.run('migrate upgrade %s %s 0 --preview_py' % (self.url, repos_path)) result = self.env.run('migrate upgrade %s %s' % (self.url, repos_path)) self.assertEqual(self.run_db_version(self.url, repos_path), 1) # Downgrade must have a valid version specified result = self.env.run('migrate downgrade %s %s' % (self.url, repos_path), expect_error=True) self.assertEqual(result.returncode, 2) result = self.env.run('migrate downgrade %s %s -1' % (self.url, repos_path), expect_error=True) self.assertEqual(result.returncode, 2) result = self.env.run('migrate downgrade %s %s 2' % (self.url, repos_path), expect_error=True) self.assertEqual(result.returncode, 2) self.assertEqual(self.run_db_version(self.url, repos_path), 1) result = self.env.run('migrate downgrade %s %s 0' % (self.url, repos_path)) self.assertEqual(self.run_db_version(self.url, repos_path), 0) result = self.env.run('migrate downgrade %s %s 1' % (self.url, repos_path), expect_error=True) self.assertEqual(result.returncode, 2) self.assertEqual(self.run_db_version(self.url, repos_path), 0) result = self.env.run('migrate drop_version_control %s %s' % (self.url, repos_path)) def _run_test_sqlfile(self, upgrade_script, downgrade_script): # TODO: add test script that checks if db really changed repos_path = self.tmp() repos_name = 'repos' result = self.env.run('migrate create %s %s' % (repos_path, repos_name)) result = self.env.run('migrate drop_version_control %s %s' % 
(self.url, repos_path), expect_error=True) result = self.env.run('migrate version_control %s %s' % (self.url, repos_path)) self.assertEqual(self.run_version(repos_path), 0) self.assertEqual(self.run_db_version(self.url, repos_path), 0) beforeCount = len(os.listdir(os.path.join(repos_path, 'versions'))) # hmm, this number changes sometimes based on running from svn result = self.env.run('migrate script_sql %s --repository=%s' % ('postgres', repos_path)) self.assertEqual(self.run_version(repos_path), 1) self.assertEqual(len(os.listdir(os.path.join(repos_path, 'versions'))), beforeCount + 2) open('%s/versions/001_postgres_upgrade.sql' % repos_path, 'a').write(upgrade_script) open('%s/versions/001_postgres_downgrade.sql' % repos_path, 'a').write(downgrade_script) self.assertEqual(self.run_db_version(self.url, repos_path), 0) self.assertRaises(Exception, self.engine.text('select * from t_table').execute) result = self.env.run('migrate upgrade %s %s' % (self.url, repos_path)) self.assertEqual(self.run_db_version(self.url, repos_path), 1) self.engine.text('select * from t_table').execute() result = self.env.run('migrate downgrade %s %s 0' % (self.url, repos_path)) self.assertEqual(self.run_db_version(self.url, repos_path), 0) self.assertRaises(Exception, self.engine.text('select * from t_table').execute) # The tests below are written with some postgres syntax, but the stuff # being tested (.sql files) ought to work with any db. 
@usedb(supported='postgres') def test_sqlfile(self): upgrade_script = """ create table t_table ( id serial, primary key(id) ); """ downgrade_script = """ drop table t_table; """ self.meta.drop_all() self._run_test_sqlfile(upgrade_script, downgrade_script) @usedb(supported='postgres') def test_sqlfile_comment(self): upgrade_script = """ -- Comments in SQL break postgres autocommit create table t_table ( id serial, primary key(id) ); """ downgrade_script = """ -- Comments in SQL break postgres autocommit drop table t_table; """ self._run_test_sqlfile(upgrade_script, downgrade_script) @usedb() def test_command_test(self): repos_name = 'repos_name' repos_path = self.tmp() result = self.env.run('migrate create repository_name --repository=%s' % repos_path) result = self.env.run('migrate drop_version_control %s %s' % (self.url, repos_path), expect_error=True) result = self.env.run('migrate version_control %s %s' % (self.url, repos_path)) self.assertEqual(self.run_version(repos_path), 0) self.assertEqual(self.run_db_version(self.url, repos_path), 0) # Empty script should succeed result = self.env.run('migrate script Desc %s' % repos_path) result = self.env.run('migrate test %s %s' % (self.url, repos_path)) self.assertEqual(self.run_version(repos_path), 1) self.assertEqual(self.run_db_version(self.url, repos_path), 0) # Error script should fail script_path = self.tmp_py() script_text=''' from sqlalchemy import * from migrate import * def upgrade(): print 'fgsfds' raise Exception() def downgrade(): print 'sdfsgf' raise Exception() '''.replace("\n ", "\n") file = open(script_path, 'w') file.write(script_text) file.close() result = self.env.run('migrate test %s %s bla' % (self.url, repos_path), expect_error=True) self.assertEqual(result.returncode, 2) self.assertEqual(self.run_version(repos_path), 1) self.assertEqual(self.run_db_version(self.url, repos_path), 0) # Nonempty script using migrate_engine should succeed script_path = self.tmp_py() script_text = ''' from sqlalchemy 
import * from migrate import * from migrate.changeset import schema meta = MetaData(migrate_engine) account = Table('account', meta, Column('id', Integer, primary_key=True), Column('login', Text), Column('passwd', Text), ) def upgrade(): # Upgrade operations go here. Don't create your own engine; use the engine # named 'migrate_engine' imported from migrate. meta.create_all() def downgrade(): # Operations to reverse the above upgrade go here. meta.drop_all() '''.replace("\n ", "\n") file = open(script_path, 'w') file.write(script_text) file.close() result = self.env.run('migrate test %s %s' % (self.url, repos_path)) self.assertEqual(self.run_version(repos_path), 1) self.assertEqual(self.run_db_version(self.url, repos_path), 0) @usedb() def test_rundiffs_in_shell(self): # This is a variant of the test_schemadiff tests but run through the shell level. # These shell tests are hard to debug (since they keep forking processes) # so they shouldn't replace the lower-level tests. repos_name = 'repos_name' repos_path = self.tmp() script_path = self.tmp_py() model_module = 'migrate.tests.fixture.models:meta_rundiffs' old_model_module = 'migrate.tests.fixture.models:meta_old_rundiffs' # Create empty repository. self.meta = MetaData(self.engine) self.meta.reflect() self.meta.drop_all() # in case junk tables are lying around in the test database result = self.env.run( 'migrate create %s %s' % (repos_path, repos_name), expect_stderr=True) result = self.env.run( 'migrate drop_version_control %s %s' % (self.url, repos_path), expect_stderr=True, expect_error=True) result = self.env.run( 'migrate version_control %s %s' % (self.url, repos_path), expect_stderr=True) self.assertEqual(self.run_version(repos_path), 0) self.assertEqual(self.run_db_version(self.url, repos_path), 0) # Setup helper script. 
result = self.env.run( 'migrate manage %s --repository=%s --url=%s --model=%s'\ % (script_path, repos_path, self.url, model_module), expect_stderr=True) self.assertTrue(os.path.exists(script_path)) # Model is defined but database is empty. result = self.env.run('migrate compare_model_to_db %s %s --model=%s' \ % (self.url, repos_path, model_module), expect_stderr=True) self.assertTrue( "tables missing from database: tmp_account_rundiffs" in result.stdout) # Test Deprecation result = self.env.run('migrate compare_model_to_db %s %s --model=%s' \ % (self.url, repos_path, model_module.replace(":", ".")), expect_stderr=True, expect_error=True) self.assertEqual(result.returncode, 0) self.assertTrue( "tables missing from database: tmp_account_rundiffs" in result.stdout) # Update db to latest model. result = self.env.run('migrate update_db_from_model %s %s %s'\ % (self.url, repos_path, model_module), expect_stderr=True) self.assertEqual(self.run_version(repos_path), 0) self.assertEqual(self.run_db_version(self.url, repos_path), 0) # version did not get bumped yet because new version not yet created result = self.env.run('migrate compare_model_to_db %s %s %s'\ % (self.url, repos_path, model_module), expect_stderr=True) self.assertTrue("No schema diffs" in result.stdout) result = self.env.run( 'migrate drop_version_control %s %s' % (self.url, repos_path), expect_stderr=True, expect_error=True) result = self.env.run( 'migrate version_control %s %s' % (self.url, repos_path), expect_stderr=True) result = self.env.run( 'migrate create_model %s %s' % (self.url, repos_path), expect_stderr=True) temp_dict = dict() six.exec_(result.stdout, temp_dict) # TODO: breaks on SA06 and SA05 - in need of total refactor - use different approach # TODO: compare whole table self.compare_columns_equal(models.tmp_account_rundiffs.c, temp_dict['tmp_account_rundiffs'].c, ['type']) ##self.assertTrue("""tmp_account_rundiffs = Table('tmp_account_rundiffs', meta, ##Column('id', Integer(), 
primary_key=True, nullable=False), ##Column('login', String(length=None, convert_unicode=False, assert_unicode=None)), ##Column('passwd', String(length=None, convert_unicode=False, assert_unicode=None))""" in result.stdout) ## We're happy with db changes, make first db upgrade script to go from version 0 -> 1. #result = self.env.run('migrate make_update_script_for_model', expect_error=True, expect_stderr=True) #self.assertTrue('Not enough arguments' in result.stderr) #result_script = self.env.run('migrate make_update_script_for_model %s %s %s %s'\ #% (self.url, repos_path, old_model_module, model_module)) #self.assertEqualIgnoreWhitespace(result_script.stdout, #'''from sqlalchemy import * #from migrate import * #from migrate.changeset import schema #meta = MetaData() #tmp_account_rundiffs = Table('tmp_account_rundiffs', meta, #Column('id', Integer(), primary_key=True, nullable=False), #Column('login', Text(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None, _warn_on_bytestring=False)), #Column('passwd', Text(length=None, convert_unicode=False, assert_unicode=None, unicode_error=None, _warn_on_bytestring=False)), #) #def upgrade(migrate_engine): ## Upgrade operations go here. Don't create your own engine; bind migrate_engine ## to your metadata #meta.bind = migrate_engine #tmp_account_rundiffs.create() #def downgrade(migrate_engine): ## Operations to reverse the above upgrade go here. #meta.bind = migrate_engine #tmp_account_rundiffs.drop()''') ## Save the upgrade script. 
#result = self.env.run('migrate script Desc %s' % repos_path) #upgrade_script_path = '%s/versions/001_Desc.py' % repos_path #open(upgrade_script_path, 'w').write(result_script.stdout) #result = self.env.run('migrate compare_model_to_db %s %s %s'\ #% (self.url, repos_path, model_module)) #self.assertTrue("No schema diffs" in result.stdout) self.meta.drop_all() # in case junk tables are lying around in the test database sqlalchemy-migrate-0.13.0/migrate/tests/versioning/test_script.py0000664000175000017500000002412213553670475025317 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- import imp import os import sys import shutil import six from migrate import exceptions from migrate.versioning import version, repository from migrate.versioning.script import * from migrate.versioning.util import * from migrate.tests import fixture from migrate.tests.fixture.models import tmp_sql_table class TestBaseScript(fixture.Pathed): def test_all(self): """Testing all basic BaseScript operations""" # verify / source / run src = self.tmp() open(src, 'w').close() bscript = BaseScript(src) BaseScript.verify(src) self.assertEqual(bscript.source(), '') self.assertRaises(NotImplementedError, bscript.run, 'foobar') class TestPyScript(fixture.Pathed, fixture.DB): cls = PythonScript def test_create(self): """We can create a migration script""" path = self.tmp_py() # Creating a file that doesn't exist should succeed self.cls.create(path) self.assertTrue(os.path.exists(path)) # Created file should be a valid script (If not, raises an error) self.cls.verify(path) # Can't create it again: it already exists self.assertRaises(exceptions.PathFoundError,self.cls.create,path) @fixture.usedb(supported='sqlite') def test_run(self): script_path = self.tmp_py() pyscript = PythonScript.create(script_path) pyscript.run(self.engine, 1) pyscript.run(self.engine, -1) self.assertRaises(exceptions.ScriptError, pyscript.run, self.engine, 0) self.assertRaises(exceptions.ScriptError, 
pyscript._func, 'foobar') # clean pyc file if six.PY3: os.remove(imp.cache_from_source(script_path)) else: os.remove(script_path + 'c') # test deprecated upgrade/downgrade with no arguments contents = open(script_path, 'r').read() f = open(script_path, 'w') f.write(contents.replace("upgrade(migrate_engine)", "upgrade()")) f.close() pyscript = PythonScript(script_path) pyscript._module = None try: pyscript.run(self.engine, 1) pyscript.run(self.engine, -1) except exceptions.ScriptError: pass else: self.fail() def test_verify_notfound(self): """Correctly verify a python migration script: nonexistant file""" path = self.tmp_py() self.assertFalse(os.path.exists(path)) # Fails on empty path self.assertRaises(exceptions.InvalidScriptError,self.cls.verify,path) self.assertRaises(exceptions.InvalidScriptError,self.cls,path) def test_verify_invalidpy(self): """Correctly verify a python migration script: invalid python file""" path=self.tmp_py() # Create empty file f = open(path,'w') f.write("def fail") f.close() self.assertRaises(Exception,self.cls.verify_module,path) # script isn't verified on creation, but on module reference py = self.cls(path) self.assertRaises(Exception,(lambda x: x.module),py) def test_verify_nofuncs(self): """Correctly verify a python migration script: valid python file; no upgrade func""" path = self.tmp_py() # Create empty file f = open(path, 'w') f.write("def zergling():\n\tprint('rush')") f.close() self.assertRaises(exceptions.InvalidScriptError, self.cls.verify_module, path) # script isn't verified on creation, but on module reference py = self.cls(path) self.assertRaises(exceptions.InvalidScriptError,(lambda x: x.module),py) @fixture.usedb(supported='sqlite') def test_preview_sql(self): """Preview SQL abstract from ORM layer (sqlite)""" path = self.tmp_py() f = open(path, 'w') content = ''' from migrate import * from sqlalchemy import * metadata = MetaData() UserGroup = Table('Link', metadata, Column('link1ID', Integer), Column('link2ID', 
Integer), UniqueConstraint('link1ID', 'link2ID')) def upgrade(migrate_engine): metadata.create_all(migrate_engine) ''' f.write(content) f.close() pyscript = self.cls(path) SQL = pyscript.preview_sql(self.url, 1) self.assertEqualIgnoreWhitespace(""" CREATE TABLE "Link" ("link1ID" INTEGER, "link2ID" INTEGER, UNIQUE ("link1ID", "link2ID")) """, SQL) # TODO: test: No SQL should be executed! def test_verify_success(self): """Correctly verify a python migration script: success""" path = self.tmp_py() # Succeeds after creating self.cls.create(path) self.cls.verify(path) # test for PythonScript.make_update_script_for_model @fixture.usedb() def test_make_update_script_for_model(self): """Construct script source from differences of two models""" self.setup_model_params() self.write_file(self.first_model_path, self.base_source) self.write_file(self.second_model_path, self.base_source + self.model_source) source_script = self.pyscript.make_update_script_for_model( engine=self.engine, oldmodel=load_model('testmodel_first:meta'), model=load_model('testmodel_second:meta'), repository=self.repo_path, ) self.assertTrue("['User'].create()" in source_script) self.assertTrue("['User'].drop()" in source_script) @fixture.usedb() def test_make_update_script_for_equal_models(self): """Try to make update script from two identical models""" self.setup_model_params() self.write_file(self.first_model_path, self.base_source + self.model_source) self.write_file(self.second_model_path, self.base_source + self.model_source) source_script = self.pyscript.make_update_script_for_model( engine=self.engine, oldmodel=load_model('testmodel_first:meta'), model=load_model('testmodel_second:meta'), repository=self.repo_path, ) self.assertFalse('User.create()' in source_script) self.assertFalse('User.drop()' in source_script) @fixture.usedb() def test_make_update_script_direction(self): """Check update scripts go in the right direction""" self.setup_model_params() self.write_file(self.first_model_path, 
self.base_source) self.write_file(self.second_model_path, self.base_source + self.model_source) source_script = self.pyscript.make_update_script_for_model( engine=self.engine, oldmodel=load_model('testmodel_first:meta'), model=load_model('testmodel_second:meta'), repository=self.repo_path, ) self.assertTrue(0 < source_script.find('upgrade') < source_script.find("['User'].create()") < source_script.find('downgrade') < source_script.find("['User'].drop()")) def setup_model_params(self): self.script_path = self.tmp_py() self.repo_path = self.tmp() self.first_model_path = os.path.join(self.temp_usable_dir, 'testmodel_first.py') self.second_model_path = os.path.join(self.temp_usable_dir, 'testmodel_second.py') self.base_source = """from sqlalchemy import *\nmeta = MetaData()\n""" self.model_source = """ User = Table('User', meta, Column('id', Integer, primary_key=True), Column('login', Unicode(40)), Column('passwd', String(40)), )""" self.repo = repository.Repository.create(self.repo_path, 'repo') self.pyscript = PythonScript.create(self.script_path) sys.modules.pop('testmodel_first', None) sys.modules.pop('testmodel_second', None) def write_file(self, path, contents): f = open(path, 'w') f.write(contents) f.close() class TestSqlScript(fixture.Pathed, fixture.DB): @fixture.usedb() def test_error(self): """Test if exception is raised on wrong script source""" src = self.tmp() f = open(src, 'w') f.write("""foobar""") f.close() sqls = SqlScript(src) self.assertRaises(Exception, sqls.run, self.engine) @fixture.usedb() def test_success(self): """Test sucessful SQL execution""" # cleanup and prepare python script tmp_sql_table.metadata.drop_all(self.engine, checkfirst=True) script_path = self.tmp_py() pyscript = PythonScript.create(script_path) # populate python script contents = open(script_path, 'r').read() contents = contents.replace("pass", "tmp_sql_table.create(migrate_engine)") contents = 'from migrate.tests.fixture.models import tmp_sql_table\n' + contents f = 
open(script_path, 'w') f.write(contents) f.close() # write SQL script from python script preview pyscript = PythonScript(script_path) src = self.tmp() f = open(src, 'w') f.write(pyscript.preview_sql(self.url, 1)) f.close() # run the change sqls = SqlScript(src) sqls.run(self.engine) tmp_sql_table.metadata.drop_all(self.engine, checkfirst=True) @fixture.usedb() def test_transaction_management_statements(self): """ Test that we can successfully execute SQL scripts with transaction management statements. """ for script_pattern in ( "BEGIN TRANSACTION; %s; COMMIT;", "BEGIN; %s; END TRANSACTION;", "/* comment */BEGIN TRANSACTION; %s; /* comment */COMMIT;", "/* comment */ BEGIN TRANSACTION; %s; /* comment */ COMMIT;", """ -- comment BEGIN TRANSACTION; %s; -- comment COMMIT;""", ): test_statement = ("CREATE TABLE TEST1 (field1 int); " "DROP TABLE TEST1") script = script_pattern % test_statement src = self.tmp() with open(src, 'wt') as f: f.write(script) sqls = SqlScript(src) sqls.run(self.engine) sqlalchemy-migrate-0.13.0/migrate/tests/versioning/test_keyedinstance.py0000664000175000017500000000241213553670475026637 0ustar zuulzuul00000000000000#!/usr/bin/python # -*- coding: utf-8 -*- from migrate.tests import fixture from migrate.versioning.util.keyedinstance import * class TestKeydInstance(fixture.Base): def test_unique(self): """UniqueInstance should produce unique object instances""" class Uniq1(KeyedInstance): @classmethod def _key(cls,key): return str(key) def __init__(self,value): self.value=value class Uniq2(KeyedInstance): @classmethod def _key(cls,key): return str(key) def __init__(self,value): self.value=value a10 = Uniq1('a') # Different key: different instance b10 = Uniq1('b') self.assertTrue(a10 is not b10) # Different class: different instance a20 = Uniq2('a') self.assertTrue(a10 is not a20) # Same key/class: same instance a11 = Uniq1('a') self.assertTrue(a10 is a11) # __init__ is called self.assertEqual(a10.value,'a') # clear() causes us to forget all 
existing instances Uniq1.clear() a12 = Uniq1('a') self.assertTrue(a10 is not a12) self.assertRaises(NotImplementedError, KeyedInstance._key) sqlalchemy-migrate-0.13.0/migrate/tests/versioning/test_version.py0000664000175000017500000001513513553670475025504 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- from migrate.exceptions import * from migrate.versioning.version import * from migrate.tests import fixture class TestVerNum(fixture.Base): def test_invalid(self): """Disallow invalid version numbers""" versions = ('-1', -1, 'Thirteen', '') for version in versions: self.assertRaises(ValueError, VerNum, version) def test_str(self): """Test str and repr version numbers""" self.assertEqual(str(VerNum(2)), '2') self.assertEqual(repr(VerNum(2)), '') def test_is(self): """Two version with the same number should be equal""" a = VerNum(1) b = VerNum(1) self.assertTrue(a is b) self.assertEqual(VerNum(VerNum(2)), VerNum(2)) def test_add(self): self.assertEqual(VerNum(1) + VerNum(1), VerNum(2)) self.assertEqual(VerNum(1) + 1, 2) self.assertEqual(VerNum(1) + 1, '2') self.assertTrue(isinstance(VerNum(1) + 1, VerNum)) def test_sub(self): self.assertEqual(VerNum(1) - 1, 0) self.assertTrue(isinstance(VerNum(1) - 1, VerNum)) self.assertRaises(ValueError, lambda: VerNum(0) - 1) def test_eq(self): """Two versions are equal""" self.assertEqual(VerNum(1), VerNum('1')) self.assertEqual(VerNum(1), 1) self.assertEqual(VerNum(1), '1') self.assertNotEqual(VerNum(1), 2) def test_ne(self): self.assertTrue(VerNum(1) != 2) self.assertFalse(VerNum(1) != 1) def test_lt(self): self.assertFalse(VerNum(1) < 1) self.assertTrue(VerNum(1) < 2) self.assertFalse(VerNum(2) < 1) def test_le(self): self.assertTrue(VerNum(1) <= 1) self.assertTrue(VerNum(1) <= 2) self.assertFalse(VerNum(2) <= 1) def test_gt(self): self.assertFalse(VerNum(1) > 1) self.assertFalse(VerNum(1) > 2) self.assertTrue(VerNum(2) > 1) def test_ge(self): self.assertTrue(VerNum(1) >= 1) self.assertTrue(VerNum(2) 
>= 1) self.assertFalse(VerNum(1) >= 2) def test_int_cast(self): ver = VerNum(3) # test __int__ self.assertEqual(int(ver), 3) # test __index__: range() doesn't call __int__ self.assertEqual(list(range(ver, ver)), []) class TestVersion(fixture.Pathed): def setUp(self): super(TestVersion, self).setUp() def test_str_to_filename(self): self.assertEqual(str_to_filename(''), '') self.assertEqual(str_to_filename('__'), '_') self.assertEqual(str_to_filename('a'), 'a') self.assertEqual(str_to_filename('Abc Def'), 'Abc_Def') self.assertEqual(str_to_filename('Abc "D" Ef'), 'Abc_D_Ef') self.assertEqual(str_to_filename("Abc's Stuff"), 'Abc_s_Stuff') self.assertEqual(str_to_filename("a b"), 'a_b') self.assertEqual(str_to_filename("a.b to c"), 'a_b_to_c') def test_collection(self): """Let's see how we handle versions collection""" coll = Collection(self.temp_usable_dir) coll.create_new_python_version("foo bar") coll.create_new_sql_version("postgres", "foo bar") coll.create_new_sql_version("sqlite", "foo bar") coll.create_new_python_version("") self.assertEqual(coll.latest, 4) self.assertEqual(len(coll.versions), 4) self.assertEqual(coll.version(4), coll.version(coll.latest)) # Check for non-existing version self.assertRaises(VersionNotFoundError, coll.version, 5) # Check for the current version self.assertEqual('4', coll.version(4).version) coll2 = Collection(self.temp_usable_dir) self.assertEqual(coll.versions, coll2.versions) Collection.clear() def test_old_repository(self): open(os.path.join(self.temp_usable_dir, '1'), 'w') self.assertRaises(Exception, Collection, self.temp_usable_dir) #TODO: def test_collection_unicode(self): # pass def test_create_new_python_version(self): coll = Collection(self.temp_usable_dir) coll.create_new_python_version("'") ver = coll.version() self.assertTrue(ver.script().source()) def test_create_new_sql_version(self): coll = Collection(self.temp_usable_dir) coll.create_new_sql_version("sqlite", "foo bar") ver = coll.version() ver_up = 
ver.script('sqlite', 'upgrade') ver_down = ver.script('sqlite', 'downgrade') ver_up.source() ver_down.source() def test_selection(self): """Verify right sql script is selected""" # Create empty directory. path = self.tmp_repos() os.mkdir(path) # Create files -- files must be present or you'll get an exception later. python_file = '001_initial_.py' sqlite_upgrade_file = '001_sqlite_upgrade.sql' default_upgrade_file = '001_default_upgrade.sql' for file_ in [sqlite_upgrade_file, default_upgrade_file, python_file]: filepath = '%s/%s' % (path, file_) open(filepath, 'w').close() ver = Version(1, path, [sqlite_upgrade_file]) self.assertEqual(os.path.basename(ver.script('sqlite', 'upgrade').path), sqlite_upgrade_file) ver = Version(1, path, [default_upgrade_file]) self.assertEqual(os.path.basename(ver.script('default', 'upgrade').path), default_upgrade_file) ver = Version(1, path, [sqlite_upgrade_file, default_upgrade_file]) self.assertEqual(os.path.basename(ver.script('sqlite', 'upgrade').path), sqlite_upgrade_file) ver = Version(1, path, [sqlite_upgrade_file, default_upgrade_file, python_file]) self.assertEqual(os.path.basename(ver.script('postgres', 'upgrade').path), default_upgrade_file) ver = Version(1, path, [sqlite_upgrade_file, python_file]) self.assertEqual(os.path.basename(ver.script('postgres', 'upgrade').path), python_file) def test_bad_version(self): ver = Version(1, self.temp_usable_dir, []) self.assertRaises(ScriptError, ver.add_script, '123.sql') # tests bad ibm_db_sa filename ver = Version(123, self.temp_usable_dir, []) self.assertRaises(ScriptError, ver.add_script, '123_ibm_db_sa_upgrade.sql') # tests that the name is ok but the script doesn't exist self.assertRaises(InvalidScriptError, ver.add_script, '123_test_ibm_db_sa_upgrade.sql') pyscript = os.path.join(self.temp_usable_dir, 'bla.py') open(pyscript, 'w') ver.add_script(pyscript) self.assertRaises(ScriptError, ver.add_script, 'bla.py') 
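The VerNum assertions above pin down a small arithmetic contract: version numbers reject negative or non-numeric input, compare equal across int and string forms, support addition and subtraction that return version numbers and never go below zero, and act as indices for `range()`. The following is a minimal sketch of that contract under an invented name (`SketchVerNum`); migrate's real `VerNum` additionally interns instances via `KeyedInstance`, which this sketch omits.

```python
class SketchVerNum(object):
    """Minimal sketch of the version-number contract tested above."""

    def __init__(self, value):
        if isinstance(value, SketchVerNum):
            value = value.value
        value = int(value)  # 'Thirteen', '' and the like raise ValueError here
        if value < 0:
            raise ValueError("version number cannot be negative")
        self.value = value

    def __add__(self, other):
        return SketchVerNum(self.value + int(SketchVerNum(other)))

    def __sub__(self, other):
        # Going below zero trips the constructor's negativity check.
        return SketchVerNum(self.value - int(SketchVerNum(other)))

    def __eq__(self, other):
        return self.value == int(SketchVerNum(other))

    def __lt__(self, other):
        return self.value < int(SketchVerNum(other))

    def __int__(self):
        return self.value

    # range() and friends use __index__, not __int__
    __index__ = __int__

    def __repr__(self):
        return '<SketchVerNum %d>' % self.value
```

The equality method coerces the other operand through the constructor, which is what lets `VerNum(1) == 1` and `VerNum(1) == '1'` both hold in the tests.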
sqlalchemy-migrate-0.13.0/migrate/tests/versioning/test_schemadiff.py

# -*- coding: utf-8 -*-
import os

from sqlalchemy import *

from migrate.versioning import schemadiff
from migrate.tests import fixture


class SchemaDiffBase(fixture.DB):

    level = fixture.DB.CONNECT

    def _make_table(self, *cols, **kw):
        self.table = Table('xtable', self.meta,
            Column('id', Integer(), primary_key=True),
            *cols
        )
        if kw.get('create', True):
            self.table.create()

    def _assert_diff(self, col_A, col_B):
        self._make_table(col_A)
        self.meta.clear()
        self._make_table(col_B, create=False)
        diff = self._run_diff()
        # print diff
        self.assertTrue(diff)
        self.assertEqual(1, len(diff.tables_different))
        td = list(diff.tables_different.values())[0]
        self.assertEqual(1, len(td.columns_different))
        cd = list(td.columns_different.values())[0]
        label_width = max(len(self.name1), len(self.name2))
        self.assertEqual(('Schema diffs:\n'
                          '  table with differences: xtable\n'
                          '    column with differences: data\n'
                          '      %*s: %r\n'
                          '      %*s: %r') % (
                label_width, self.name1, cd.col_A,
                label_width, self.name2, cd.col_B
                ), str(diff))


class Test_getDiffOfModelAgainstDatabase(SchemaDiffBase):

    name1 = 'model'
    name2 = 'database'

    def _run_diff(self, **kw):
        return schemadiff.getDiffOfModelAgainstDatabase(
            self.meta, self.engine, **kw
            )

    @fixture.usedb()
    def test_table_missing_in_db(self):
        self._make_table(create=False)
        diff = self._run_diff()
        self.assertTrue(diff)
        self.assertEqual('Schema diffs:\n  tables missing from %s: xtable'
                         % self.name2,
                         str(diff))

    @fixture.usedb()
    def test_table_missing_in_model(self):
        self._make_table()
        self.meta.clear()
        diff = self._run_diff()
        self.assertTrue(diff)
        self.assertEqual('Schema diffs:\n  tables missing from %s: xtable'
                         % self.name1,
                         str(diff))

    @fixture.usedb()
    def test_column_missing_in_db(self):
        # db
        Table('xtable', self.meta,
              Column('id', Integer(), primary_key=True),
              ).create()
        self.meta.clear()
        # model
        self._make_table(
            Column('xcol', Integer()),
            create=False
            )
        # run diff
        diff = self._run_diff()
        self.assertTrue(diff)
        self.assertEqual('Schema diffs:\n'
                         '  table with differences: xtable\n'
                         '    %s missing these columns: xcol'
                         % self.name2,
                         str(diff))

    @fixture.usedb()
    def test_column_missing_in_model(self):
        # db
        self._make_table(
            Column('xcol', Integer()),
            )
        self.meta.clear()
        # model
        self._make_table(
            create=False
            )
        # run diff
        diff = self._run_diff()
        self.assertTrue(diff)
        self.assertEqual('Schema diffs:\n'
                         '  table with differences: xtable\n'
                         '    %s missing these columns: xcol'
                         % self.name1,
                         str(diff))

    @fixture.usedb()
    def test_exclude_tables(self):
        # db
        Table('ytable', self.meta,
              Column('id', Integer(), primary_key=True),
              ).create()
        Table('ztable', self.meta,
              Column('id', Integer(), primary_key=True),
              ).create()
        self.meta.clear()
        # model
        self._make_table(
            create=False
            )
        Table('ztable', self.meta,
              Column('id', Integer(), primary_key=True),
              )
        # run diff
        diff = self._run_diff(excludeTables=('xtable', 'ytable'))
        # ytable only in database
        # xtable only in model
        # ztable identical on both
        # ...so we expect no diff!
        self.assertFalse(diff)
        self.assertEqual('No schema diffs', str(diff))

    @fixture.usedb()
    def test_identical_just_pk(self):
        self._make_table()
        diff = self._run_diff()
        self.assertFalse(diff)
        self.assertEqual('No schema diffs', str(diff))

    @fixture.usedb()
    def test_different_type(self):
        self._assert_diff(
            Column('data', String(10)),
            Column('data', Integer()),
            )

    @fixture.usedb()
    def test_int_vs_float(self):
        self._assert_diff(
            Column('data', Integer()),
            Column('data', Float()),
            )

    # NOTE(mriedem): The ibm_db_sa driver handles the Float() as a DOUBLE()
    # which extends Numeric() but isn't defined in sqlalchemy.types, so we
    # can't check for it as a special case like is done in schemadiff.ColDiff.
@fixture.usedb(not_supported='ibm_db_sa') def test_float_vs_numeric(self): self._assert_diff( Column('data', Float()), Column('data', Numeric()), ) @fixture.usedb() def test_numeric_precision(self): self._assert_diff( Column('data', Numeric(precision=5)), Column('data', Numeric(precision=6)), ) @fixture.usedb() def test_numeric_scale(self): self._assert_diff( Column('data', Numeric(precision=6,scale=0)), Column('data', Numeric(precision=6,scale=1)), ) @fixture.usedb() def test_string_length(self): self._assert_diff( Column('data', String(10)), Column('data', String(20)), ) @fixture.usedb() def test_integer_identical(self): self._make_table( Column('data', Integer()), ) diff = self._run_diff() self.assertEqual('No schema diffs',str(diff)) self.assertFalse(diff) @fixture.usedb() def test_string_identical(self): self._make_table( Column('data', String(10)), ) diff = self._run_diff() self.assertEqual('No schema diffs',str(diff)) self.assertFalse(diff) @fixture.usedb() def test_text_identical(self): self._make_table( Column('data', Text), ) diff = self._run_diff() self.assertEqual('No schema diffs',str(diff)) self.assertFalse(diff) class Test_getDiffOfModelAgainstModel(Test_getDiffOfModelAgainstDatabase): name1 = 'metadataA' name2 = 'metadataB' def _run_diff(self,**kw): db_meta= MetaData() db_meta.reflect(self.engine) return schemadiff.getDiffOfModelAgainstModel( self.meta, db_meta, **kw ) sqlalchemy-migrate-0.13.0/migrate/tests/versioning/test_pathed.py0000664000175000017500000000327713553670475025270 0ustar zuulzuul00000000000000from migrate.tests import fixture from migrate.versioning.pathed import * class TestPathed(fixture.Base): def test_parent_path(self): """Default parent_path should behave correctly""" filepath='/fgsfds/moot.py' dirpath='/fgsfds/moot' sdirpath='/fgsfds/moot/' result='/fgsfds' self.assertTrue(result==Pathed._parent_path(filepath)) self.assertTrue(result==Pathed._parent_path(dirpath)) self.assertTrue(result==Pathed._parent_path(sdirpath)) def 
test_new(self): """Pathed(path) shouldn't create duplicate objects of the same path""" path='/fgsfds' class Test(Pathed): attr=None o1=Test(path) o2=Test(path) self.assertTrue(isinstance(o1,Test)) self.assertTrue(o1.path==path) self.assertTrue(o1 is o2) o1.attr='herring' self.assertTrue(o2.attr=='herring') o2.attr='shrubbery' self.assertTrue(o1.attr=='shrubbery') def test_parent(self): """Parents should be fetched correctly""" class Parent(Pathed): parent=None children=0 def _init_child(self,child,path): """Keep a tally of children. (A real class might do something more interesting here) """ self.__class__.children+=1 class Child(Pathed): parent=Parent path='/fgsfds/moot.py' parent_path='/fgsfds' object=Child(path) self.assertTrue(isinstance(object,Child)) self.assertTrue(isinstance(object.parent,Parent)) self.assertTrue(object.path==path) self.assertTrue(object.parent.path==parent_path) sqlalchemy-migrate-0.13.0/migrate/tests/versioning/__init__.py0000664000175000017500000000000013553670475024500 0ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/tests/versioning/test_util.py0000664000175000017500000001023013553670475024763 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- import os from sqlalchemy import * from migrate.exceptions import MigrateDeprecationWarning from migrate.tests import fixture from migrate.tests.fixture.warnings import catch_warnings from migrate.versioning.util import * from migrate.versioning import api import warnings class TestUtil(fixture.Pathed): def test_construct_engine(self): """Construct engine the smart way""" url = 'sqlite://' engine = construct_engine(url) self.assertTrue(engine.name == 'sqlite') # keyword arg engine = construct_engine(url, engine_arg_encoding='utf-8') self.assertEqual(engine.dialect.encoding, 'utf-8') # dict engine = construct_engine(url, engine_dict={'encoding': 'utf-8'}) self.assertEqual(engine.dialect.encoding, 'utf-8') # engine parameter engine_orig = 
create_engine('sqlite://') engine = construct_engine(engine_orig) self.assertEqual(engine, engine_orig) # test precedance engine = construct_engine(url, engine_dict={'encoding': 'iso-8859-1'}, engine_arg_encoding='utf-8') self.assertEqual(engine.dialect.encoding, 'utf-8') # deprecated echo=True parameter try: # py 2.4 compatibility :-/ cw = catch_warnings(record=True) w = cw.__enter__() warnings.simplefilter("always") engine = construct_engine(url, echo='True') self.assertTrue(engine.echo) self.assertEqual(len(w),1) self.assertTrue(issubclass(w[-1].category, MigrateDeprecationWarning)) self.assertEqual( 'echo=True parameter is deprecated, pass ' 'engine_arg_echo=True or engine_dict={"echo": True}', str(w[-1].message)) finally: cw.__exit__() # unsupported argument self.assertRaises(ValueError, construct_engine, 1) def test_passing_engine(self): repo = self.tmp_repos() api.create(repo, 'temp') api.script('First Version', repo) engine = construct_engine('sqlite:///:memory:') api.version_control(engine, repo) api.upgrade(engine, repo) def test_asbool(self): """test asbool parsing""" result = asbool(True) self.assertEqual(result, True) result = asbool(False) self.assertEqual(result, False) result = asbool('y') self.assertEqual(result, True) result = asbool('n') self.assertEqual(result, False) self.assertRaises(ValueError, asbool, 'test') self.assertRaises(ValueError, asbool, object) def test_load_model(self): """load model from dotted name""" model_path = os.path.join(self.temp_usable_dir, 'test_load_model.py') f = open(model_path, 'w') f.write("class FakeFloat(int): pass") f.close() try: # py 2.4 compatibility :-/ cw = catch_warnings(record=True) w = cw.__enter__() warnings.simplefilter("always") # deprecated spelling FakeFloat = load_model('test_load_model.FakeFloat') self.assertTrue(isinstance(FakeFloat(), int)) self.assertEqual(len(w),1) self.assertTrue(issubclass(w[-1].category, MigrateDeprecationWarning)) self.assertEqual( 'model should be in form of 
module.model:User ' 'and not module.model.User', str(w[-1].message)) finally: cw.__exit__() FakeFloat = load_model('test_load_model:FakeFloat') self.assertTrue(isinstance(FakeFloat(), int)) FakeFloat = load_model(FakeFloat) self.assertTrue(isinstance(FakeFloat(), int)) def test_guess_obj_type(self): """guess object type from string""" result = guess_obj_type('7') self.assertEqual(result, 7) result = guess_obj_type('y') self.assertEqual(result, True) result = guess_obj_type('test') self.assertEqual(result, 'test') sqlalchemy-migrate-0.13.0/migrate/tests/versioning/test_repository.py0000664000175000017500000002007513553670475026235 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- import os import shutil from migrate import exceptions from migrate.versioning.repository import * from migrate.versioning.script import * from migrate.tests import fixture from datetime import datetime class TestRepository(fixture.Pathed): def test_create(self): """Repositories are created successfully""" path = self.tmp_repos() name = 'repository_name' # Creating a repository that doesn't exist should succeed repo = Repository.create(path, name) config_path = repo.config.path manage_path = os.path.join(repo.path, 'manage.py') self.assertTrue(repo) # Files should actually be created self.assertTrue(os.path.exists(path)) self.assertTrue(os.path.exists(config_path)) self.assertTrue(os.path.exists(manage_path)) # Can't create it again: it already exists self.assertRaises(exceptions.PathFoundError, Repository.create, path, name) return path def test_load(self): """We should be able to load information about an existing repository""" # Create a repository to load path = self.test_create() repos = Repository(path) self.assertTrue(repos) self.assertTrue(repos.config) self.assertTrue(repos.config.get('db_settings', 'version_table')) # version_table's default isn't none self.assertNotEqual(repos.config.get('db_settings', 'version_table'), 'None') def 
test_load_notfound(self): """Nonexistant repositories shouldn't be loaded""" path = self.tmp_repos() self.assertTrue(not os.path.exists(path)) self.assertRaises(exceptions.InvalidRepositoryError, Repository, path) def test_load_invalid(self): """Invalid repos shouldn't be loaded""" # Here, invalid=empty directory. There may be other conditions too, # but we shouldn't need to test all of them path = self.tmp_repos() os.mkdir(path) self.assertRaises(exceptions.InvalidRepositoryError, Repository, path) class TestVersionedRepository(fixture.Pathed): """Tests on an existing repository with a single python script""" def setUp(self): super(TestVersionedRepository, self).setUp() Repository.clear() self.path_repos = self.tmp_repos() Repository.create(self.path_repos, 'repository_name') def test_version(self): """We should correctly detect the version of a repository""" repos = Repository(self.path_repos) # Get latest version, or detect if a specified version exists self.assertEqual(repos.latest, 0) # repos.latest isn't an integer, but a VerNum # (so we can't just assume the following tests are correct) self.assertTrue(repos.latest >= 0) self.assertTrue(repos.latest < 1) # Create a script and test again repos.create_script('') self.assertEqual(repos.latest, 1) self.assertTrue(repos.latest >= 0) self.assertTrue(repos.latest >= 1) self.assertTrue(repos.latest < 2) # Create a new script and test again repos.create_script('') self.assertEqual(repos.latest, 2) self.assertTrue(repos.latest >= 0) self.assertTrue(repos.latest >= 1) self.assertTrue(repos.latest >= 2) self.assertTrue(repos.latest < 3) def test_timestmap_numbering_version(self): repos = Repository(self.path_repos) repos.config.set('db_settings', 'use_timestamp_numbering', 'True') # Get latest version, or detect if a specified version exists self.assertEqual(repos.latest, 0) # repos.latest isn't an integer, but a VerNum # (so we can't just assume the following tests are correct) self.assertTrue(repos.latest >= 0) 
self.assertTrue(repos.latest < 1) # Create a script and test again now = int(datetime.utcnow().strftime('%Y%m%d%H%M%S')) repos.create_script('') self.assertEqual(repos.latest, now) def test_source(self): """Get a script object by version number and view its source""" # Load repository and commit script repo = Repository(self.path_repos) repo.create_script('') repo.create_script_sql('postgres', 'foo bar') # Source is valid: script must have an upgrade function # (not a very thorough test, but should be plenty) source = repo.version(1).script().source() self.assertTrue(source.find('def upgrade') >= 0) source = repo.version(2).script('postgres', 'upgrade').source() self.assertEqual(source.strip(), '') def test_latestversion(self): """Repository.version() (no params) returns the latest version""" repos = Repository(self.path_repos) repos.create_script('') self.assertTrue(repos.version(repos.latest) is repos.version()) self.assertTrue(repos.version() is not None) def test_changeset(self): """Repositories can create changesets properly""" # Create a nonzero-version repository of empty scripts repos = Repository(self.path_repos) for i in range(10): repos.create_script('') def check_changeset(params, length): """Creates and verifies a changeset""" changeset = repos.changeset('postgres', *params) self.assertEqual(len(changeset), length) self.assertTrue(isinstance(changeset, Changeset)) uniq = list() # Changesets are iterable for version, change in changeset: self.assertTrue(isinstance(change, BaseScript)) # Changes aren't identical self.assertTrue(id(change) not in uniq) uniq.append(id(change)) return changeset # Upgrade to a specified version...
cs = check_changeset((0, 10), 10) self.assertEqual(cs.keys().pop(0), 0) # 0 -> 1: index is starting version self.assertEqual(cs.keys().pop(), 9) # 9 -> 10: index is starting version self.assertEqual(cs.start, 0) # starting version self.assertEqual(cs.end, 10) # ending version check_changeset((0, 1), 1) check_changeset((0, 5), 5) check_changeset((0, 0), 0) check_changeset((5, 5), 0) check_changeset((10, 10), 0) check_changeset((5, 10), 5) # Can't request a changeset of higher version than this repository self.assertRaises(Exception, repos.changeset, 'postgres', 5, 11) self.assertRaises(Exception, repos.changeset, 'postgres', -1, 5) # Upgrade to the latest version... cs = check_changeset((0,), 10) self.assertEqual(cs.keys().pop(0), 0) self.assertEqual(cs.keys().pop(), 9) self.assertEqual(cs.start, 0) self.assertEqual(cs.end, 10) check_changeset((1,), 9) check_changeset((5,), 5) check_changeset((9,), 1) check_changeset((10,), 0) # run changes cs.run('postgres', 'upgrade') # Can't request a changeset of higher/lower version than this repository self.assertRaises(Exception, repos.changeset, 'postgres', 11) self.assertRaises(Exception, repos.changeset, 'postgres', -1) # Downgrade cs = check_changeset((10, 0), 10) self.assertEqual(cs.keys().pop(0), 10) # 10 -> 9 self.assertEqual(cs.keys().pop(), 1) # 1 -> 0 self.assertEqual(cs.start, 10) self.assertEqual(cs.end, 0) check_changeset((10, 5), 5) check_changeset((5, 0), 5) def test_many_versions(self): """Test what happens when lots of versions are created""" repos = Repository(self.path_repos) for i in range(1001): repos.create_script('') # since we normally create 3 digit ones, let's see if we blow up self.assertTrue(os.path.exists('%s/versions/1000.py' % self.path_repos)) self.assertTrue(os.path.exists('%s/versions/1001.py' % self.path_repos)) # TODO: test manage file # TODO: test changeset sqlalchemy-migrate-0.13.0/migrate/tests/versioning/test_database.py0000664000175000017500000000057013553670475025560 0ustar
zuulzuul00000000000000from sqlalchemy import select, text from migrate.tests import fixture class TestConnect(fixture.DB): level=fixture.DB.TXN @fixture.usedb() def test_connect(self): """Connect to the database successfully""" # Connection is done in fixture.DB setup; make sure we can do stuff self.engine.execute( select([text('42')]) ) sqlalchemy-migrate-0.13.0/migrate/tests/__init__.py0000664000175000017500000000070713553670475022333 0ustar zuulzuul00000000000000# make this package available during imports as long as we support 0) sqlalchemy-migrate-0.13.0/migrate/tests/changeset/0000775000175000017500000000000013553670602022147 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/tests/changeset/databases/0000775000175000017500000000000013553670602024076 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/tests/changeset/databases/__init__.py0000664000175000017500000000000013553670475026205 0ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/tests/changeset/databases/test_ibmdb2.py0000664000175000017500000000160613553670475026661 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- import mock import six from migrate.changeset.databases import ibmdb2 from migrate.tests import fixture class TestIBMDBDialect(fixture.Base): """ Test class for ibmdb2 dialect unit tests which do not require a live backend database connection. 
""" def test_is_unique_constraint_with_null_cols_supported(self): test_values = { '10.1': False, '10.4.99': False, '10.5': True, '10.5.1': True } for version, supported in six.iteritems(test_values): mock_dialect = mock.MagicMock() mock_dialect.dbms_ver = version self.assertEqual( supported, ibmdb2.is_unique_constraint_with_null_columns_supported( mock_dialect), 'Assertion failed on version: %s' % version) sqlalchemy-migrate-0.13.0/migrate/tests/changeset/test_changeset.py0000664000175000017500000011050713553670475025535 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- import sqlalchemy import warnings from sqlalchemy import * from migrate import changeset, exceptions from migrate.changeset import * from migrate.changeset import constraint from migrate.changeset.schema import ColumnDelta from migrate.tests import fixture from migrate.tests.fixture.warnings import catch_warnings import six class TestAddDropColumn(fixture.DB): """Test add/drop column through all possible interfaces also test for constraints """ level = fixture.DB.CONNECT table_name = 'tmp_adddropcol' table_name_idx = 'tmp_adddropcol_idx' table_int = 0 def _setup(self, url): super(TestAddDropColumn, self)._setup(url) self.meta = MetaData() self.table = Table(self.table_name, self.meta, Column('id', Integer, unique=True), ) self.table_idx = Table( self.table_name_idx, self.meta, Column('id', Integer, primary_key=True), Column('a', Integer), Column('b', Integer), Index('test_idx', 'a', 'b') ) self.meta.bind = self.engine if self.engine.has_table(self.table.name): self.table.drop() if self.engine.has_table(self.table_idx.name): self.table_idx.drop() self.table.create() self.table_idx.create() def _teardown(self): if self.engine.has_table(self.table.name): self.table.drop() if self.engine.has_table(self.table_idx.name): self.table_idx.drop() self.meta.clear() super(TestAddDropColumn,self)._teardown() def run_(self, create_column_func, drop_column_func, *col_p, **col_k): col_name 
= 'data' def assert_numcols(num_of_expected_cols): # number of cols should be correct in table object and in database self.refresh_table(self.table_name) result = len(self.table.c) self.assertEqual(result, num_of_expected_cols) if col_k.get('primary_key', None): # new primary key: check its length too result = len(self.table.primary_key) self.assertEqual(result, num_of_expected_cols) # we have 1 column and there is no data column assert_numcols(1) self.assertTrue(getattr(self.table.c, 'data', None) is None) if len(col_p) == 0: col_p = [String(40)] col = Column(col_name, *col_p, **col_k) create_column_func(col) assert_numcols(2) # data column exists self.assertEqual(self.table.c.data.type.length, 40) col2 = self.table.c.data drop_column_func(col2) assert_numcols(1) @fixture.usedb() def test_undefined(self): """Add/drop columns not yet defined in the table""" def add_func(col): return create_column(col, self.table) def drop_func(col): return drop_column(col, self.table) return self.run_(add_func, drop_func) @fixture.usedb() def test_defined(self): """Add/drop columns already defined in the table""" def add_func(col): self.meta.clear() self.table = Table(self.table_name, self.meta, Column('id', Integer, primary_key=True), col, ) return create_column(col) def drop_func(col): return drop_column(col) return self.run_(add_func, drop_func) @fixture.usedb() def test_method_bound(self): """Add/drop columns via column methods; columns bound to a table, i.e.
no table parameter passed to function """ def add_func(col): self.assertTrue(col.table is None, col.table) self.table.append_column(col) return col.create() def drop_func(col): #self.assertTrue(col.table is None,col.table) #self.table.append_column(col) return col.drop() return self.run_(add_func, drop_func) @fixture.usedb() def test_method_notbound(self): """Add/drop columns via column methods; columns not bound to a table""" def add_func(col): return col.create(self.table) def drop_func(col): return col.drop(self.table) return self.run_(add_func, drop_func) @fixture.usedb() def test_tablemethod_obj(self): """Add/drop columns via table methods; by column object""" def add_func(col): return self.table.create_column(col) def drop_func(col): return self.table.drop_column(col) return self.run_(add_func, drop_func) @fixture.usedb() def test_tablemethod_name(self): """Add/drop columns via table methods; by column name""" def add_func(col): # must be bound to table self.table.append_column(col) return self.table.create_column(col.name) def drop_func(col): # Not necessarily bound to table return self.table.drop_column(col.name) return self.run_(add_func, drop_func) @fixture.usedb() def test_byname(self): """Add/drop columns via functions; by table object and column name""" def add_func(col): self.table.append_column(col) return create_column(col.name, self.table) def drop_func(col): return drop_column(col.name, self.table) return self.run_(add_func, drop_func) @fixture.usedb() def test_drop_column_not_in_table(self): """Drop column by name""" def add_func(col): return self.table.create_column(col) def drop_func(col): if SQLA_07: self.table._columns.remove(col) else: self.table.c.remove(col) return self.table.drop_column(col.name) self.run_(add_func, drop_func) @fixture.usedb() def test_fk(self): """Can create columns with foreign keys""" # create FK's target reftable = Table('tmp_ref', self.meta, Column('id', Integer, primary_key=True), ) if 
self.engine.has_table(reftable.name): reftable.drop() reftable.create() # create column with fk col = Column('data', Integer, ForeignKey(reftable.c.id, name='testfk')) col.create(self.table) # check if constraint is added for cons in self.table.constraints: if isinstance(cons, sqlalchemy.schema.ForeignKeyConstraint): break else: self.fail('No constraint found') # TODO: test on db level if constraints work if SQLA_07: self.assertEqual(reftable.c.id.name, list(col.foreign_keys)[0].column.name) else: self.assertEqual(reftable.c.id.name, col.foreign_keys[0].column.name) if self.engine.name == 'mysql': constraint.ForeignKeyConstraint([self.table.c.data], [reftable.c.id], name='testfk').drop() col.drop(self.table) if self.engine.has_table(reftable.name): reftable.drop() @fixture.usedb(not_supported='sqlite') def test_pk(self): """Can create columns with primary key""" col = Column('data', Integer, nullable=False) self.assertRaises(exceptions.InvalidConstraintError, col.create, self.table, primary_key_name=True) col.create(self.table, primary_key_name='data_pkey') # check if constraint was added (cannot test on objects) self.table.insert(values={'data': 4}).execute() try: self.table.insert(values={'data': 4}).execute() except (sqlalchemy.exc.IntegrityError, sqlalchemy.exc.ProgrammingError): pass else: self.fail() col.drop() @fixture.usedb(not_supported=['mysql']) def test_check(self): """Can create columns with check constraint""" col = Column('foo', Integer, sqlalchemy.schema.CheckConstraint('foo > 4')) col.create(self.table) # check if constraint was added (cannot test on objects) self.table.insert(values={'foo': 5}).execute() try: self.table.insert(values={'foo': 3}).execute() except (sqlalchemy.exc.IntegrityError, sqlalchemy.exc.ProgrammingError): pass else: self.fail() col.drop() @fixture.usedb() def test_unique_constraint(self): self.assertRaises(exceptions.InvalidConstraintError, Column('data', Integer, unique=True).create, self.table) col = Column('data', Integer) 
col.create(self.table, unique_name='data_unique') # check if constraint was added (cannot test on objects) self.table.insert(values={'data': 5}).execute() try: self.table.insert(values={'data': 5}).execute() except (sqlalchemy.exc.IntegrityError, sqlalchemy.exc.ProgrammingError): pass else: self.fail() col.drop(self.table) # TODO: remove already attached columns with uniques, pks, fks .. @fixture.usedb(not_supported=['ibm_db_sa', 'postgresql']) def test_drop_column_of_composite_index(self): # NOTE(rpodolyaka): postgresql automatically drops a composite index # if one of its columns is dropped # NOTE(mriedem): DB2 does the same. self.table_idx.c.b.drop() reflected = Table(self.table_idx.name, MetaData(), autoload=True, autoload_with=self.engine) index = next(iter(reflected.indexes)) self.assertEqual(['a'], [c.name for c in index.columns]) @fixture.usedb() def test_drop_all_columns_of_composite_index(self): self.table_idx.c.a.drop() self.table_idx.c.b.drop() reflected = Table(self.table_idx.name, MetaData(), autoload=True, autoload_with=self.engine) self.assertEqual(0, len(reflected.indexes)) def _check_index(self, expected): if 'mysql' in self.engine.name or 'postgres' in self.engine.name: for index in tuple( Table(self.table.name, MetaData(), autoload=True, autoload_with=self.engine).indexes ): if index.name == 'ix_data': break self.assertEqual(expected, index.unique) @fixture.usedb() def test_index(self): col = Column('data', Integer) col.create(self.table, index_name='ix_data') self._check_index(False) col.drop() @fixture.usedb() def test_index_unique(self): # shows how to create a unique index col = Column('data', Integer) col.create(self.table) Index('ix_data', col, unique=True).create(bind=self.engine) # check if index was added self.table.insert(values={'data': 5}).execute() try: self.table.insert(values={'data': 5}).execute() except (sqlalchemy.exc.IntegrityError, sqlalchemy.exc.ProgrammingError): pass else: self.fail() self._check_index(True) col.drop()
@fixture.usedb() def test_server_defaults(self): """Can create columns with server_default values""" col = Column('data', String(244), server_default='foobar') col.create(self.table) self.table.insert(values={'id': 10}).execute() row = self._select_row() self.assertEqual(u'foobar', row['data']) col.drop() @fixture.usedb() def test_populate_default(self): """Test populate_default=True""" def default(): return 'foobar' col = Column('data', String(244), default=default) col.create(self.table, populate_default=True) self.table.insert(values={'id': 10}).execute() row = self._select_row() self.assertEqual(u'foobar', row['data']) col.drop() # TODO: test sequence # TODO: test quoting # TODO: test non-autoname constraints @fixture.usedb() def test_drop_doesnt_delete_other_indexes(self): # add two indexed columns self.table.drop() self.meta.clear() self.table = Table( self.table_name, self.meta, Column('id', Integer, primary_key=True), Column('d1', String(10), index=True), Column('d2', String(10), index=True), ) self.table.create() # paranoid check self.refresh_table() self.assertEqual( sorted([i.name for i in self.table.indexes]), [u'ix_tmp_adddropcol_d1', u'ix_tmp_adddropcol_d2'] ) # delete one self.table.c.d2.drop() # ensure the other index is still there self.refresh_table() self.assertEqual( sorted([i.name for i in self.table.indexes]), [u'ix_tmp_adddropcol_d1'] ) def _actual_foreign_keys(self): from sqlalchemy.schema import ForeignKeyConstraint result = [] for cons in self.table.constraints: if isinstance(cons,ForeignKeyConstraint): col_names = [] for col_name in cons.columns: if not isinstance(col_name,six.string_types): col_name = col_name.name col_names.append(col_name) result.append(col_names) result.sort() return result @fixture.usedb() def test_drop_with_foreign_keys(self): self.table.drop() self.meta.clear() # create FK's target reftable = Table('tmp_ref', self.meta, Column('id', Integer, primary_key=True), ) if self.engine.has_table(reftable.name): 
reftable.drop() reftable.create() # add a table with two foreign key columns self.table = Table( self.table_name, self.meta, Column('id', Integer, primary_key=True), Column('r1', Integer, ForeignKey('tmp_ref.id', name='test_fk1')), Column('r2', Integer, ForeignKey('tmp_ref.id', name='test_fk2')), ) self.table.create() # paranoid check self.assertEqual([['r1'],['r2']], self._actual_foreign_keys()) # delete one if self.engine.name == 'mysql': constraint.ForeignKeyConstraint([self.table.c.r2], [reftable.c.id], name='test_fk2').drop() self.table.c.r2.drop() # check remaining foreign key is there self.assertEqual([['r1']], self._actual_foreign_keys()) @fixture.usedb() def test_drop_with_complex_foreign_keys(self): from sqlalchemy.schema import ForeignKeyConstraint from sqlalchemy.schema import UniqueConstraint self.table.drop() self.meta.clear() # NOTE(mriedem): DB2 does not currently support unique constraints # on nullable columns, so the columns that are used to create the # foreign keys here need to be non-nullable for testing with DB2 # to work. 
# create FK's target reftable = Table('tmp_ref', self.meta, Column('id', Integer, primary_key=True), Column('jd', Integer, nullable=False), UniqueConstraint('id','jd') ) if self.engine.has_table(reftable.name): reftable.drop() reftable.create() # add a table with a complex foreign key constraint self.table = Table( self.table_name, self.meta, Column('id', Integer, primary_key=True), Column('r1', Integer, nullable=False), Column('r2', Integer, nullable=False), ForeignKeyConstraint(['r1','r2'], [reftable.c.id,reftable.c.jd], name='test_fk') ) self.table.create() # paranoid check self.assertEqual([['r1','r2']], self._actual_foreign_keys()) # delete one if self.engine.name == 'mysql': constraint.ForeignKeyConstraint([self.table.c.r1, self.table.c.r2], [reftable.c.id, reftable.c.jd], name='test_fk').drop() self.table.c.r2.drop() # check the constraint is gone, since part of it # is no longer there - if people hit this, # they may be confused, maybe we should raise an error # and insist that the constraint is deleted first, separately? 
self.assertEqual([], self._actual_foreign_keys()) class TestRename(fixture.DB): """Tests for table and index rename methods""" level = fixture.DB.CONNECT meta = MetaData() def _setup(self, url): super(TestRename, self)._setup(url) self.meta.bind = self.engine @fixture.usedb(not_supported='firebird') def test_rename_table(self): """Tables can be renamed""" c_name = 'col_1' table_name1 = 'name_one' table_name2 = 'name_two' index_name1 = 'x' + table_name1 index_name2 = 'x' + table_name2 self.meta.clear() self.column = Column(c_name, Integer) self.table = Table(table_name1, self.meta, self.column) self.index = Index(index_name1, self.column, unique=False) if self.engine.has_table(self.table.name): self.table.drop() if self.engine.has_table(table_name2): tmp = Table(table_name2, self.meta, autoload=True) tmp.drop() tmp.deregister() del tmp self.table.create() def assert_table_name(expected, skip_object_check=False): """Refresh a table via autoload SA has changed some since this test was written; we now need to do meta.clear() upon reloading a table - clear all rather than a select few. So, this works only if we're working with one table at a time (else, others will vanish too). 
""" if not skip_object_check: # Table object check self.assertEqual(self.table.name,expected) newname = self.table.name else: # we know the object's name isn't consistent: just assign it newname = expected # Table DB check self.meta.clear() self.table = Table(newname, self.meta, autoload=True) self.assertEqual(self.table.name, expected) def assert_index_name(expected, skip_object_check=False): if not skip_object_check: # Index object check self.assertEqual(self.index.name, expected) else: # object is inconsistent self.index.name = expected # TODO: Index DB check def add_table_to_meta(name): # trigger the case where table_name2 needs to be # removed from the metadata in ChangesetTable.deregister() tmp = Table(name, self.meta, Column(c_name, Integer)) tmp.create() tmp.drop() try: # Table renames assert_table_name(table_name1) add_table_to_meta(table_name2) rename_table(self.table, table_name2) assert_table_name(table_name2) self.table.rename(table_name1) assert_table_name(table_name1) # test by just the string rename_table(table_name1, table_name2, engine=self.engine) assert_table_name(table_name2, True) # object not updated # Index renames if self.url.startswith('sqlite') or self.url.startswith('mysql'): self.assertRaises(exceptions.NotSupportedError, self.index.rename, index_name2) else: assert_index_name(index_name1) rename_index(self.index, index_name2, engine=self.engine) assert_index_name(index_name2) self.index.rename(index_name1) assert_index_name(index_name1) # test by just the string rename_index(index_name1, index_name2, engine=self.engine) assert_index_name(index_name2, True) finally: if self.table.exists(): self.table.drop() class TestColumnChange(fixture.DB): level = fixture.DB.CONNECT table_name = 'tmp_colchange' def _setup(self, url): super(TestColumnChange, self)._setup(url) self.meta = MetaData(self.engine) self.table = Table(self.table_name, self.meta, Column('id', Integer, primary_key=True), Column('data', String(40), 
server_default=DefaultClause("tluafed"), nullable=True), ) if self.table.exists(): self.table.drop() try: self.table.create() except sqlalchemy.exc.SQLError: # SQLite: database schema has changed if not self.url.startswith('sqlite://'): raise def _teardown(self): if self.table.exists(): try: self.table.drop(self.engine) except sqlalchemy.exc.SQLError: # SQLite: database schema has changed if not self.url.startswith('sqlite://'): raise super(TestColumnChange, self)._teardown() @fixture.usedb() def test_rename(self): """Can rename a column""" def num_rows(col, content): return len(list(self.table.select(col == content).execute())) # Table content should be preserved in changed columns content = "fgsfds" self.engine.execute(self.table.insert(), data=content, id=42) self.assertEqual(num_rows(self.table.c.data, content), 1) # ...as a function, given a column object and the new name alter_column('data', name='data2', table=self.table) self.refresh_table() alter_column(self.table.c.data2, name='atad') self.refresh_table(self.table.name) self.assertTrue('data' not in self.table.c.keys()) self.assertTrue('atad' in self.table.c.keys()) self.assertEqual(num_rows(self.table.c.atad, content), 1) # ...as a method, given a new name self.table.c.atad.alter(name='data') self.refresh_table(self.table.name) self.assertTrue('atad' not in self.table.c.keys()) self.table.c.data # Should not raise exception self.assertEqual(num_rows(self.table.c.data, content), 1) # ...as a function, given a new object alter_column(self.table.c.data, name = 'atad', type=String(40), server_default=self.table.c.data.server_default) self.refresh_table(self.table.name) self.assertTrue('data' not in self.table.c.keys()) self.table.c.atad # Should not raise exception self.assertEqual(num_rows(self.table.c.atad, content), 1) # ...as a method, given a new object self.table.c.atad.alter( name='data',type=String(40), server_default=self.table.c.atad.server_default ) self.refresh_table(self.table.name) 
self.assertTrue('atad' not in self.table.c.keys()) self.table.c.data # Should not raise exception self.assertEqual(num_rows(self.table.c.data,content), 1) @fixture.usedb() def test_type(self): # Test we can change a column's type # Just the new type self.table.c.data.alter(type=String(43)) self.refresh_table(self.table.name) self.assertTrue(isinstance(self.table.c.data.type, String)) self.assertEqual(self.table.c.data.type.length, 43) # Different type self.assertTrue(isinstance(self.table.c.id.type, Integer)) self.assertEqual(self.table.c.id.nullable, False) # SQLAlchemy 1.1 adds a third state to "autoincrement" called # "auto". self.assertTrue(self.table.c.id.autoincrement in ('auto', True)) if not self.engine.name == 'firebird': self.table.c.id.alter(type=String(20)) self.assertEqual(self.table.c.id.nullable, False) # a rule makes sure that autoincrement is set to False # when we change off of Integer self.assertEqual(self.table.c.id.autoincrement, False) self.refresh_table(self.table.name) self.assertTrue(isinstance(self.table.c.id.type, String)) # note that after reflection, "autoincrement" is likely # to change back to a database-generated value. Should be # False or "auto". 
if True, it's a bug; at least one of these # exists prior to SQLAlchemy 1.1.3 @fixture.usedb() def test_default(self): """Can change a column's server_default value (DefaultClauses only) Only DefaultClauses are changed here: others are managed by the application / by SA """ self.assertEqual(self.table.c.data.server_default.arg, 'tluafed') # Just the new default default = 'my_default' self.table.c.data.alter(server_default=DefaultClause(default)) self.refresh_table(self.table.name) #self.assertEqual(self.table.c.data.server_default.arg,default) # TextClause returned by autoload self.assertTrue(default in str(self.table.c.data.server_default.arg)) self.engine.execute(self.table.insert(), id=12) row = self._select_row() self.assertEqual(row['data'], default) # Column object default = 'your_default' self.table.c.data.alter(type=String(40), server_default=DefaultClause(default)) self.refresh_table(self.table.name) self.assertTrue(default in str(self.table.c.data.server_default.arg)) # Drop/remove default self.table.c.data.alter(server_default=None) self.assertEqual(self.table.c.data.server_default, None) self.refresh_table(self.table.name) # server_default isn't necessarily None for Oracle #self.assertTrue(self.table.c.data.server_default is None,self.table.c.data.server_default) self.engine.execute(self.table.insert(), id=11) row = self.table.select(self.table.c.id == 11).execution_options(autocommit=True).execute().fetchone() self.assertTrue(row['data'] is None, row['data']) @fixture.usedb(not_supported='firebird') def test_null(self): """Can change a column's null constraint""" self.assertEqual(self.table.c.data.nullable, True) # Full column self.table.c.data.alter(type=String(40), nullable=False) self.table.nullable = None self.refresh_table(self.table.name) self.assertEqual(self.table.c.data.nullable, False) # Just the new status self.table.c.data.alter(nullable=True) self.refresh_table(self.table.name) self.assertEqual(self.table.c.data.nullable, True) 
@fixture.usedb() def test_alter_deprecated(self): try: # py 2.4 compatibility :-/ cw = catch_warnings(record=True) w = cw.__enter__() warnings.simplefilter("always") self.table.c.data.alter(Column('data', String(100))) self.assertEqual(len(w),1) self.assertTrue(issubclass(w[-1].category, MigrateDeprecationWarning)) self.assertEqual( 'Passing a Column object to alter_column is deprecated. ' 'Just pass in keyword parameters instead.', str(w[-1].message)) finally: cw.__exit__() @fixture.usedb() def test_alter_returns_delta(self): """Test if alter constructs return delta""" delta = self.table.c.data.alter(type=String(100)) self.assertTrue('type' in delta) @fixture.usedb() def test_alter_all(self): """Tests all alter changes at one time""" # test for each db separately # since currently some dont support everything # test pre settings self.assertEqual(self.table.c.data.nullable, True) self.assertEqual(self.table.c.data.server_default.arg, 'tluafed') self.assertEqual(self.table.c.data.name, 'data') self.assertTrue(isinstance(self.table.c.data.type, String)) self.assertTrue(self.table.c.data.type.length, 40) kw = dict(nullable=False, server_default='foobar', name='data_new', type=String(50)) if self.engine.name == 'firebird': del kw['nullable'] self.table.c.data.alter(**kw) # test altered objects self.assertEqual(self.table.c.data.server_default.arg, 'foobar') if not self.engine.name == 'firebird': self.assertEqual(self.table.c.data.nullable, False) self.assertEqual(self.table.c.data.name, 'data_new') self.assertEqual(self.table.c.data.type.length, 50) self.refresh_table(self.table.name) # test post settings if not self.engine.name == 'firebird': self.assertEqual(self.table.c.data_new.nullable, False) self.assertEqual(self.table.c.data_new.name, 'data_new') self.assertTrue(isinstance(self.table.c.data_new.type, String)) self.assertTrue(self.table.c.data_new.type.length, 50) # insert data and assert default self.table.insert(values={'id': 10}).execute() row = 
self._select_row() self.assertEqual(u'foobar', row['data_new']) class TestColumnDelta(fixture.DB): """Tests ColumnDelta class""" level = fixture.DB.CONNECT table_name = 'tmp_coldelta' table_int = 0 def _setup(self, url): super(TestColumnDelta, self)._setup(url) self.meta = MetaData() self.table = Table(self.table_name, self.meta, Column('ids', String(10)), ) self.meta.bind = self.engine if self.engine.has_table(self.table.name): self.table.drop() self.table.create() def _teardown(self): if self.engine.has_table(self.table.name): self.table.drop() self.meta.clear() super(TestColumnDelta,self)._teardown() def mkcol(self, name='id', type=String, *p, **k): return Column(name, type, *p, **k) def verify(self, expected, original, *p, **k): self.delta = ColumnDelta(original, *p, **k) result = list(self.delta.keys()) result.sort() self.assertEqual(expected, result) return self.delta def test_deltas_two_columns(self): """Testing ColumnDelta with two columns""" col_orig = self.mkcol(primary_key=True) col_new = self.mkcol(name='ids', primary_key=True) self.verify([], col_orig, col_orig) self.verify(['name'], col_orig, col_orig, 'ids') self.verify(['name'], col_orig, col_orig, name='ids') self.verify(['name'], col_orig, col_new) self.verify(['name', 'type'], col_orig, col_new, type=String) # Type comparisons self.verify([], self.mkcol(type=String), self.mkcol(type=String)) self.verify(['type'], self.mkcol(type=String), self.mkcol(type=Integer)) self.verify(['type'], self.mkcol(type=String), self.mkcol(type=String(42))) self.verify([], self.mkcol(type=String(42)), self.mkcol(type=String(42))) self.verify(['type'], self.mkcol(type=String(24)), self.mkcol(type=String(42))) self.verify(['type'], self.mkcol(type=String(24)), self.mkcol(type=Text(24))) # Other comparisons self.verify(['primary_key'], self.mkcol(nullable=False), self.mkcol(primary_key=True)) # PK implies nullable=False self.verify(['nullable', 'primary_key'], self.mkcol(nullable=True), self.mkcol(primary_key=True)) 
self.verify([], self.mkcol(primary_key=True), self.mkcol(primary_key=True)) self.verify(['nullable'], self.mkcol(nullable=True), self.mkcol(nullable=False)) self.verify([], self.mkcol(nullable=True), self.mkcol(nullable=True)) self.verify([], self.mkcol(server_default=None), self.mkcol(server_default=None)) self.verify([], self.mkcol(server_default='42'), self.mkcol(server_default='42')) # test server default delta = self.verify(['server_default'], self.mkcol(), self.mkcol('id', String, DefaultClause('foobar'))) self.assertEqual(delta['server_default'].arg, 'foobar') self.verify([], self.mkcol(server_default='foobar'), self.mkcol('id', String, DefaultClause('foobar'))) self.verify(['type'], self.mkcol(server_default='foobar'), self.mkcol('id', Text, DefaultClause('foobar'))) col = self.mkcol(server_default='foobar') self.verify(['type'], col, self.mkcol('id', Text, DefaultClause('foobar')), alter_metadata=True) self.assertTrue(isinstance(col.type, Text)) col = self.mkcol() self.verify(['name', 'server_default', 'type'], col, self.mkcol('beep', Text, DefaultClause('foobar')), alter_metadata=True) self.assertTrue(isinstance(col.type, Text)) self.assertEqual(col.name, 'beep') self.assertEqual(col.server_default.arg, 'foobar') @fixture.usedb() def test_deltas_zero_columns(self): """Testing ColumnDelta with zero columns""" self.verify(['name'], 'ids', table=self.table, name='hey') # test reflection self.verify(['type'], 'ids', table=self.table.name, type=String(80), engine=self.engine) self.verify(['type'], 'ids', table=self.table.name, type=String(80), metadata=self.meta) self.meta.clear() delta = self.verify(['type'], 'ids', table=self.table.name, type=String(80), metadata=self.meta, alter_metadata=True) self.assertTrue(self.table.name in self.meta) self.assertEqual(delta.result_column.type.length, 80) self.assertEqual(self.meta.tables.get(self.table.name).c.ids.type.length, 80) # test defaults self.meta.clear() self.verify(['server_default'], 'ids', 
table=self.table.name, server_default='foobar', metadata=self.meta, alter_metadata=True) self.assertEqual(self.meta.tables.get(self.table.name).c.ids.server_default.arg, 'foobar') # test missing parameters self.assertRaises(ValueError, ColumnDelta, table=self.table.name) self.assertRaises(ValueError, ColumnDelta, 'ids', table=self.table.name, alter_metadata=True) self.assertRaises(ValueError, ColumnDelta, 'ids', table=self.table.name, alter_metadata=False) def test_deltas_one_column(self): """Testing ColumnDelta with one column""" col_orig = self.mkcol(primary_key=True) self.verify([], col_orig) self.verify(['name'], col_orig, 'ids') # Parameters are always executed, even if they're 'unchanged' # (We can't assume given column is up-to-date) self.verify(['name', 'primary_key', 'type'], col_orig, 'id', Integer, primary_key=True) self.verify(['name', 'primary_key', 'type'], col_orig, name='id', type=Integer, primary_key=True) # Change name, given an up-to-date definition and the current name delta = self.verify(['name'], col_orig, name='blah') self.assertEqual(delta.get('name'), 'blah') self.assertEqual(delta.current_name, 'id') col_orig = self.mkcol(primary_key=True) self.verify(['name', 'type'], col_orig, name='id12', type=Text, alter_metadata=True) self.assertTrue(isinstance(col_orig.type, Text)) self.assertEqual(col_orig.name, 'id12') # test server default col_orig = self.mkcol(primary_key=True) delta = self.verify(['server_default'], col_orig, DefaultClause('foobar')) self.assertEqual(delta['server_default'].arg, 'foobar') delta = self.verify(['server_default'], col_orig, server_default=DefaultClause('foobar')) self.assertEqual(delta['server_default'].arg, 'foobar') # no change col_orig = self.mkcol(server_default=DefaultClause('foobar')) delta = self.verify(['type'], col_orig, DefaultClause('foobar'), type=PickleType) self.assertTrue(isinstance(delta.result_column.type, PickleType)) # TODO: test server on update # TODO: test bind metadata 
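The behaviour these TestColumnDelta cases exercise (comparing two column definitions and reporting only the attributes that differ) can be illustrated with a small standalone sketch. Plain dicts stand in for SQLAlchemy columns here, and `column_delta` is a hypothetical helper written for illustration, not migrate's actual ColumnDelta:

```python
def column_delta(original, modified):
    """Simplified stand-in for migrate's ColumnDelta: return a dict of
    the column attributes whose values differ between two definitions."""
    diffs = {}
    for key in set(original) | set(modified):
        if original.get(key) != modified.get(key):
            diffs[key] = modified.get(key)
    return diffs

col_orig = {'name': 'id', 'type': 'String', 'primary_key': True}
col_new = {'name': 'ids', 'type': 'Text', 'primary_key': True}

# Only the changed attributes are reported, mirroring verify() above
print(sorted(column_delta(col_orig, col_new)))  # ['name', 'type']
```

As in the tests, identical definitions yield an empty delta, so callers can treat a non-empty result as "an ALTER is needed".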
sqlalchemy-migrate-0.13.0/migrate/tests/changeset/__init__.py0000664000175000017500000000000013553670475024256 0ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/tests/changeset/test_constraint.py0000664000175000017500000002530113553670475025755 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- from sqlalchemy import * from sqlalchemy.util import * from sqlalchemy.exc import * from migrate.changeset.util import fk_column_names from migrate.exceptions import * from migrate.changeset import * from migrate.tests import fixture class CommonTestConstraint(fixture.DB): """helper functions to test constraints. we just create a fresh new table and make sure everything is as required. """ def _setup(self, url): super(CommonTestConstraint, self)._setup(url) self._create_table() def _teardown(self): if hasattr(self, 'table') and self.engine.has_table(self.table.name): self.table.drop() super(CommonTestConstraint, self)._teardown() def _create_table(self): self._connect(self.url) self.meta = MetaData(self.engine) self.tablename = 'mytable' self.table = Table(self.tablename, self.meta, Column(u'id', Integer, nullable=False), Column(u'fkey', Integer, nullable=False), mysql_engine='InnoDB') if self.engine.has_table(self.table.name): self.table.drop() self.table.create() # make sure we start at zero self.assertEqual(len(self.table.primary_key), 0) self.assertTrue(isinstance(self.table.primary_key, schema.PrimaryKeyConstraint), self.table.primary_key.__class__) class TestConstraint(CommonTestConstraint): level = fixture.DB.CONNECT def _define_pk(self, *cols): # Add a pk by creating a PK constraint if (self.engine.name in ('oracle', 'firebird')): # Can't drop Oracle PKs without an explicit name pk = PrimaryKeyConstraint(table=self.table, name='temp_pk_key', *cols) else: pk = PrimaryKeyConstraint(table=self.table, *cols) self.compare_columns_equal(pk.columns, cols) pk.create() self.refresh_table() if not self.url.startswith('sqlite'): 
self.compare_columns_equal(self.table.primary_key, cols, ['type', 'autoincrement']) # Drop the PK constraint #if (self.engine.name in ('oracle', 'firebird')): # # Apparently Oracle PK names aren't introspected # pk.name = self.table.primary_key.name pk.drop() self.refresh_table() self.assertEqual(len(self.table.primary_key), 0) self.assertTrue(isinstance(self.table.primary_key, schema.PrimaryKeyConstraint)) return pk @fixture.usedb() def test_define_fk(self): """FK constraints can be defined, created, and dropped""" # FK target must be unique pk = PrimaryKeyConstraint(self.table.c.id, table=self.table, name="pkid") pk.create() # Add a FK by creating a FK constraint if SQLA_07: self.assertEqual(list(self.table.c.fkey.foreign_keys), []) else: self.assertEqual(self.table.c.fkey.foreign_keys._list, []) fk = ForeignKeyConstraint([self.table.c.fkey], [self.table.c.id], name="fk_id_fkey", ondelete="CASCADE") if SQLA_07: self.assertTrue(list(self.table.c.fkey.foreign_keys) is not []) else: self.assertTrue(self.table.c.fkey.foreign_keys._list is not []) for key in fk_column_names(fk): self.assertEqual(key, self.table.c.fkey.name) self.assertEqual([e.column for e in fk.elements], [self.table.c.id]) self.assertEqual(list(fk.referenced), [self.table.c.id]) if self.url.startswith('mysql'): # MySQL FKs need an index index = Index('index_name', self.table.c.fkey) index.create() fk.create() # test for ondelete/onupdate if SQLA_07: fkey = list(self.table.c.fkey.foreign_keys)[0] else: fkey = self.table.c.fkey.foreign_keys._list[0] self.assertEqual(fkey.ondelete, "CASCADE") # TODO: test on real db if it was set self.refresh_table() if SQLA_07: self.assertTrue(list(self.table.c.fkey.foreign_keys) is not []) else: self.assertTrue(self.table.c.fkey.foreign_keys._list is not []) fk.drop() self.refresh_table() if SQLA_07: self.assertEqual(list(self.table.c.fkey.foreign_keys), []) else: self.assertEqual(self.table.c.fkey.foreign_keys._list, []) @fixture.usedb() def test_define_pk(self): 
"""PK constraints can be defined, created, and dropped""" self._define_pk(self.table.c.fkey) @fixture.usedb() def test_define_pk_multi(self): """Multicolumn PK constraints can be defined, created, and dropped""" self._define_pk(self.table.c.id, self.table.c.fkey) @fixture.usedb(not_supported=['firebird']) def test_drop_cascade(self): """Drop constraint cascaded""" pk = PrimaryKeyConstraint('fkey', table=self.table, name="id_pkey") pk.create() self.refresh_table() # Drop the PK constraint forcing cascade pk.drop(cascade=True) # TODO: add real assertion if it was added @fixture.usedb(supported=['mysql']) def test_fail_mysql_check_constraints(self): """Check constraints raise NotSupported for mysql on drop""" cons = CheckConstraint('id > 3', name="id_check", table=self.table) cons.create() self.refresh_table() try: cons.drop() except NotSupportedError: pass else: self.fail() @fixture.usedb(not_supported=['sqlite', 'mysql']) def test_named_check_constraints(self): """Check constraints can be defined, created, and dropped""" self.assertRaises(InvalidConstraintError, CheckConstraint, 'id > 3') cons = CheckConstraint('id > 3', name="id_check", table=self.table) cons.create() self.refresh_table() self.table.insert(values={'id': 4, 'fkey': 1}).execute() try: self.table.insert(values={'id': 1, 'fkey': 1}).execute() except (IntegrityError, ProgrammingError): pass else: self.fail() # Drop the constraint; inserts that violated it should now succeed cons.drop() self.refresh_table() self.table.insert(values={'id': 2, 'fkey': 2}).execute() self.table.insert(values={'id': 1, 'fkey': 2}).execute() class TestAutoname(CommonTestConstraint): """Every method tests a type of constraint: whether it can autoname itself and whether you can pass object instances and names to the classes. 
""" level = fixture.DB.CONNECT @fixture.usedb(not_supported=['oracle', 'firebird']) def test_autoname_pk(self): """PrimaryKeyConstraints can guess their name if None is given""" # Don't supply a name; it should create one cons = PrimaryKeyConstraint(self.table.c.id) cons.create() self.refresh_table() if not self.url.startswith('sqlite'): # TODO: test for index for sqlite self.compare_columns_equal(cons.columns, self.table.primary_key, ['autoincrement', 'type']) # Remove the name, drop the constraint; it should succeed cons.name = None cons.drop() self.refresh_table() self.assertEqual(list(), list(self.table.primary_key)) # test string names cons = PrimaryKeyConstraint('id', table=self.table) cons.create() self.refresh_table() if not self.url.startswith('sqlite'): # TODO: test for index for sqlite self.compare_columns_equal(cons.columns, self.table.primary_key) cons.name = None cons.drop() @fixture.usedb(not_supported=['oracle', 'sqlite', 'firebird']) def test_autoname_fk(self): """ForeignKeyConstraints can guess their name if None is given""" cons = PrimaryKeyConstraint(self.table.c.id) cons.create() cons = ForeignKeyConstraint([self.table.c.fkey], [self.table.c.id]) cons.create() self.refresh_table() if SQLA_07: list(self.table.c.fkey.foreign_keys)[0].column is self.table.c.id else: self.table.c.fkey.foreign_keys[0].column is self.table.c.id # Remove the name, drop the constraint; it should succeed cons.name = None cons.drop() self.refresh_table() if SQLA_07: self.assertEqual(list(self.table.c.fkey.foreign_keys), list()) else: self.assertEqual(self.table.c.fkey.foreign_keys._list, list()) # test string names cons = ForeignKeyConstraint(['fkey'], ['%s.id' % self.tablename], table=self.table) cons.create() self.refresh_table() if SQLA_07: list(self.table.c.fkey.foreign_keys)[0].column is self.table.c.id else: self.table.c.fkey.foreign_keys[0].column is self.table.c.id # Remove the name, drop the constraint; it should succeed cons.name = None cons.drop() 
@fixture.usedb(not_supported=['oracle', 'sqlite', 'mysql']) def test_autoname_check(self): """CheckConstraints can guess their name if None is given""" cons = CheckConstraint('id > 3', columns=[self.table.c.id]) cons.create() self.refresh_table() if not self.engine.name == 'mysql': self.table.insert(values={'id': 4, 'fkey': 1}).execute() try: self.table.insert(values={'id': 1, 'fkey': 2}).execute() except (IntegrityError, ProgrammingError): pass else: self.fail() # Remove the name, drop the constraint; it should succeed cons.name = None cons.drop() self.refresh_table() self.table.insert(values={'id': 2, 'fkey': 2}).execute() self.table.insert(values={'id': 1, 'fkey': 3}).execute() @fixture.usedb(not_supported=['oracle']) def test_autoname_unique(self): """UniqueConstraints can guess their name if None is given""" cons = UniqueConstraint(self.table.c.fkey) cons.create() self.refresh_table() self.table.insert(values={'fkey': 4, 'id': 1}).execute() try: self.table.insert(values={'fkey': 4, 'id': 2}).execute() except (sqlalchemy.exc.IntegrityError, sqlalchemy.exc.ProgrammingError): pass else: self.fail() # Remove the name, drop the constraint; it should succeed cons.name = None cons.drop() self.refresh_table() self.table.insert(values={'fkey': 4, 'id': 2}).execute() self.table.insert(values={'fkey': 4, 'id': 1}).execute() sqlalchemy-migrate-0.13.0/migrate/tests/fixture/0000775000175000017500000000000013553670602021674 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/migrate/tests/fixture/base.py0000664000175000017500000000117713553670475023176 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- import re import testtools class Base(testtools.TestCase): def assertEqualIgnoreWhitespace(self, v1, v2): """Compares two strings that should be\ identical except for whitespace """ def strip_whitespace(s): return re.sub(r'\s', '', s) line1 = strip_whitespace(v1) line2 = strip_whitespace(v2) self.assertEqual(line1, line2, "%s != %s" % (v1, v2)) def 
ignoreErrors(self, func, *p,**k): """Call a function, ignoring any exceptions""" try: func(*p,**k) except: pass sqlalchemy-migrate-0.13.0/migrate/tests/fixture/shell.py0000664000175000017500000000163613553670475023373 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- import os import sys import logging from scripttest import TestFileEnvironment from migrate.tests.fixture.pathed import * log = logging.getLogger(__name__) class Shell(Pathed): """Base class for command line tests""" def setUp(self): super(Shell, self).setUp() migrate_path = os.path.dirname(sys.executable) # PATH to migrate development script folder log.debug('PATH for ScriptTest: %s', migrate_path) self.env = TestFileEnvironment( base_path=os.path.join(self.temp_usable_dir, 'env'), ) def run_version(self, repos_path): result = self.env.run('migrate version %s' % repos_path) return int(result.stdout.strip()) def run_db_version(self, url, repos_path): result = self.env.run('migrate db_version %s %s' % (url, repos_path)) return int(result.stdout.strip()) sqlalchemy-migrate-0.13.0/migrate/tests/fixture/models.py0000664000175000017500000000056313553670475023545 0ustar zuulzuul00000000000000from sqlalchemy import * # test rundiffs in shell meta_old_rundiffs = MetaData() meta_rundiffs = MetaData() meta = MetaData() tmp_account_rundiffs = Table('tmp_account_rundiffs', meta_rundiffs, Column('id', Integer, primary_key=True), Column('login', Text()), Column('passwd', Text()), ) tmp_sql_table = Table('tmp_sql_table', meta, Column('id', Integer)) sqlalchemy-migrate-0.13.0/migrate/tests/fixture/pathed.py0000664000175000017500000000414213553670475023524 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- import os import sys import shutil import tempfile from migrate.tests.fixture import base class Pathed(base.Base): # Temporary files _tmpdir = tempfile.mkdtemp() def setUp(self): super(Pathed, self).setUp() self.temp_usable_dir = tempfile.mkdtemp() 
sys.path.append(self.temp_usable_dir) def tearDown(self): super(Pathed, self).tearDown() try: sys.path.remove(self.temp_usable_dir) except: pass # w00t? Pathed.purge(self.temp_usable_dir) @classmethod def _tmp(cls, prefix='', suffix=''): """Generate a temporary file name that doesn't exist All filenames are generated inside a temporary directory created by tempfile.mkdtemp(); only the creating user has access to this directory. It should be secure to return a nonexistent temp filename in this directory, unless the user is messing with their own files. """ file, ret = tempfile.mkstemp(suffix,prefix,cls._tmpdir) os.close(file) os.remove(ret) return ret @classmethod def tmp(cls, *p, **k): return cls._tmp(*p, **k) @classmethod def tmp_py(cls, *p, **k): return cls._tmp(suffix='.py', *p, **k) @classmethod def tmp_sql(cls, *p, **k): return cls._tmp(suffix='.sql', *p, **k) @classmethod def tmp_named(cls, name): return os.path.join(cls._tmpdir, name) @classmethod def tmp_repos(cls, *p, **k): return cls._tmp(*p, **k) @classmethod def purge(cls, path): """Removes this path if it exists, in preparation for tests Careful - all tests should take place in /tmp. We don't want to accidentally wipe stuff out... 
""" if os.path.exists(path): if os.path.isdir(path): shutil.rmtree(path) else: os.remove(path) if path.endswith('.py'): pyc = path + 'c' if os.path.exists(pyc): os.remove(pyc) sqlalchemy-migrate-0.13.0/migrate/tests/fixture/__init__.py0000664000175000017500000000061113553670475024013 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- import testtools def main(imports=None): if imports: global suite suite = suite(imports) defaultTest='fixture.suite' else: defaultTest=None return testtools.TestProgram(defaultTest=defaultTest) from .base import Base from .pathed import Pathed from .shell import Shell from .database import DB,usedb sqlalchemy-migrate-0.13.0/migrate/tests/fixture/warnings.py0000664000175000017500000000612513553670475024112 0ustar zuulzuul00000000000000# lifted from Python 2.6, so we can use it in Python 2.5 import sys class WarningMessage(object): """Holds the result of a single showwarning() call.""" _WARNING_DETAILS = ("message", "category", "filename", "lineno", "file", "line") def __init__(self, message, category, filename, lineno, file=None, line=None): local_values = locals() for attr in self._WARNING_DETAILS: setattr(self, attr, local_values[attr]) if category: self._category_name = category.__name__ else: self._category_name = None def __str__(self): return ("{message : %r, category : %r, filename : %r, lineno : %s, " "line : %r}" % (self.message, self._category_name, self.filename, self.lineno, self.line)) class catch_warnings(object): """A context manager that copies and restores the warnings filter upon exiting the context. The 'record' argument specifies whether warnings should be captured by a custom implementation of warnings.showwarning() and be appended to a list returned by the context manager. Otherwise None is returned by the context manager. The objects appended to the list are arguments whose attributes mirror the arguments to showwarning(). 
The 'module' argument is to specify an alternative module to the module named 'warnings' and imported under that name. This argument is only useful when testing the warnings module itself. """ def __init__(self, record=False, module=None): """Specify whether to record warnings and if an alternative module should be used other than sys.modules['warnings']. For compatibility with Python 3.0, please consider all arguments to be keyword-only. """ self._record = record if module is None: self._module = sys.modules['warnings'] else: self._module = module self._entered = False def __repr__(self): args = [] if self._record: args.append("record=True") if self._module is not sys.modules['warnings']: args.append("module=%r" % self._module) name = type(self).__name__ return "%s(%s)" % (name, ", ".join(args)) def __enter__(self): if self._entered: raise RuntimeError("Cannot enter %r twice" % self) self._entered = True self._filters = self._module.filters self._module.filters = self._filters[:] self._showwarning = self._module.showwarning if self._record: log = [] def showwarning(*args, **kwargs): log.append(WarningMessage(*args, **kwargs)) self._module.showwarning = showwarning return log else: return None def __exit__(self, *exc_info): if not self._entered: raise RuntimeError("Cannot exit %r without entering first" % self) self._module.filters = self._filters self._module.showwarning = self._showwarning sqlalchemy-migrate-0.13.0/migrate/tests/fixture/database.py0000664000175000017500000001456713553670475024037 0ustar zuulzuul00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- import os import logging import sys import six from decorator import decorator from sqlalchemy import create_engine, Table, MetaData from sqlalchemy import exc as sa_exc from sqlalchemy.orm import create_session from sqlalchemy.pool import StaticPool from migrate.changeset.schema import ColumnDelta from migrate.versioning.util import Memoize from migrate.tests.fixture.base import Base from 
migrate.tests.fixture.pathed import Pathed log = logging.getLogger(__name__) @Memoize def readurls(): """read URLs from config file return a list""" # TODO: remove tmpfile since sqlite can store db in memory filename = 'test_db.cfg' if six.PY2 else "test_db_py3.cfg" ret = list() tmpfile = Pathed.tmp() fullpath = os.path.join(os.curdir, filename) try: fd = open(fullpath) except IOError: raise IOError("""You must specify the databases to use for testing! Copy %(filename)s.tmpl to %(filename)s and edit your database URLs.""" % locals()) for line in fd: if line.startswith('#'): continue line = line.replace('__tmp__', tmpfile).strip() ret.append(line) fd.close() return ret def is_supported(url, supported, not_supported): db = url.split(':', 1)[0] if supported is not None: if isinstance(supported, six.string_types): return supported == db else: return db in supported elif not_supported is not None: if isinstance(not_supported, six.string_types): return not_supported != db else: return not (db in not_supported) return True def usedb(supported=None, not_supported=None): """Decorates tests to be run with a database connection These tests are run once for each available database @param supported: run tests for ONLY these databases @param not_supported: run tests for all databases EXCEPT these If both supported and not_supported are empty, all dbs are assumed to be supported """ if supported is not None and not_supported is not None: raise AssertionError("Can't specify both supported and not_supported in fixture.db()") urls = readurls() my_urls = [url for url in urls if is_supported(url, supported, not_supported)] @decorator def dec(f, self, *a, **kw): failed_for = [] fail = False for url in my_urls: try: log.debug("Running test with engine %s", url) try: self._setup(url) except sa_exc.OperationalError: log.info('Backend %s is not available, skip it', url) continue except Exception as e: raise RuntimeError('Exception during _setup(): %r' % e) try: f(self, *a, **kw) finally: 
try: self._teardown() except Exception as e: raise RuntimeError('Exception during _teardown(): %r' % e) except Exception: failed_for.append(url) fail = sys.exc_info() for url in failed_for: log.error('Failed for %s', url) if fail: # cause the failure :-) six.reraise(*fail) return dec class DB(Base): # Constants: connection level NONE = 0 # No connection; just set self.url CONNECT = 1 # Connect; no transaction TXN = 2 # Everything in a transaction level = TXN def _engineInfo(self, url=None): if url is None: url = self.url return url def _setup(self, url): self._connect(url) # make sure there are no tables lying around meta = MetaData(self.engine) meta.reflect() meta.drop_all() def _teardown(self): self._disconnect() def _connect(self, url): self.url = url # TODO: seems like 0.5.x branch does not work with engine.dispose and staticpool #self.engine = create_engine(url, echo=True, poolclass=StaticPool) self.engine = create_engine(url, echo=True) # silence the logger added by SA, nose adds its own! 
logging.getLogger('sqlalchemy').handlers=[] self.meta = MetaData(bind=self.engine) if self.level < self.CONNECT: return #self.session = create_session(bind=self.engine) if self.level < self.TXN: return #self.txn = self.session.begin() def _disconnect(self): if hasattr(self, 'txn'): self.txn.rollback() if hasattr(self, 'session'): self.session.close() #if hasattr(self,'conn'): # self.conn.close() self.engine.dispose() def _supported(self, url): db = url.split(':',1)[0] func = getattr(self, self._TestCase__testMethodName) if hasattr(func, 'supported'): return db in func.supported if hasattr(func, 'not_supported'): return not (db in func.not_supported) # Neither list assigned; assume all are supported return True def _not_supported(self, url): return not self._supported(url) def _select_row(self): """Select rows, used in multiple tests""" return self.table.select().execution_options( autocommit=True).execute().fetchone() def refresh_table(self, name=None): """Reload the table from the database Assumes we're working with only a single table, self.table, and metadata self.meta Working w/ multiple tables is not possible, as tables can only be reloaded with meta.clear() """ if name is None: name = self.table.name self.meta.clear() self.table = Table(name, self.meta, autoload=True) def compare_columns_equal(self, columns1, columns2, ignore=None): """Loop through all columns and compare them""" def key(column): return column.name for c1, c2 in zip(sorted(columns1, key=key), sorted(columns2, key=key)): diffs = ColumnDelta(c1, c2).diffs if ignore: for key in ignore: diffs.pop(key, None) if diffs: self.fail("Comparing %s to %s failed: %s" % (columns1, columns2, diffs)) # TODO: document engine.dispose and write tests sqlalchemy-migrate-0.13.0/README.rst0000664000175000017500000000305113553670475017112 0ustar zuulzuul00000000000000SQLAlchemy Migrate ================== Fork from http://code.google.com/p/sqlalchemy-migrate/ to get it working with SQLAlchemy 0.8. 
Inspired by Ruby on Rails' migrations, Migrate provides a way to deal with database schema changes in `SQLAlchemy `_ projects. Migrate extends SQLAlchemy to have database changeset handling. It provides a database change repository mechanism which can be used from the command line as well as from inside python code. Help ---- Sphinx documentation is available at the project page `readthedocs.org `_. Users and developers can be found at #openstack-dev on Freenode IRC network and at the public users mailing list `migrate-users `_. New releases and major changes are announced at the public announce mailing list `openstack-dev `_ and at the Python package index `sqlalchemy-migrate `_. Homepage is located at `stackforge `_ You can also clone a current `development version `_ Tests and Bugs -------------- To run automated tests: * install tox: ``pip install -U tox`` * run tox: ``tox`` * to test only a specific Python version: ``tox -e py27`` (Python 2.7) Please report any issues with sqlalchemy-migrate to the issue tracker at `Launchpad issues `_ sqlalchemy-migrate-0.13.0/tools/0000775000175000017500000000000013553670602016554 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/tools/pretty_tox.sh0000775000175000017500000000072713553670475021352 0ustar zuulzuul00000000000000#!/usr/bin/env bash # return nonzero exit status of rightmost command, so that we # get nonzero exit on test failure without halting subunit-trace set -o pipefail TESTRARGS=$1 python setup.py testr --testr-args="--subunit $TESTRARGS --concurrency=1" | subunit-trace -f retval=$? 
# NOTE(mtreinish) The pipe above would eat the slowest display from pbr's testr # wrapper so just manually print the slowest tests echo -e "\nSlowest Tests:\n" testr slowest exit $retval sqlalchemy-migrate-0.13.0/tools/test-setup.sh0000775000175000017500000000350413553670475021242 0ustar zuulzuul00000000000000#!/bin/bash -xe # This script will be run by OpenStack CI before unit tests are run, # it sets up the test system as needed. # Developers should setup their test systems in a similar way. # This setup needs to be run as a user that can run sudo. # The root password for the MySQL database; pass it in via # MYSQL_ROOT_PW. DB_ROOT_PW=${MYSQL_ROOT_PW:-insecure_slave} # This user and its password are used by the tests, if you change it, # your tests might fail. DB_USER=openstack_citest DB_PW=openstack_citest sudo -H mysqladmin -u root password $DB_ROOT_PW # It's best practice to remove anonymous users from the database. If # an anonymous user exists, then it matches first for connections and # other connections from that host will not work. sudo -H mysql -u root -p$DB_ROOT_PW -h localhost -e " DELETE FROM mysql.user WHERE User=''; FLUSH PRIVILEGES; GRANT ALL PRIVILEGES ON *.* TO '$DB_USER'@'%' identified by '$DB_PW' WITH GRANT OPTION;" # Now create our database. 
mysql -u $DB_USER -p$DB_PW -h 127.0.0.1 -e " SET default_storage_engine=MYISAM; DROP DATABASE IF EXISTS openstack_citest; CREATE DATABASE openstack_citest CHARACTER SET utf8;" # Same for PostgreSQL # Setup user root_roles=$(sudo -H -u postgres psql -t -c " SELECT 'HERE' from pg_roles where rolname='$DB_USER'") if [[ ${root_roles} == *HERE ]];then sudo -H -u postgres psql -c "ALTER ROLE $DB_USER WITH SUPERUSER LOGIN PASSWORD '$DB_PW'" else sudo -H -u postgres psql -c "CREATE ROLE $DB_USER WITH SUPERUSER LOGIN PASSWORD '$DB_PW'" fi # Store password for tests cat << EOF > $HOME/.pgpass *:*:*:$DB_USER:$DB_PW EOF chmod 0600 $HOME/.pgpass # Now create our database psql -h 127.0.0.1 -U $DB_USER -d template1 -c "DROP DATABASE IF EXISTS openstack_citest" createdb -h 127.0.0.1 -U $DB_USER -l C -T template0 -E utf8 openstack_citest sqlalchemy-migrate-0.13.0/test-requirements.txt0000664000175000017500000000116513553670475021670 0ustar zuulzuul00000000000000# Install bounded pep8/pyflakes first, then let flake8 install pep8==1.5.7 pyflakes==0.8.1 flake8>=2.2.4,<=2.4.1 hacking>=0.10.0,<0.11 coverage>=3.6 discover feedparser fixtures>=0.3.14 mock>=1.2 mox>=0.5.3 mysqlclient psycopg2 python-subunit>=0.0.18 sphinx>=1.1.2,<1.2 sphinxcontrib_issuetracker testrepository>=0.0.17 testtools>=0.9.34,<0.9.36 tempest-lib>=0.1.0 # db2 support ibm_db_sa>=0.3.0;python_version<'3.0' ibm-db-sa-py3;python_version>='3.0' scripttest # NOTE(rpodolyaka): This version identifier is currently necessary as # pytz otherwise does not install on pip 1.4 or higher pylint pytz>=2010h sqlalchemy-migrate-0.13.0/doc/0000775000175000017500000000000013553670602016161 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/doc/requirements.txt0000664000175000017500000000003513553670475021453 0ustar zuulzuul00000000000000sphinx>=1.6.2,!=1.6.6 # BSD sqlalchemy-migrate-0.13.0/doc/source/0000775000175000017500000000000013553670602017461 5ustar 
zuulzuul00000000000000sqlalchemy-migrate-0.13.0/doc/source/historical/0000775000175000017500000000000013553670602021622 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/doc/source/historical/ProjectDesignDecisionsVersioning.trac0000664000175000017500000000757613553670475031171 0ustar zuulzuul00000000000000An important aspect of this project is database versioning. For migration scripts to be most useful, we need to know what version the database is: that is, has a particular migration script already been run? An option not discussed below is "no versioning"; that is, simply apply any script we're given, and rely on the user to ensure it's valid. This is entirely too error-prone to seriously consider, and takes a lot of the usefulness out of the proposed tool. === Database-wide version numbers === A single integer version number would specify the version of each database. This is stored in the database in a table, let's call it "schema"; each migration script is associated with a certain database version number. + Simple implementation[[br]] Of the 3 solutions presented here, this one is by far the simplest. + Past success[[br]] Used in [http://www.rubyonrails.org/ Ruby on Rails' migrations]. ~ Can detect corrupt schemas, but requires some extra work and a *complete* set of migrations.[[br]] If we have a set of database migration scripts that build the database from the ground up, we can apply them in sequence to a 'dummy' database, dump a diff of the real and dummy schemas, and expect a valid schema to match the dummy schema. - Requires changes to the database schema.[[br]] Not a tremendous change - a single table with a single column and a single row - but a change nonetheless. === Table/object-specific version numbers === Each database "object" - usually tables, though we might also deal with other database objects, such as stored procedures or Postgres' sequences - would have a version associated with it, initially 1. 
These versions are stored in a table, let's call it "schema". This table has two columns: the name of the database object and its current version number. + Allows us to write migration scripts for a subset of the database.[[br]] If we have multiple people working on a very large database, we may want to write migration scripts for a section of the database without stepping on another person's work. This allows unrelated work to proceed independently. - Requires changes to the database schema. Similar to the database-wide version number; the contents of the new table are more complex, but still shouldn't conflict with anything. - More difficult to implement than a database-wide version number. - Determining the version of database-specific objects (ie. stored procedures, functions) is difficult. - Ultimately gains nothing over the previous solution.[[br]] The intent here was to allow multiple people to write scripts for a single database, but if database-wide version numbers aren't assigned until the script is placed in the repository, we could already do this. === Version determined via introspection === Each script has a schema associated with it, rather than a version number. The database schema is loaded, analyzed, and compared to the schema expected by the script. + No modifications to the database are necessary for this versioning system.[[br]] The primary advantage here is that no changes to the database are required. - Most difficult solution to implement, by far.[[br]] Comparing the state of every schema object in the database is much more complex than simply comparing a version number, especially since we need to do it in a database-independent way (ie. we can't just diff the dump of each schema). SQLAlchemy's reflection would certainly be very helpful, but this remains the most complex solution. + "Automatically" detects corrupt schemas.[[br]] A corrupt schema won't match any migration script. 
- Difficult to deal with corrupt schemas.[[br]] When version numbers are stored in the database, you have some idea of where an error occurred. Without this, we have no idea what version the database was in before corruption.

- Potential ambiguity: what if two database migration scripts expect the same schema?

----
'''Conclusion''': database-wide version numbers are the best way to go.

----
File: sqlalchemy-migrate-0.13.0/doc/source/historical/RepositoryFormat2.trac
----

My original plan for Migrate's RepositoryFormat had several problems:

 * Bind parameters: We needed to bind parameters into statements to get something suitable for an .sql file. For some types of parameters, there's no clean way to do this without writing an entire parser - too great a cost for this project. There's a reason why SQLAlchemy's logs display the statement and its parameters separately: the binding is done at a lower level than we have access to.
 * Failure: Discussed in #17, the old format had no easy way to find the Python statements associated with an SQL error. This makes it difficult to debug scripts.

A new format will be used to solve this problem instead.

Similar to our previous solution, where one .sql file was created per version/operation/DBMS (version_1.upgrade.postgres.sql, for example), one file will be created per version/operation/DBMS here. These files will contain the following information:

 * The dialect used to perform the logging. Particularly:
   * The paramstyle expected by the dbapi
   * The DBMS this log applies to
 * Information on each logged SQL statement, each of which contains:
   * The text of the statement
   * Parameters to be bound to the statement
   * A Python stack trace at the point the statement was logged - this allows us to tell what Python statements are associated with an SQL statement when there's an error

These files will be created by pickling a Python object with the above information.
Such files may be executed by loading the log and having SQLAlchemy execute them as it might have before.

Good:
 * Since the statements and bind parameters are stored separately and executed as SQLAlchemy would normally execute them, one problem discussed above is eliminated.
 * Storing the stack trace at the point each statement was logged allows us to identify what Python statements are responsible for an SQL error. This makes it much easier for users to debug their scripts.

Bad:
 * It's less trivial to commit .sql scripts to our repository, since they're no longer used internally. This isn't a huge loss, and .sql commits can still be implemented later if need be.
 * There's some danger of script behavior changing if changes are made to the dbapi the script is associated with. The primary place where problems would occur is during parameter binding, but the chance of this changing significantly isn't large. The danger of changes in behavior due to changes in the user's application is not affected.

----
File: sqlalchemy-migrate-0.13.0/doc/source/historical/ProjectDesignDecisionsAutomation.trac
----

There are many migrations that don't require a lot of thought - for example, if we add a column to a table definition, we probably want to have an "ALTER TABLE...ADD COLUMN" statement show up in our migration. The difficulty lies in the automation of changes where the requirements aren't obvious. What happens when you add a unique constraint to a column whose data is not already unique? What happens when we split an existing table in two?

Completely automating database migrations is not possible. That said - we shouldn't have to hunt down and handwrite the ALTER TABLE statements for every new column; this is often just tedious. Many other common migration tasks require little serious thought; such tasks are ripe for automation.
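The "easy" case - a newly added column becoming an ALTER TABLE...ADD COLUMN statement - can be sketched in a few lines. This is a hypothetical illustration only; everything harder (type changes, renames, constraints) is exactly where the difficulty discussed here lies:

```python
def diff_columns(table, old_columns, new_columns):
    """Emit ADD COLUMN statements for columns present only in the new schema.

    old_columns/new_columns map column name -> SQL type string.
    A real tool would also have to handle dropped columns, type changes,
    renames, and constraints - the genuinely hard part of automation.
    """
    statements = []
    for name, sql_type in new_columns.items():
        if name not in old_columns:
            statements.append(
                "ALTER TABLE %s ADD COLUMN %s %s" % (table, name, sql_type))
    return statements
```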
Any automation attempted, however, should not interfere with our ability to write scripts by hand if we so choose; our tool should ''not'' be centered around automation.

Automatically generating the code for this sort of task seems like a good solution:
 * It does not obstruct us from writing changes by hand; if we don't like the autogenerated code, delete it or don't generate it to begin with
 * We can easily add other migration tasks to the autogenerated code
 * We can see right away if the code is what we're expecting, or if it's wrong
 * If the generated code is wrong, it is easily modified; we can use parts of the generated code, rather than being required to use either 100% or 0%
 * Maintenance, usually a problem with auto-generated code, is not an issue: old database migration scripts are not the subject of maintenance; the correct solution is usually a new migration script.

Implementation is a problem: finding the 'diff' of two databases to determine what columns to add is not trivial. Fortunately, there exist tools that claim to do this for us: [http://sqlfairy.sourceforge.net/ SQL::Translator] and [http://xml2ddl.berlios.de/ XML to DDL] both claim to have this capability.

... All that said, this is ''not'' something I'm going to attempt during the Summer of Code.

 * I'd have to rely tremendously on a tool I'm not at all familiar with
 * Creates a risk of the project itself relying too much on the automation, a Bad Thing
 * The project has a deadline and I have plenty else to do already
 * Lots of people with more experience than me say this would take more time than it's worth

It's something that might be considered for future work if this project is successful, though.

----
File: sqlalchemy-migrate-0.13.0/doc/source/historical/ProjectDesignDecisionsScriptFormat.trac
----

Important to our system is the API used for making database changes.

=== Raw SQL; .sql script ===
Require users to write raw SQL.
Migration scripts are .sql scripts (with database version information in a header comment).

+ Familiar interface for experienced DBAs.

+ No new API to learn[[br]] SQL is used elsewhere; many people know SQL already. Those who are still learning SQL will gain expertise not in the API of a specific tool, but in a language which will help them elsewhere. (On the other hand, those who are familiar with Python but have no desire to learn SQL might find a Python API more intuitive.)

- Difficult to extend when necessary[[br]] .sql scripts mean that we can't write new functions specific to our migration system when necessary. (We can't always assume that the DBMS supports functions/procedures.)

- Lose the power of Python[[br]] Some things are possible in Python that aren't in SQL - for example, suppose we want to use some functions from our application in a migration script. (The user might also simply prefer Python.)

- Loss of database independence.[[br]] There isn't much we can do to specify different actions for a particular DBMS besides copying the .sql file, which is obviously bad form.

=== Raw SQL; Python script ===
Require users to write raw SQL. Migration scripts are Python scripts whose API does little beyond specifying what DBMS(es) a particular statement should apply to. For example:
{{{
run("CREATE TABLE test[...]")  # runs for all databases
run("ALTER TABLE test ADD COLUMN varchar2[...]", oracle)  # runs for Oracle only
run("ALTER TABLE test ADD COLUMN varchar[...]", postgres|mysql)  # runs for Postgres or MySQL only
}}}

We could also allow parts of a single statement to apply to a specific DBMS:
{{{
run("ALTER TABLE test ADD COLUMN " + sql("varchar", postgres|mysql) + sql("varchar2", oracle))
}}}
or, the same thing:
{{{
run("ALTER TABLE test ADD COLUMN " + sql("varchar", postgres|mysql, "varchar2", oracle))
}}}

+ Allows the user to write migration scripts for multiple DBMSes.

- The user must manage the conflicts between different databases themselves.
[[br]] The user can write scripts to deal with conflicts between databases, but they're not really database-independent: the user has to deal with conflicts between databases; our system doesn't help them.

+ Minimal new API to learn.[[br]] There is a new API to learn, but it is extremely small, depending mostly on SQL DDL. This has the advantages of "no new API" in our first solution.

- More verbose than .sql scripts.

=== Raw SQL; automatic translation between each dialect ===
Same as the above suggestion, but allow the user to specify a 'default' dialect of SQL that we'll interpret and whose quirks we'll deal with. That is, write everything in SQL and try to automatically resolve the conflicts of different DBMSes.

For example, take the following script:
{{{
engine = postgres

run("""
CREATE TABLE test (
    id serial
)
""")
}}}

Running this on a Postgres database, surprisingly enough, would generate exactly what we typed:
{{{
CREATE TABLE test (
    id serial
)
}}}

Running it on a MySQL database, however, would generate something like:
{{{
CREATE TABLE test (
    id integer auto_increment
)
}}}

+ Database-independence issues of the above SQL solutions are resolved.[[br]] Ideally, this solution would be as database-independent as a Python API for database changes (discussed next), but with all the advantages of writing SQL (no new API).

- Difficult implementation[[br]] Obviously, this is not easy to implement - there is a great deal of parsing logic and a great many things that need to be accounted for. In addition, this is a complex operation; any implementation will likely have errors somewhere. It seems tools for this already exist; an effective tool would trivialize this implementation. I experimented a bit with [http://sqlfairy.sourceforge.net/ SQL::Translator] and [http://xml2ddl.berlios.de/ XML to DDL]; however, I had difficulties with both.

- Database-specific features ensure that this cannot possibly be "complete".
[[br]] For example, Postgres has an 'interval' type to represent times and (AFAIK) MySQL does not.

=== Database-independent Python API ===
Create a Python API through which we may manage database changes. Scripts would be based on the existing SQLAlchemy API when possible.

Scripts would look something like:
{{{
# Create a table
test_table = table('test',
    Column('id', Integer, notNull=True)
)
test_table.create()

# Add a column to an existing table
test_table.add_column('id', Integer, notNull=True)

# Or, use a column object instead of its parameters
test_table.add_column(Column('id', Integer, notNull=True))

# Or, don't use a table object at all
add_column('test', 'id', Integer, notNull=True)
}}}

This would use engines, similar to SQLAlchemy's, to deal with database-independence issues.

We would, of course, allow users to write raw SQL if they wish. This would be done in the manner outlined in the second solution above; this allows us to write our entire script in SQL and ignore the Python API if we wish, or write parts of our solution in SQL to deal with specific databases.

+ Deals with database-independence thoroughly and with minimal user effort.[[br]] SQLAlchemy-style engines would be used for this; issues of different DBMS syntax are resolved with minimal user effort. (Database-specific features would still need handwritten SQL.)

+ Familiar interface for SQLAlchemy users.[[br]] In addition, we can often cut-and-paste column definitions from SQLAlchemy tables, easing one particular task.

- Requires that the user learn a new API.[[br]] SQL already exists; people know it. SQL newbies might be more comfortable with a Python interface, but folks who already know SQL must learn a whole new API. (On the other hand, the user *can* write things in SQL if they wish, learning only the most minimal of APIs, if they are willing to resolve issues of database-independence themselves.)

- More difficult to implement than pure SQL solutions.[[br]] SQL already exists/has been tested.
A new Python API does not/has not, and much of the work seems to consist of little more than reinventing the wheel.

- Script behavior might change under different versions of the project.[[br]] ...where .sql scripts behave the same regardless of the project's version.

=== Generate .sql scripts from a Python API ===
Attempts to take the best of the first and last solutions. An API similar to the previous solution would be used, but rather than immediately being applied to the database, .sql scripts are generated for each type of database we're interested in. These .sql scripts are what's actually applied to the database.

This would essentially allow users to skip the Python script step entirely if they wished, and write migration scripts in SQL instead, as in solution 1.

+ Database-independence is an option, when needed.

+ A familiar interface/an interface that can interact with other tools is an option, when needed.

+ Easy to inspect the SQL generated by a script, to ensure it's what we're expecting.

+ Migration scripts won't change behavior across different versions of the project.[[br]] Once a Python script is translated to a .sql script, its behavior is consistent across different versions of the project, unlike a pure Python solution.

- Multiple ways to do a single task: not Pythonic.[[br]] I never really liked that word - "Pythonic" - but it does apply here. Multiple ways to do a single task has the potential to cause confusion, especially in a large project if many people do the same task different ways. We have to support both ways of doing things, as well.

----
'''Conclusion''': The last solution, generating .sql scripts from a Python API, seems to be best.

The first solution (.sql scripts) suffers from a lack of database-independence, but is familiar to experienced database developers, useful with other tools, and shows exactly what will be done to the database.
The Python API solution has no trouble with database-independence, but suffers from other problems that the .sql solution doesn't. The last solution resolves both reasonably well. Multiple ways to do a single task might be called "not Pythonic", but IMO, the trade-off is worth this cost.

Automatic translation between different dialects of SQL might have potential for use in a solution, but existing tools for this aren't reliable enough, as far as I can tell.

----
File: sqlalchemy-migrate-0.13.0/doc/source/historical/ProjectGoals.trac
----

== Goals ==

=== DBMS-independent schema changes ===
Many projects need to run on more than one DBMS. Similar changes need to be applied to both types of databases upon a schema change. The usual solution to database changes - .sql scripts with ALTER statements - runs into problems since different DBMSes have different dialects of SQL; we end up having to create a different script for each DBMS. This project will simplify this by providing an API, similar to the table definition API that already exists in SQLAlchemy, to alter a table independent of the DBMS being used, where possible.

This project will support all DBMSes currently supported by SQLAlchemy: SQLite, Postgres, MySQL, Oracle, and MS SQL. Adding support for more should be as easy as it is in SQLAlchemy.

Many are already used to writing .sql scripts for database changes, aren't interested in learning a new API, and have projects where DBMS-independence isn't an issue. Writing SQL statements as part of a (Python) change script must be an option, of course. Writing change scripts as .sql scripts, eliminating Python scripts from the picture entirely, would be nice too, although this is a lower-priority goal.
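To illustrate why a per-DBMS layer is needed at all, here is a deliberately naive sketch of dialect dispatch for one operation. The template table is a hypothetical illustration; the actual project builds on SQLAlchemy's own dialect machinery rather than anything like this:

```python
# Per-dialect SQL templates for adding a column. Oracle's older ALTER
# syntax omits the COLUMN keyword, which is exactly the kind of quirk
# a DBMS-independent API has to hide from the user.
ADD_COLUMN_TEMPLATES = {
    'default': "ALTER TABLE %(table)s ADD COLUMN %(column)s %(type)s",
    'oracle':  "ALTER TABLE %(table)s ADD %(column)s %(type)s",
}

def add_column_sql(dialect, table, column, sql_type):
    """Render an ADD COLUMN statement for the given dialect."""
    template = ADD_COLUMN_TEMPLATES.get(dialect, ADD_COLUMN_TEMPLATES['default'])
    return template % {'table': table, 'column': column, 'type': sql_type}
```

The user writes one call; the dispatch layer picks the right SQL for the engine in use.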
=== Database versioning and change script organization ===
Once we've accumulated a set of change scripts, it's important to know which ones have been applied/need to be applied to a particular database: suppose we need to upgrade a database that's extremely out-of-date; figuring out the scripts to run by hand is tedious.

Applying changes in the wrong order, or applying changes when they shouldn't be applied, is bad; attempting to manage all of this by hand inevitably leads to an accident. This project will be able to detect the version of a particular database and apply the scripts required to bring it up to the latest version, or up to any specified version number (given all change scripts required to reach that version number).

Sometimes we need to be able to revert a schema to an older version. There's no automatic way to do this without rebuilding the database from scratch, so our project will allow one to write scripts to downgrade the database as well as upgrade it. If such scripts have been written, we should be able to apply them in the correct order, just like upgrading.

Large projects inevitably accumulate a large number of database change scripts; it's important that we have a place to keep them. Once a script has been written, this project will deal with organizing it among existing change scripts, and the user will never have to look at it again.

=== Change testing ===
It's important to test one's database changes before applying them to a production database (unless you happen to like disasters). Much testing is up to the user and can't be automated, but there are a few places we can help ensure at least a minimal level of schema integrity. A few examples are below; we could add more later.
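The integrity checks in this section ultimately reduce to comparing two schema dumps. A minimal, hypothetical sketch of such a comparison, ignoring whitespace and statement order:

```python
def schemas_match(dump_a, dump_b):
    """Compare two schema dumps (lists of DDL statements).

    Normalizes whitespace and sorts statements so that cosmetic
    differences don't cause a false mismatch. A real check would also
    need database-independent normalization of types and constraints.
    """
    def normalize(dump):
        return sorted(" ".join(stmt.split()) for stmt in dump if stmt.strip())
    return normalize(dump_a) == normalize(dump_b)
```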
Given an obsolete schema, a database change script, and an up-to-date schema known to be correct, this project will be able to ensure that applying the change script to the obsolete schema will result in an up-to-date schema - all without actually changing the obsolete database. Folks who have SQLAlchemy create their database using table.create() might find this useful; this is also useful for ensuring database downgrade scripts are correct.

Given a schema of a known version and a complete set of change scripts up to that version, this project will be able to detect if the schema matches its version. If a schema has gone through changes not present in migration scripts, this test will fail; if applying all scripts in sequence up to the specified version creates an identical schema, this test will succeed. Identifying that a schema is corrupt is sufficient; it would be nice if we could give a clue as to what's wrong, but this is lower priority. (Implementation: we'll probably show a diff of two schema dumps; this should be enough to tell the user what's gone wrong.)

== Non-Goals ==
ie. things we will '''not''' try to do (at least, during the Summer of Code)

=== Automatic generation of schema changes ===
For example, one might define a table:
{{{
CREATE TABLE person (
    id integer,
    name varchar(80)
);
}}}

Later, we might add additional columns to the definition:
{{{
CREATE TABLE person (
    id integer,
    name varchar(80),
    profile text
);
}}}

It might be nice if a tool could look at both table definitions and spit out a change script; something like:
{{{
ALTER TABLE person ADD COLUMN profile text;
}}}

This is a difficult problem for a number of reasons. I have no intention of tackling this problem as part of the Summer of Code. This project aims to give you a better way to write that ALTER statement and make sure it's applied correctly, not to write it for you.
(Using an [http://sqlfairy.sourceforge.net/ existing] [http://xml2ddl.berlios.de/ tool] to add this sort of thing later might be worth looking into, but it will not be done during the Summer of Code. Among other reasons, methinks it's best to start with a system that isn't dependent on this sort of automation.)

----
File: sqlalchemy-migrate-0.13.0/doc/source/historical/RepositoryFormat.trac
----

This plan has several problems and has been modified; the new plan is discussed in wiki:RepositoryFormat2

----
One problem with [http://www.rubyonrails.org/ Ruby on Rails'] (very good) schema migration system is the behavior of scripts that depend on outside sources; ie. the application. If those change, there's no guarantee that such scripts will behave as they did before, and you'll get strange results.

For example, suppose one defines a SQLAlchemy table:
{{{
users = Table('users', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', String(16), nullable=False),
    Column('password', String(20), nullable=False)
)
}}}

and creates it in a change script:
{{{
from project import table

def upgrade():
    table.users.create()
}}}

Suppose we later add a column to this table. We write an appropriate change script:
{{{
from project import table

def upgrade():
    # This syntax isn't set in stone yet
    table.users.add_column('email_address', String(60), key='email')
}}}

...and change our application's table definition:
{{{
users = Table('users', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', String(16), nullable=False),
    Column('password', String(20), nullable=False),
    Column('email_address', String(60), key='email')  # new column
)
}}}

Modifying the table definition changes how our first script behaves - it will create the table with the new column.
This might work if we only apply change scripts to a few databases which are always kept up to date (or very close), but we'll run into errors eventually if our migration scripts' behavior isn't consistent.

----
One solution is to generate .sql files from a Python change script at the time it's added to a repository. The SQL generated by the script for each database is set in stone at this point; changes to outside files won't affect it.

This limits what change scripts are capable of - we can't write dynamic SQL; ie., we can't do something like this:
{{{
for row in db.execute("select id from table1"):
    db.execute("insert into table2 (table1_id, value) values (:id, 42)", **row)
}}}

But SQL is usually powerful enough to where the above is rarely necessary in a migration script:
{{{
db.execute("insert into table2 select id, 42 from table1")
}}}

This is a reasonable solution. The limitations aren't serious (everything possible in a traditional .sql script is still possible), and change scripts are much less prone to error.

----
File: sqlalchemy-migrate-0.13.0/doc/source/historical/ProjectDetailedDesign.trac
----

This is very much a draft/brainstorm right now. It should be made prettier and thought about in more detail later, but it at least gives some idea of the direction we're headed right now.

----
 * Two distinct tools; should not be coupled (can work independently):
   * Versioning tool
   * Command line tool; let's call it "samigrate"
     * Organizes old migration scripts into repositories
     * Runs groups of migration scripts on a database, updating it to a specified version/latest version
     * Helps run various tests
   * usage
     * "samigrate create PATH": Create project migration-script repository
       * We shouldn't have to enter the path for every other command. Use a hidden file
       * (This means we can't move the repository after it's created.
Oh well)
     * "samigrate add SCRIPT [VERSION]": Add script to this project's repository; latest version
       * If a .sql script: how to determine engine, operation (up/down)? Options:
         * specify at the command line: "samigrate add SCRIPT UP_OR_DOWN ENGINE"
         * naming convention: SCRIPT is named something like NAME.postgres.up.sql
     * "samigrate upgrade CONNECTION_STRING [VERSION] [SCRIPT...]": connect to the specified database and upgrade (or downgrade) it to the specified version (default latest)
       * If SCRIPT... specified: act like these scripts are in the repository (useful for testing?)
     * "samigrate dump CONNECTION_STRING [VERSION] [SCRIPT...]": like upgrade, but sends all SQL to stdout instead of the db
     * (Later: some more commands, to be used for script testing tools)
 * Alchemy API extensions for altering schema
   * Operations here are DB-independent
   * Each database modification is a script that may use this API
     * Can handwrite SQL for all databases or a single database
   * upgrade()/downgrade() functions: need only one file for both operations
     * sql scripts require either (2 files, *.up.sql; *.down.sql) or (don't use downgrade)
   * usage
     * "python NAME.py ENGINE up": upgrade sql > stdout
     * "python NAME.py ENGINE down": downgrade sql > stdout

----
File: sqlalchemy-migrate-0.13.0/doc/source/historical/ProjectProposal.txt
----

Evan Rosson

Project
---
SQLAlchemy Schema Migration

Synopsis
---
SQLAlchemy is an excellent object-relational database mapper for Python projects. Currently, it does a fine job of creating a database from scratch, but provides no tool to assist the user in modifying an existing database. This project aims to provide such a tool.

Benefits
---
Application requirements change; a database schema must be able to change with them.
It's possible to write SQL scripts that make the proper modifications without any special tools, but this setup quickly becomes difficult to manage - when we need to apply multiple updates to a database, organize old migration scripts, or have a single application support more than one DBMS, a tool to support database changes becomes necessary. This tool will aid in organizing migration scripts, applying multiple updates or removing updates to revert to an old version, and creating DBMS-independent migration scripts.

Writing one's schema migration scripts by hand often results in problems when dealing with multiple obsolete database instances - we must figure out what scripts are necessary to bring the database up-to-date. Database versioning tools are helpful for this task; this project will track the version of a particular database to determine what scripts are necessary to update an old schema.

Description
---
The migration system used by Ruby on Rails has had much success, and for good reason - the system is easy to understand, generally database-independent, as powerful as the application itself, and capable of dealing nicely with a schema with multiple instances of different versions. A migration system similar to that of Rails is a fine place to begin this project.

Each instance of the schema will have a version associated with it; this version is tracked using a single table with a single row and a single integer column. A set of changes to the database schema will increment the schema's version number; each migration script will be associated with a schema version.
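The single-table, single-row versioning scheme described above can be sketched as follows. This is a hedged illustration using SQLite; the table and column names here are assumptions (the released sqlalchemy-migrate project stores its version in a 'migrate_version' table):

```python
import sqlite3

def get_version(conn):
    # Create the version table on first use; a missing row means version 0.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version integer)")
    row = conn.execute("SELECT version FROM schema_version").fetchone()
    if row is None:
        conn.execute("INSERT INTO schema_version VALUES (0)")
        return 0
    return row[0]

def set_version(conn, version):
    # A migration script would bump this after its changes succeed.
    get_version(conn)  # ensure the table and row exist
    conn.execute("UPDATE schema_version SET version = ?", (version,))
```

With this in place, upgrading a database from version n-2 to n is just a matter of running the scripts for n-1 and n in order, bumping the stored version after each.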
A migration script will be written by the user, and consist of two functions:
- upgrade(): brings an old database up-to-date, from version n-1 to version n
- downgrade(): reverts an up-to-date database to the previous schema; an 'undo' for upgrade()

When applying multiple updates to an old schema instance, migration scripts are applied in sequence: when updating a schema to version n from version n-2, two migration scripts are run; n-2 => n-1 => n.

A command-line tool will create empty migration scripts (empty upgrade()/downgrade() functions), display the SQL that will be generated by a migration script for a particular DBMS, and apply migration scripts to a specified database.

This project will implement the command-line tool that manages the above functionality. This project will also extend SQLAlchemy with the functions necessary to construct DBMS-independent migration scripts: in particular, column creation/deletion/alteration and the ability to rename existing tables/indexes/columns will be implemented. We'll also need a way to write raw SQL for a specific DBMS/set of DBMSes for situations where our abstraction doesn't fit a script's requirements. The creation/deletion of existing tables and indexes are operations already provided by SQLAlchemy.

On DBMS support - I intend to support MySQL, Postgres, SQLite, Oracle, and MS-SQL by the end of the project. (Update: I previously omitted support for Oracle and MS-SQL because I don't have access to the full version of each; I wasn't aware Oracle Lite and MS-SQL Express were available for free.) The system will be abstracted in such a way that adding support for other databases will not be any more difficult than adding support for them in SQLAlchemy.

Schedule
---
This project will be my primary activity this summer. Unfortunately, I am in school when things begin, until June 9, but I can still begin the project during that period. I have no other commitments this summer - I can easily make up any lost time.
I will be spending my spare time this summer further developing my online game (discussed below), but this has no deadline and will not interfere with the project proposed here.

I'll begin by familiarizing myself with the internals of SQLAlchemy and creating a detailed plan for the project. This plan will be reviewed by the current SQLAlchemy developers and other potential users, and will be modified based on their feedback. This will be completed no later than May 30, one week after SoC begins.

Development will follow, in this order:
- The database versioning system. This will manage the creation and application of (initially empty) migration scripts. Complete by June 16.
  - Access the database; read/update the schema's version number
  - Apply a single (empty) script to the database
  - Apply a set of (empty) scripts to upgrade/downgrade the database to a specified version; examine all migration scripts and apply all to update the database to the latest version available
- An API for table/column alterations, to make the above system useful. Complete by August 11.
  - Implement an empty API - does nothing at this point, but written in such a way that syntax for each supported DBMS may be added as a module. Completed June 26-30, the mid-project review deadline.
  - Implement/test the above API for a single DBMS (probably Postgres, as I'm familiar with it). Users should be able to test the 'complete' application with this DBMS.
  - Implement the database modification API for other supported databases

All development will have unit tests written where appropriate. Unit testing the SQL generated for each DBMS will be particularly important.

The project will finish with various wrap-up activities, documentation, and some final tests, to be completed by the project deadline.

About me
---
I am a 3rd year BS Computer Science student; Cal Poly, San Luis Obispo, California, USA; currently applying for a Master's degree in CS from the same school.
I've taken several classes dealing with databases, though much of what I know on the subject is self-taught. Outside of class, I've developed a browser-based online game, Zeal, at http://zealgame.com ; it has been running for well over a year and gone through many changes. It has taught me firsthand the importance of using appropriate tools and designing one's application well early on (largely through the pain that follows when you don't); I've learned a great many other things from the experience as well.

One recurring problem I've had with this project is dealing with changes to the database schema. I've thought much about how I'd like to see this solved, but hadn't done much to implement it.

I'm now working on another project that will be making use of SQLAlchemy: it fits many of my project's requirements, but lacks a migration tool that will be much needed. This presents an opportunity for me to make my first contribution to open source - I've long been interested in open source software and use it regularly, but haven't contributed to any until now.

I'm particularly interested in the application of this tool with the TurboGears framework, as this project was inspired by a suggestion on the TurboGears mailing list and I'm working on a project using TurboGears - but there is no reason to couple an SQLAlchemy enhancement with TurboGears; this project may be used by anyone who uses SQLAlchemy.

Further information: http://evan.zealgame.com/soc

----
File: sqlalchemy-migrate-0.13.0/doc/source/conf.py
----

# -*- coding: utf-8 -*-
#
# SQLAlchemy Migrate documentation build configuration file, created by
# sphinx-quickstart on Fri Feb 13 12:58:57 2009.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# The contents of this file are pickled, so don't put values in the namespace
# that aren't pickleable (module imports are okay, they're removed automatically).
# # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys, os # If your extensions are in another directory, add it here. If the directory # is relative to the documentation root, use os.path.abspath to make it # absolute, like shown here. #sys.path.append(os.path.abspath('.')) # Allow module docs to build without having sqlalchemy-migrate installed: sys.path.append(os.path.dirname(os.path.abspath('.'))) # General configuration # --------------------- # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx'] # link to sqlalchemy docs intersphinx_mapping = { 'sqlalchemy': ('http://www.sqlalchemy.org/docs/', None), 'python': ('http://docs.python.org/2.7', None)} # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. project = u'SQLAlchemy Migrate' copyright = u'2011, Evan Rosson, Jan Dittberner, Domen Kožar, Chris Withers' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = '0.7.3' # The full version, including alpha/beta/rc tags. release = '0.7.3.dev' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. 
#today_fmt = '%B %d, %Y' # List of documents that shouldn't be included in the build. #unused_docs = [] # List of directories, relative to source directory, that shouldn't be searched # for source files. exclude_trees = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # Options for sphinxcontrib.issuetracker # -------------------------------------- issuetracker = 'google code' issuetracker_project = 'sqlalchemy-migrate' # Options for HTML output # ----------------------- # The style sheet to use for HTML and HTML Help pages. A file of that name # must exist either in Sphinx' static/ path, or in one of the custom paths # given in html_static_path. html_style = 'default.css' # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". 
# html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_use_modindex = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, the reST sources are included in the HTML build as _sources/. #html_copy_source = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'SQLAlchemyMigratedoc' # Options for LaTeX output # ------------------------ # The paper size ('letter' or 'a4'). #latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). #latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, document class [howto/manual]). latex_documents = [ ('index', 'SQLAlchemyMigrate.tex', ur'SQLAlchemy Migrate Documentation', ur'Evan Rosson, Jan Dittberner, Domen Kožar', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. 
#latex_use_parts = False # Additional stuff for the LaTeX preamble. #latex_preamble = '' # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_use_modindex = True sqlalchemy-migrate-0.13.0/doc/source/api.rst0000664000175000017500000001462713553670475021006 0ustar zuulzuul00000000000000Module :mod:`migrate.changeset` -- Schema changes ================================================= Module :mod:`migrate.changeset` -- Schema migration API ------------------------------------------------------- .. automodule:: migrate.changeset :members: :synopsis: Database changeset management Module :mod:`ansisql ` -- Standard SQL implementation ------------------------------------------------------------------------------------ .. automodule:: migrate.changeset.ansisql :members: :member-order: groupwise :synopsis: Standard SQL implementation for altering database schemas Module :mod:`constraint ` -- Constraint schema migration API --------------------------------------------------------------------------------------------- .. automodule:: migrate.changeset.constraint :members: :inherited-members: :show-inheritance: :member-order: groupwise :synopsis: Standalone schema constraint objects Module :mod:`databases ` -- Database specific schema migration ----------------------------------------------------------------------------------------------- .. automodule:: migrate.changeset.databases :members: :synopsis: Database specific changeset implementations .. _mysql-d: Module :mod:`mysql ` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: migrate.changeset.databases.mysql :members: :synopsis: MySQL database specific changeset implementations .. _firebird-d: Module :mod:`firebird ` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: migrate.changeset.databases.firebird :members: :synopsis: Firebird database specific changeset implementations .. 
_oracle-d: Module :mod:`oracle ` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: migrate.changeset.databases.oracle :members: :synopsis: Oracle database specific changeset implementations .. _postgres-d: Module :mod:`postgres ` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: migrate.changeset.databases.postgres :members: :synopsis: PostgreSQL database specific changeset implementations .. _sqlite-d: Module :mod:`sqlite ` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: migrate.changeset.databases.sqlite :members: :synopsis: SQLite database specific changeset implementations Module :mod:`visitor ` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: migrate.changeset.databases.visitor :members: Module :mod:`schema ` -- Additional API to SQLAlchemy for migrations ---------------------------------------------------------------------------------------------- .. automodule:: migrate.changeset.schema :members: :synopsis: Schema changeset handling functions Module :mod:`migrate.versioning` -- Database versioning and repository management ================================================================================== .. automodule:: migrate.versioning :members: :synopsis: Database version and repository management .. _versioning-api: Module :mod:`api ` -- Python API commands ----------------------------------------------------------------- .. automodule:: migrate.versioning.api :members: :synopsis: External API for :mod:`migrate.versioning` Module :mod:`genmodel ` -- ORM Model generator ------------------------------------------------------------------------------------- .. automodule:: migrate.versioning.genmodel :members: :synopsis: Python database model generator and differencer Module :mod:`pathed ` -- Path utilities ---------------------------------------------------------------------------- .. 
automodule:: migrate.versioning.pathed :members: :synopsis: File/Directory handling class Module :mod:`repository ` -- Repository management ------------------------------------------------------------------------------------- .. automodule:: migrate.versioning.repository :members: :synopsis: SQLAlchemy migrate repository management :member-order: groupwise Module :mod:`schema ` -- Migration upgrade/downgrade ---------------------------------------------------------------------------------- .. automodule:: migrate.versioning.schema :members: :member-order: groupwise :synopsis: Database schema management Module :mod:`schemadiff ` -- ORM Model differencing ------------------------------------------------------------------------------------- .. automodule:: migrate.versioning.schemadiff :members: :synopsis: Database schema and model differencing Module :mod:`script ` -- Script actions -------------------------------------------------------------------- .. automodule:: migrate.versioning.script.base :synopsis: Script utilities :member-order: groupwise :members: .. automodule:: migrate.versioning.script.py :members: :member-order: groupwise :inherited-members: :show-inheritance: .. automodule:: migrate.versioning.script.sql :members: :member-order: groupwise :show-inheritance: :inherited-members: Module :mod:`shell ` -- CLI interface ------------------------------------------------------------------ .. automodule:: migrate.versioning.shell :members: :synopsis: Shell commands Module :mod:`util ` -- Various utility functions -------------------------------------------------------------------------- .. automodule:: migrate.versioning.util :members: :synopsis: Utility functions Module :mod:`version ` -- Versioning management ----------------------------------------------------------------------------- .. 
automodule:: migrate.versioning.version
   :members:
   :member-order: groupwise
   :synopsis: Version management

Module :mod:`exceptions ` -- Exception definitions
======================================================================

.. automodule:: migrate.exceptions
   :members:
   :synopsis: Migrate exception classes

sqlalchemy-migrate-0.13.0/doc/source/glossary.rst:

.. _glossary:

********
Glossary
********

.. glossary::
   :sorted:

   repository
      A migration repository contains :command:`manage.py`, a configuration file (:file:`migrate.cfg`) and the database :term:`changeset` scripts, which can be Python scripts or SQL files.

   changeset
      A set of instructions describing how upgrades and downgrades to or from a specific version of a database schema should be performed.

   ORM
      Abbreviation for "object relational mapper". An ORM is a tool that maps object hierarchies to database relations.

   version
      A version in SQLAlchemy migrate is defined by a :term:`changeset`. Versions may be numbered using ascending numbers or using timestamps (as of SQLAlchemy migrate release 0.7.2).

sqlalchemy-migrate-0.13.0/doc/source/faq.rst:

FAQ
===

Q: Adding a **nullable=False** column
*************************************

A: Your table probably already contains data. That means that if you add a column, its contents will be NULL. Adding a NOT NULL restriction will therefore trigger an IntegrityError at the database level. You have basically two options:

#. Add the column with a default value and then, after it is created, remove the default value property. This does not work for column types that do not allow default values at all (such as 'text' and 'blob' on MySQL).

#. Add the column without NOT NULL so all rows get a NULL value, UPDATE the column to set a value for all rows, then add the NOT NULL property to the column. This works for all column types.
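The first option can be illustrated with Python's built-in sqlite3 module. The `account` table and the `status` column below are hypothetical; in a real change script you would express the same steps through migrate's changeset API rather than raw SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, login VARCHAR(40))")
conn.execute("INSERT INTO account (login) VALUES ('alice')")

# Option 1: add the NOT NULL column with a default value, so the
# existing row is filled in instead of violating the constraint.
conn.execute(
    "ALTER TABLE account ADD COLUMN status VARCHAR(16) NOT NULL DEFAULT 'active'"
)

row = conn.execute("SELECT login, status FROM account").fetchone()
print(row)  # ('alice', 'active')
```

SQLite requires the non-NULL default here for exactly the reason the FAQ gives: without it, the pre-existing row would have no legal value for the new column.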
sqlalchemy-migrate-0.13.0/doc/source/download.rst:

Download
--------

You can get the latest version of SQLAlchemy Migrate from the `cheese shop`_, pip_ or via easy_install_::

 $ easy_install sqlalchemy-migrate

or::

 $ pip install sqlalchemy-migrate

You should now be able to use the :command:`migrate` command from the command line::

 $ migrate

This should list all available commands. To get more information regarding a command use::

 $ migrate help COMMAND

If you'd like to be notified when new versions of SQLAlchemy Migrate are released, subscribe to `openstack-dev`_.

.. _pip: http://pip.openplans.org/
.. _easy_install: http://peak.telecommunity.com/DevCenter/EasyInstall#installing-easy-install
.. _sqlalchemy: http://www.sqlalchemy.org/download.html
.. _`cheese shop`: http://pypi.python.org/pypi/sqlalchemy-migrate
.. _`openstack-dev`: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

.. _development:

Development
-----------

If you would like to contribute to the development of OpenStack, you must follow the steps in this page: http://docs.openstack.org/infra/manual/developers.html

Once those steps have been completed, changes to OpenStack should be submitted for review via the Gerrit tool, following the workflow documented at: http://docs.openstack.org/infra/manual/developers.html#development-workflow

Pull requests submitted through GitHub will be ignored.

Bugs should be filed on Launchpad, not GitHub: https://bugs.launchpad.net/sqlalchemy-migrate

sqlalchemy-migrate-0.13.0/doc/source/versioning.rst:

.. _versioning-system:

.. currentmodule:: migrate.versioning

..
highlight:: console

***********************************
Database schema versioning workflow
***********************************

SQLAlchemy migrate provides the :mod:`migrate.versioning` API that is also available as the :ref:`migrate ` command. The purpose of this package is to be a frontend for migrations. It provides commands to manage a migrate :term:`repository` and database selection, as well as script versioning.

Project setup
=============

.. _create_change_repository:

Create a change repository
--------------------------

To begin, we'll need to create a :term:`repository` for our project. All work with repositories is done using the :ref:`migrate ` command. Let's create our project's repository::

 $ migrate create my_repository "Example project"

This creates an initially empty :term:`repository` named `Example project` at :file:`my_repository/`, relative to the current directory. The :term:`repository` directory contains a subdirectory :file:`versions` that will store the :ref:`schema versions `, a configuration file :file:`migrate.cfg` that contains the :ref:`repository configuration `, and a script :ref:`manage.py ` that has the same functionality as the :ref:`migrate ` command but is preconfigured with repository-specific parameters.

.. note::

 Repositories are associated with a single database schema, and store collections of change scripts to manage that schema. The scripts in a :term:`repository` may be applied to any number of databases. Each :term:`repository` has a unique name. This name is used to identify the :term:`repository` we're working with.

Version control a database
--------------------------

Next we need to declare the database to be under version control. Information on a database's version is stored in the database itself; declaring a database to be under version control creates a table named **migrate_version** and associates it with your :term:`repository`.

The database is specified as a `SQLAlchemy database url`_.

..
_`sqlalchemy database url`: http://www.sqlalchemy.org/docs/core/engines.html#database-urls

The :option:`version_control` command associates a specified database with a :term:`repository`::

 $ python my_repository/manage.py version_control sqlite:///project.db my_repository

We can have any number of databases under this :term:`repository's ` version control. Each schema has a :term:`version` that SQLAlchemy Migrate manages. Each change script applied to the database increments this version number. You can retrieve a database's current :term:`version`::

 $ python my_repository/manage.py db_version sqlite:///project.db my_repository
 0

A freshly versioned database begins at version 0 by default. This assumes the database is empty or only contains schema elements (tables, views, constraints, indices, ...) that will not be affected by the changes in the :term:`repository`. (If this is a bad assumption, you can specify the :term:`version` at the time the database is put under version control, with the :option:`version_control` command.) We'll see that creating and applying change scripts changes the database's :term:`version` number.

Similarly, we can also see the latest :term:`version` available in a :term:`repository` with the command::

 $ python my_repository/manage.py version my_repository
 0

We've entered no changes so far, so our :term:`repository` cannot upgrade a database past version 0.

Project management script
-------------------------

.. _project_management_script:

Many commands need to know our project's database url and :term:`repository` path - typing them each time is tedious. We can create a script for our project that remembers the database and :term:`repository` we're using, and use it to perform commands::

 $ migrate manage manage.py --repository=my_repository --url=sqlite:///project.db
 $ python manage.py db_version
 0

The script :file:`manage.py` was created.
All commands we perform with it are the same as those performed with the :ref:`migrate ` tool, using the :term:`repository` and database connection entered above. The difference between the script :file:`manage.py` in the current directory and the script inside the repository is that the one in the current directory has the database URL preconfigured.

.. note::

 Parameters specified in manage.py should be the same as in the :ref:`versioning api `. Preconfigured parameters should simply be omitted from the :ref:`migrate ` command.

Making schema changes
=====================

All changes to a database schema under version control should be done via change scripts - you should avoid schema modifications (creating tables, etc.) outside of change scripts. This allows you to determine what the schema looks like based on the version number alone, and helps ensure the multiple databases you're working with are consistent.

Create a change script
----------------------

Our first change script will create a simple table

.. code-block:: python

 account = Table(
     'account', meta,
     Column('id', Integer, primary_key=True),
     Column('login', String(40)),
     Column('passwd', String(40)),
 )

This table should be created in a change script. Let's create one::

 $ python manage.py script "Add account table"

This creates an empty change script at :file:`my_repository/versions/001_Add_account_table.py`. Next, we'll edit this script to create our table.

Edit the change script
----------------------

Our change script predefines two functions, currently empty: :py:func:`upgrade` and :py:func:`downgrade`. We'll fill those in:

.. code-block:: python

 from sqlalchemy import Table, Column, Integer, String, MetaData

 meta = MetaData()

 account = Table(
     'account', meta,
     Column('id', Integer, primary_key=True),
     Column('login', String(40)),
     Column('passwd', String(40)),
 )


 def upgrade(migrate_engine):
     meta.bind = migrate_engine
     account.create()


 def downgrade(migrate_engine):
     meta.bind = migrate_engine
     account.drop()

..
note:: The generated script contains * imports from sqlalchemy and migrate. You should tailor the imports to fit your actual demand. As you might have guessed, :py:func:`upgrade` upgrades the database to the next version. This function should contain the :ref:`schema changes ` we want to perform (in our example we're creating a table). :py:func:`downgrade` should reverse changes made by :py:func:`upgrade`. You'll need to write both functions for every change script. (Well, you don't *have* to write downgrade, but you won't be able to revert to an older version of the database or test your scripts without it.) If you really don't want to support downgrades it is a good idea to raise a :py:class:`NotImplementedError` or some equivalent custom exception. If you let :py:func:`downgrade` pass silently you might observe undesired behaviour for subsequent downgrade operations if downgrading multiple :term:`versions `. .. note:: As you can see, **migrate_engine** is passed to both functions. You should use this in your change scripts, rather than creating your own engine. .. warning:: You should be very careful about importing files from the rest of your application, as your change scripts might break when your application changes. Read more about `writing scripts with consistent behavior`_. Test the change script ------------------------ Change scripts should be tested before they are committed. Testing a script will run its :func:`upgrade` and :func:`downgrade` functions on a specified database; you can ensure the script runs without error. You should be testing on a test database - if something goes wrong here, you'll need to correct it by hand. If the test is successful, the database should appear unchanged after :func:`upgrade` and :func:`downgrade` run. To test the script:: $ python manage.py test Upgrading... done Downgrading... done Success Our script runs on our database (:file:`sqlite:///project.db`, as specified in :file:`manage.py`) without any errors. 
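As noted above, a change script that does not support downgrading should raise rather than pass silently. A minimal sketch of such a script (the upgrade body is elided and purely illustrative):

```python
def upgrade(migrate_engine):
    # apply the irreversible schema change here,
    # e.g. dropping a column whose data cannot be restored
    pass


def downgrade(migrate_engine):
    # Fail loudly instead of passing silently: a multi-version downgrade
    # stops at this script instead of skipping it unnoticed.
    raise NotImplementedError("downgrade is not supported for this change")
```

With a silent `pass` instead of the raise, downgrading several versions at once would report success while leaving the schema in a state the earlier scripts do not expect.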
Our :term:`repository's ` :term:`version` is:: $ python manage.py version 1 .. note:: Due to #41 the database must be exactly one :term:`version` behind the :term:`repository` :term:`version`. .. _production testing warning: .. warning:: The :option:`test` command executes actual scripts, be sure you are *NOT* doing this on production database. If you need to test production changes you should: #. get a dump of your production database #. import the dump into an empty database #. run :option:`test` or :option:`upgrade` on that copy Upgrade the database -------------------- Now, we can apply this change script to our database:: $ python manage.py upgrade 0 -> 1... done This upgrades the database (:file:`sqlite:///project.db`, as specified when we created :file:`manage.py` above) to the latest available :term:`version`. (We could also specify a version number if we wished, using the :option:`--version` option.) We can see the database's :term:`version` number has changed, and our table has been created:: $ python manage.py db_version 1 $ sqlite3 project.db sqlite> .tables account migrate_version sqlite> .schema account CREATE TABLE account ( id INTEGER NOT NULL, login VARCHAR(40), passwd VARCHAR(40), PRIMARY KEY (id) ); Our account table was created - success! Modifying existing tables ------------------------- After we have initialized the database schema we now want to add another Column to the `account` table that we already have in our schema. First start a new :term:`changeset` by the commands learned above:: $ python manage.py script "Add email column" This creates a new :term:`changeset` template. Edit the resulting script :file:`my_repository/versions/002_Add_email_column.py`: .. 
code-block:: python from sqlalchemy import Table, MetaData, String, Column def upgrade(migrate_engine): meta = MetaData(bind=migrate_engine) account = Table('account', meta, autoload=True) emailc = Column('email', String(128)) emailc.create(account) def downgrade(migrate_engine): meta = MetaData(bind=migrate_engine) account = Table('account', meta, autoload=True) account.c.email.drop() As we can see in this example we can (and should) use SQLAlchemy's schema reflection (autoload) mechanism to reference existing schema objects. We could have defined the table objects as they are expected before upgrade or downgrade as well but this would have been more work and is not as convenient. We can now apply the changeset to :file:`sqlite:///project.db`:: $ python manage.py upgrade 1 -> 2... done and get the following expected result:: $ sqlite3 project.db sqlite> .schema account CREATE TABLE account ( id INTEGER NOT NULL, login VARCHAR(40), passwd VARCHAR(40), email VARCHAR(128), PRIMARY KEY (id) ); Writing change scripts ====================== As our application evolves, we can create more change scripts using a similar process. By default, change scripts may do anything any other SQLAlchemy program can do. SQLAlchemy Migrate extends SQLAlchemy with several operations used to change existing schemas - ie. ``ALTER TABLE`` stuff. See :ref:`changeset ` documentation for details. Writing scripts with consistent behavior ---------------------------------------- Normally, it's important to write change scripts in a way that's independent of your application - the same SQL should be generated every time, despite any changes to your app's source code. You don't want your change scripts' behavior changing when your source code does. .. warning:: **Consider the following example of what NOT to do** Let's say your application defines a table in the :file:`model.py` file: .. 
code-block:: python

 from sqlalchemy import *

 meta = MetaData()
 table = Table('mytable', meta,
     Column('id', Integer, primary_key=True),
 )

... and uses this file to create a table in a change script:

.. code-block:: python

 from sqlalchemy import *
 from migrate import *
 import model

 def upgrade(migrate_engine):
     model.meta.bind = migrate_engine
     model.table.create()

 def downgrade(migrate_engine):
     model.meta.bind = migrate_engine
     model.table.drop()

This runs successfully the first time. But what happens if we change the table definition in :file:`model.py`?

.. code-block:: python

 from sqlalchemy import *

 meta = MetaData()
 table = Table('mytable', meta,
     Column('id', Integer, primary_key=True),
     Column('data', String(42)),
 )

We'll create a new column with a matching change script

.. code-block:: python

 from sqlalchemy import *
 from migrate import *
 import model

 def upgrade(migrate_engine):
     model.meta.bind = migrate_engine
     model.table.c.data.create()

 def downgrade(migrate_engine):
     model.meta.bind = migrate_engine
     model.table.c.data.drop()

This appears to run fine when upgrading an existing database - but the first script's behavior changed! Running all our change scripts on a new database will result in an error - the first script creates the table based on the new definition, with both columns; the second cannot add the column because it already exists.

To avoid the above problem, you should use SQLAlchemy schema reflection as shown above or copy-paste your table definition into each change script rather than importing parts of your application.

.. note::

 Sometimes it is enough to just reflect tables with SQLAlchemy instead of copy-pasting - but remember, explicit is better than implicit!

Writing for a specific database
-------------------------------

Sometimes you need to write code for a specific database. Migrate scripts can run under any database, however - the engine you're given might belong to any database. Use engine.name to get the name of the database you're working with

..
code-block:: python

 >>> from sqlalchemy import *
 >>> from migrate import *
 >>>
 >>> engine = create_engine('sqlite:///:memory:')
 >>> engine.name
 'sqlite'

Writing .sql scripts
--------------------

You might prefer to write your change scripts in SQL, as .sql files, rather than as Python scripts. SQLAlchemy-migrate can work with that::

 $ python manage.py version
 1
 $ python manage.py script_sql postgresql

This creates two scripts :file:`my_repository/versions/002_postgresql_upgrade.sql` and :file:`my_repository/versions/002_postgresql_downgrade.sql`, one for each *operation*, or function defined in a Python change script - upgrade and downgrade. Both are specified to run with PostgreSQL databases - we can add more for different databases if we like. Any database defined by SQLAlchemy may be used here - e.g. sqlite, postgresql, oracle, mysql...

.. _command-line-usage:

Command line usage
==================

.. currentmodule:: migrate.versioning.shell

The :command:`migrate` command is used for the API interface. For a list of commands and help, use::

 $ migrate --help

The :command:`migrate` command executes the :func:`main` function. For ease of use, generate your own :ref:`project management script `, which calls the :func:`main ` function with keyword arguments. You may want to specify the `url` and `repository` arguments, which almost all API functions require. If an api command looks like::

 $ migrate downgrade URL REPOSITORY VERSION [--preview_sql|--preview_py]

and you have a project management script that looks like

.. code-block:: python

 from migrate.versioning.shell import main

 main(url='sqlite://', repository='./project/migrations/')

you have the first two slots filled, and command line usage would look like::

 # preview Python script
 $ migrate downgrade 2 --preview_py

 # downgrade to version 2
 $ migrate downgrade 2

.. versionchanged:: 0.5.4
 Command line parsing refactored: positional parameters usage

 The whole command line parsing was rewritten from scratch using OptionParser.
Options passed as kwargs to :func:`~migrate.versioning.shell.main` are now parsed correctly. Options are passed to commands in the following priority (starting from highest): - optional (given by :option:`--some_option` in commandline) - positional arguments - kwargs passed to :func:`migrate.versioning.shell.main` Python API ========== .. currentmodule:: migrate.versioning.api All commands available from the command line are also available for your Python scripts by importing :mod:`migrate.versioning.api`. See the :mod:`migrate.versioning.api` documentation for a list of functions; function names match equivalent shell commands. You can use this to help integrate SQLAlchemy Migrate with your existing update process. For example, the following commands are similar: *From the command line*:: $ migrate help help /usr/bin/migrate help COMMAND Displays help on a given command. *From Python* .. code-block:: python import migrate.versioning.api migrate.versioning.api.help('help') # Output: # %prog help COMMAND # # Displays help on a given command. .. _migrate.versioning.api: module-migrate.versioning.api.html .. _repository_configuration: Experimental commands ===================== Some interesting new features to create SQLAlchemy db models from existing databases and vice versa were developed by Christian Simms during the development of SQLAlchemy-migrate 0.4.5. These features are roughly documented in a `thread in migrate-users`_. .. _`thread in migrate-users`: http://groups.google.com/group/migrate-users/browse_thread/thread/a5605184e08abf33#msg_85c803b71b29993f Here are the commands' descriptions as given by ``migrate help ``: - ``compare_model_to_db``: Compare the current model (assumed to be a module level variable of type sqlalchemy.MetaData) against the current database. - ``create_model``: Dump the current database as a Python model to stdout. 
- ``make_update_script_for_model``: Create a script changing the old Python model to the new (current) Python model, sending it to stdout. As this section's headline says: these features are *EXPERIMENTAL*. Take the necessary arguments to the commands from the output of ``migrate help ``. Repository configuration ======================== SQLAlchemy-migrate :term:`repositories ` can be configured in their :file:`migrate.cfg` files. The initial configuration is performed by the `migrate create` call explained in :ref:`Create a change repository `. The following options are currently available: - :option:`repository_id` Used to identify which repository this database is versioned under. You can use the name of your project. - :option:`version_table` The name of the database table used to track the schema version. This name shouldn't already be used by your project. If this is changed once a database is under version control, you'll need to change the table name in each database too. - :option:`required_dbs` When committing a change script, SQLAlchemy-migrate will attempt to generate the SQL for all supported databases; normally, if one of them fails - probably because you don't have that database installed - it is ignored and the commit continues, perhaps ending successfully. Databases in this list MUST compile successfully during a commit, or the entire commit will fail. List the databases your application will actually be using to ensure your updates to that database work properly. This must be a list; example: `['postgres', 'sqlite']` - :option:`use_timestamp_numbering` When creating new change scripts, Migrate will stamp the new script with a version number. By default this is latest_version + 1. You can set this to 'true' to tell Migrate to use the UTC timestamp instead. .. versionadded:: 0.7.2 .. _custom-templates: Customize templates =================== Users can pass ``templates_path`` to API functions to provide a customized templates path.
Path should be a collection of templates, like ``migrate.versioning.templates`` package directory. One may also want to specify custom themes. API functions accept ``templates_theme`` for this purpose (which defaults to `default`) Example:: /home/user/templates/manage $ ls default.py_tmpl pylons.py_tmpl /home/user/templates/manage $ migrate manage manage.py --templates_path=/home/user/templates --templates_theme=pylons .. versionadded:: 0.6.0 sqlalchemy-migrate-0.13.0/doc/source/Makefile0000664000175000017500000000447713553670475021145 0ustar zuulzuul00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d _build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html web pickle htmlhelp latex changes linkcheck help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " changes to make an overview over all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" clean: -rm -rf _build/* html: mkdir -p _build/html _build/doctrees $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) _build/html @echo @echo "Build finished. The HTML pages are in _build/html." pickle: mkdir -p _build/pickle _build/doctrees $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) _build/pickle @echo @echo "Build finished; now you can process the pickle files." web: pickle json: mkdir -p _build/json _build/doctrees $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) _build/json @echo @echo "Build finished; now you can process the JSON files." 
htmlhelp: mkdir -p _build/htmlhelp _build/doctrees $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) _build/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in _build/htmlhelp." latex: mkdir -p _build/latex _build/doctrees $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) _build/latex @echo @echo "Build finished; the LaTeX files are in _build/latex." @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." changes: mkdir -p _build/changes _build/doctrees $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) _build/changes @echo @echo "The overview file is in _build/changes." linkcheck: mkdir -p _build/linkcheck _build/doctrees $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) _build/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in _build/linkcheck/output.txt." sqlalchemy-migrate-0.13.0/doc/source/theme/0000775000175000017500000000000013553670602020563 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/doc/source/theme/almodovar.css0000664000175000017500000001123213553670475023270 0ustar zuulzuul00000000000000/* * Original theme modified by Evan Rosson * http://erosson.com/migrate * --- * * Theme Name: Almodovar * Theme URI: http://blog.ratterobert.com/archiv/2005/03/09/almodovar/ * Description: Das Theme basiert im Ursprung auf Michael Heilemanns Kubrick-Template und ist von dem einen oder anderen Gimmick anderer sehr guter Templates inspiriert worden. 
* Version: 0.7 * Author: ratte / robert * Author URI: http://blog.ratterobert.com/ * */ /* Begin Typography & Colors */ body { font-size: 75%; font-family: 'Lucida Grande', 'Trebuchet MS', 'Bitstream Vera Sans', Sans-Serif; background-color: #CCF; color: #333; text-align: center; } #page { background-color: #fff; border: 1px solid #88f; text-align: left; } #content { font-size: 1.2em; margin: 1em; } #content p, #content ul, #content blockquote { line-height: 1.6em; } #footer { border-top: 1px solid #006; margin-top: 2em; } small { font-family: 'Trebuchet MS', Arial, Helvetica, Sans-Serif; font-size: 0.9em; line-height: 1.5em; } h1, h2, h3 { font-family: 'Trebuchet MS', 'Lucida Grande', Verdana, Arial, Sans-Serif; font-weight: bold; margin-top: .7em; margin-bottom: .7em; } h1 { font-size: 2.5em; } h2 { font-size: 2em; } h3 { font-size: 1.5em; } h1, h2, h3 { color: #33a; } h1 a, h2 a, h3 a { color: #33a; } h1, h1 a, h1 a:hover, h1 a:visited, h2, h2 a, h2 a:hover, h2 a:visited, h3, h3 a, h3 a:hover, h3 a:visited, cite { text-decoration: none; } #content p a:visited { color: #004099; /*font-weight: normal;*/ } small, blockquote, strike { color: #33a; } #links ul ul li, #links li { list-style: none; } code { font: 1.1em 'Courier', 'Courier New', Fixed; } acronym, abbr, span.caps { font-size: 0.9em; letter-spacing: .07em; } a { color: #0050FF; /*text-decoration: none;*/ text-decoration:underline; /*font-weight: bold;*/ } a:hover { color: #0080FF; } /* Special case doc-title */ h1.doc-title { text-transform: lowercase; font-size: 4em; margin: 0; } h1.doc-title a { display: block; padding-left: 0.8em; padding-bottom: .5em; padding-top: .5em; margin: 0; border-bottom: 1px #fff solid; } h1.doc-title, h1.doc-title a, h1.doc-title a:visited, h1.doc-title a:hover { text-decoration: none; color: #0050FF; } /* End Typography & Colors */ /* Begin Structure */ body { margin: 0; padding: 0; } #page { background-color: white; margin: 0 auto 0 9em; padding: 0; max-width: 60em; border: 
1px solid #555596; } * html #page { * width: 60em; * } * * #content { * margin: 0 1em 0 3em; * } * * #content h1 { * margin-left: 0; * } * * #footer { * padding: 0 0 0 1px; * margin: 0; * margin-top: 1.5em; * clear: both; * } * * #footer p { * margin: 1em; * } * */* End Structure */ /* Begin Headers */ .description { text-align: center; } /* End Headers */ /* Begin Form Elements */ #searchform { margin: 1em auto; text-align: right; } #searchform #s { width: 100px; padding: 2px; } #searchsubmit { padding: 1px; } /* End Form Elements */ /* Begin Various Tags & Classes */ acronym, abbr, span.caps { cursor: help; } acronym, abbr { border-bottom: 1px dashed #999; } blockquote { margin: 15px 30px 0 10px; padding-left: 20px; border-left: 5px solid #CCC; } blockquote cite { margin: 5px 0 0; display: block; } hr { display: none; } a img { border: none; } .navigation { display: block; text-align: center; margin-top: 10px; margin-bottom: 60px; } /* End Various Tags & Classes*/ span a { color: #CCC; } span a:hover { color: #0050FF; } #navcontainer { margin-top: 0px; padding-top: 0px; width: 100%; background-color: #AAF; text-align: right; } #navlist ul { margin-left: 0; margin-right: 5px; padding-left: 0; white-space: nowrap; } #navlist li { display: inline; list-style-type: none; } #navlist a { padding: 3px 10px; color: #fff; background-color: #339; text-decoration: none; border: 1px solid #44F; font-weight: normal; } #navlist a:hover { color: #000; background-color: #FFF; text-decoration: none; font-weight: normal; } #navlist a:active, #navlist a.selected { padding: 3px 10px; color: #000; background-color: #EEF; text-decoration: none; border: 1px solid #CCF; font-weight: normal; } sqlalchemy-migrate-0.13.0/doc/source/theme/layout.html0000664000175000017500000000543113553670475023001 0ustar zuulzuul00000000000000 ${title}

${home_title}


sqlalchemy-migrate-0.13.0/doc/source/theme/layout.css0000664000175000017500000000404713553670475022627 0ustar zuulzuul00000000000000@import url("pudge.css"); @import url("almodovar.css"); /* Basic Style ----------------------------------- */ h1.pudge-member-page-heading { font-size: 300%; } h4.pudge-member-page-subheading { font-size: 130%; font-style: italic; margin-top: -2.0em; margin-left: 2em; margin-bottom: .3em; color: #0050CC; } p.pudge-member-blurb { font-style: italic; font-weight: bold; font-size: 120%; margin-top: 0.2em; color: #999; } p.pudge-member-parent-link { margin-top: 0; } /*div.pudge-module-doc { max-width: 45em; }*/ div.pudge-section { margin-left: 2em; max-width: 45em; } /* Section Navigation ----------------------------------- */ div#pudge-section-nav { margin: 1em 0 1.5em 0; padding: 0; height: 20px; } div#pudge-section-nav ul { border: 0; margin: 0; padding: 0; list-style-type: none; text-align: center; border-right: 1px solid #aaa; } div#pudge-section-nav ul li { display: block; float: left; text-align: center; padding: 0; margin: 0; } div#pudge-section-nav ul li .pudge-section-link, div#pudge-section-nav ul li .pudge-missing-section-link { background: #aaa; width: 9em; height: 1.8em; border: 1px solid #bbb; padding: 0; margin: 0 0 10px 0; color: #ddd; text-decoration: none; display: block; text-align: center; font: 11px/20px "Verdana", "Lucida Grande"; cursor: hand; text-transform: lowercase; } div#pudge-section-nav ul li a:hover { color: #000; background: #fff; } div#pudge-section-nav ul li .pudge-section-link { background: #888; color: #eee; border: 1px solid #bbb; } /* Module Lists ----------------------------------- */ dl.pudge-module-list dt { font-style: normal; font-size: 110%; } dl.pudge-module-list dd { color: #555; } /* Misc Overrides */ .rst-doc p.topic-title a { color: #777; } .rst-doc ul.auto-toc a, .rst-doc div.contents a { color: #333; } pre { background: #eee; } .rst-doc dl dt { color: #444; margin-top: 1em; font-weight: 
bold; } .rst-doc dl dd { margin-top: .2em; } .rst-doc hr { display: block; margin: 2em 0; } sqlalchemy-migrate-0.13.0/doc/source/index.rst0000664000175000017500000001124313553670475021333 0ustar zuulzuul00000000000000:mod:`migrate` - SQLAlchemy Migrate (schema change management) ============================================================== .. module:: migrate .. moduleauthor:: Evan Rosson :Author: Evan Rosson :Maintainer: Domen Kožar :Maintainer: Jan Dittberner :Source Code: https://github.com/stackforge/sqlalchemy-migrate :Documentation: https://sqlalchemy-migrate.readthedocs.org/ :Issues: https://bugs.launchpad.net/sqlalchemy-migrate :Generated: |today| :License: MIT :Version: |release| .. topic:: Overview Inspired by Ruby on Rails' migrations, SQLAlchemy Migrate provides a way to deal with database schema changes in SQLAlchemy_ projects. Migrate was started as part of `Google's Summer of Code`_ by Evan Rosson, mentored by Jonathan LaCour. The project was taken over by a small group of volunteers when Evan had no free time for the project. It is now hosted as a `Github project`_. During the hosting change the project was renamed to SQLAlchemy Migrate. Currently, sqlalchemy-migrate supports Python versions from 2.6 to 2.7. SQLAlchemy Migrate 0.7.2 supports SQLAlchemy 0.6.x and 0.7.x branches. Support for Python 2.4 and 2.5 as well as SQLAlchemy 0.5.x has been dropped after sqlalchemy-migrate 0.7.1. .. warning:: Version **0.6** broke backward compatibility, please read :ref:`changelog ` for more info. Download and Development ------------------------ .. toctree:: download credits .. _dialect-support: Dialect support --------------- .. 
list-table:: :header-rows: 1 :widths: 25 10 10 10 10 10 11 10 * - Operation / Dialect - :ref:`sqlite ` - :ref:`postgres ` - :ref:`mysql ` - :ref:`oracle ` - :ref:`firebird ` - mssql - DB2 * - :ref:`ALTER TABLE RENAME TABLE ` - yes - yes - yes - yes - no - not supported - unknown * - :ref:`ALTER TABLE RENAME COLUMN ` - yes (workaround) [#1]_ - yes - yes - yes - yes - not supported - unknown * - :ref:`ALTER TABLE ADD COLUMN ` - yes (workaround) [#2]_ - yes - yes - yes - yes - not supported - unknown * - :ref:`ALTER TABLE DROP COLUMN ` - yes (workaround) [#1]_ - yes - yes - yes - yes - not supported - unknown * - :ref:`ALTER TABLE ALTER COLUMN ` - yes (workaround) [#1]_ - yes - yes - yes (with limitations) [#3]_ - yes [#4]_ - not supported - unknown * - :ref:`ALTER TABLE ADD CONSTRAINT ` - partial (workaround) [#1]_ - yes - yes - yes - yes - not supported - unknown * - :ref:`ALTER TABLE DROP CONSTRAINT ` - partial (workaround) [#1]_ - yes - yes - yes - yes - not supported - unknown * - :ref:`RENAME INDEX ` - no - yes - no - yes - yes - not supported - unknown .. [#1] Table is renamed to temporary table, new table is created followed by INSERT statements. .. [#2] See http://www.sqlite.org/lang_altertable.html for more information. In cases not supported by sqlite, table is renamed to temporary table, new table is created followed by INSERT statements. .. [#3] You can not change datatype or rename column if table has NOT NULL data, see http://blogs.x2line.com/al/archive/2005/08/30/1231.aspx for more information. .. [#4] Changing nullable is not supported Tutorials -------------- List of useful tutorials: * `Using migrate with Elixir `_ * `Developing with migrations `_ User guide ------------- SQLAlchemy Migrate is split into two parts, database schema versioning (:mod:`migrate.versioning`) and database migration management (:mod:`migrate.changeset`). The versioning API is available as the :ref:`migrate ` command. .. toctree:: versioning changeset tools faq glossary .. 
_`google's summer of code`: http://code.google.com/soc .. _`Github project`: https://github.com/stackforge/sqlalchemy-migrate .. _sqlalchemy: http://www.sqlalchemy.org API Documentation ------------------ .. toctree:: api Changelog --------- .. toctree:: changelog Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` sqlalchemy-migrate-0.13.0/doc/source/changeset.rst0000664000175000017500000001724113553670475022161 0ustar zuulzuul00000000000000.. _changeset-system: .. highlight:: python ************************** Database schema migrations ************************** .. currentmodule:: migrate.changeset.schema Importing :mod:`migrate.changeset` adds some new methods to existing SQLAlchemy objects, as well as creating functions of its own. Most operations can be done either by a method or a function. Methods match SQLAlchemy's existing API and are more intuitive when the object is available; functions allow one to make changes when only the name of an object is available (for example, adding a column to a table in the database without having to load that table into Python). Changeset operations can be used independently of SQLAlchemy Migrate's :ref:`versioning `. For more information, see the API documentation for :mod:`migrate.changeset`. .. _summary-changeset-api: Here are some direct links to the relevant sections of the API documentation: * :meth:`Create a column ` * :meth:`Drop a column ` * :meth:`Alter a column ` (follow the link for a list of supported changes) * :meth:`Rename a table ` * :meth:`Rename an index ` * :meth:`Create primary key constraint ` * :meth:`Drop primary key constraint ` * :meth:`Create foreign key constraint ` * :meth:`Drop foreign key constraint ` * :meth:`Create unique key constraint ` * :meth:`Drop unique key constraint ` * :meth:`Create check key constraint ` * :meth:`Drop check key constraint ` .. note:: Many of the schema modification methods above take an ``alter_metadata`` keyword parameter.
This parameter defaults to `True`. The following sections give examples of how to make various kinds of schema changes. Column ====== Given a standard SQLAlchemy table: .. code-block:: python table = Table('mytable', meta, Column('id', Integer, primary_key=True), ) table.create() .. _column-create: You can create a column with :meth:`~ChangesetColumn.create`: .. code-block:: python col = Column('col1', String, default='foobar') col.create(table, populate_default=True) # Column is added to table based on its name assert col is table.c.col1 # col1 is populated with 'foobar' because of `populate_default` .. _column-drop: .. note:: You can pass `primary_key_name`, `index_name` and `unique_name` to the :meth:`~ChangesetColumn.create` method to issue ``ALTER TABLE ADD CONSTRAINT`` after changing the column. For multi-column constraints and other advanced configuration, check the :ref:`constraint tutorial `. .. versionadded:: 0.6.0 You can drop a column with :meth:`~ChangesetColumn.drop`: .. code-block:: python col.drop() .. _column-alter: You can alter a column with :meth:`~ChangesetColumn.alter`: .. code-block:: python col.alter(name='col2') # Renaming a column affects how it's accessed by the table object assert col is table.c.col2 # Other properties can be modified as well col.alter(type=String(42), default="life, the universe, and everything", nullable=False) # Given another column object, col1.alter(col2), col1 will be changed to match col2 col.alter(Column('col3', String(77), nullable=True)) assert col.nullable assert table.c.col3 is col .. deprecated:: 0.6.0 Passing a :class:`~sqlalchemy.schema.Column` to :meth:`ChangesetColumn.alter` is deprecated. Pass in explicit parameters, such as `name` for a new column name and `type` for a new column type, instead. Do **not** include any parameters that are not changed. .. _table-rename: Table ===== SQLAlchemy includes support for `creating and dropping`__ tables.
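As a quick refresher, plain SQLAlchemy table creation and removal can be sketched as follows (a minimal example against a throwaway in-memory SQLite database; it assumes a reasonably recent SQLAlchemy where ``Table.create``/``Table.drop`` take the engine as the bind and :func:`sqlalchemy.inspect` provides ``has_table``):

```python
from sqlalchemy import (Table, Column, Integer, MetaData,
                        create_engine, inspect)

# Throwaway in-memory database, for illustration only.
engine = create_engine('sqlite:///:memory:')
meta = MetaData()

table = Table('mytable', meta, Column('id', Integer, primary_key=True))

table.create(engine)                             # emits CREATE TABLE
assert inspect(engine).has_table('mytable')

table.drop(engine)                               # emits DROP TABLE
assert not inspect(engine).has_table('mytable')
```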
Tables can be renamed with :meth:`~ChangesetTable.rename`: .. code-block:: python table.rename('newtablename') .. __: http://www.sqlalchemy.org/docs/core/schema.html#creating-and-dropping-database-tables .. currentmodule:: migrate.changeset.constraint .. _index-rename: Index ===== SQLAlchemy supports `creating and dropping`__ indexes. Indexes can be renamed using :meth:`~migrate.changeset.schema.ChangesetIndex.rename`: .. code-block:: python index.rename('newindexname') .. __: http://www.sqlalchemy.org/docs/core/schema.html#indexes .. _constraint-tutorial: Constraint ========== .. currentmodule:: migrate.changeset.constraint SQLAlchemy supports creating or dropping constraints at the same time a table is created or dropped. SQLAlchemy Migrate adds support for creating and dropping :class:`~sqlalchemy.schema.PrimaryKeyConstraint`, :class:`~sqlalchemy.schema.ForeignKeyConstraint`, :class:`~sqlalchemy.schema.CheckConstraint` and :class:`~sqlalchemy.schema.UniqueConstraint` constraints independently using ``ALTER TABLE`` statements. The following points hold for all constraint classes: #. Make sure you import the relevant constraint class from :mod:`migrate` and not from :mod:`sqlalchemy`, for example: .. code-block:: python from migrate.changeset.constraint import ForeignKeyConstraint The classes in that module have the extra :meth:`~ConstraintChangeset.create` and :meth:`~ConstraintChangeset.drop` methods. #. You can also use constraints as in SQLAlchemy. In this case, passing the table argument explicitly is required: .. code-block:: python cons = PrimaryKeyConstraint('id', 'num', table=self.table) # Create the constraint cons.create() # Drop the constraint cons.drop() You can also pass in :class:`~sqlalchemy.schema.Column` objects (and the table argument can be left out): .. code-block:: python cons = PrimaryKeyConstraint(col1, col2) #. Some dialects support the ``CASCADE`` option when dropping constraints: ..
code-block:: python cons = PrimaryKeyConstraint(col1, col2) # Create the constraint cons.create() # Drop the constraint cons.drop(cascade=True) .. note:: SQLAlchemy Migrate will try to guess the name of the constraints for databases, but if it's something other than the default, you'll need to give its name. Best practice is to always name your constraints. Note that Oracle requires that you state the name of the constraint to be created or dropped. Examples --------- Primary key constraints: .. code-block:: python from migrate.changeset.constraint import PrimaryKeyConstraint cons = PrimaryKeyConstraint(col1, col2) # Create the constraint cons.create() # Drop the constraint cons.drop() Foreign key constraints: .. code-block:: python from migrate.changeset.constraint import ForeignKeyConstraint cons = ForeignKeyConstraint([table.c.fkey], [othertable.c.id]) # Create the constraint cons.create() # Drop the constraint cons.drop() Check constraints: .. code-block:: python from migrate.changeset.constraint import CheckConstraint cons = CheckConstraint('id > 3', columns=[table.c.id]) # Create the constraint cons.create() # Drop the constraint cons.drop() Unique constraints: .. code-block:: python from migrate.changeset.constraint import UniqueConstraint cons = UniqueConstraint('id', 'age', table=self.table) # Create the constraint cons.create() # Drop the constraint cons.drop() sqlalchemy-migrate-0.13.0/doc/source/credits.rst0000664000175000017500000000466413553670475021672 0ustar zuulzuul00000000000000.. _credits: Credits ------- sqlalchemy-migrate has been created by: - Evan Rosson Thanks to Google for sponsoring Evan's initial Summer of Code project. The project is maintained by the following people: - Domen Kožar - Jan Dittberner The following people contributed patches, advice or bug reports that helped improve sqlalchemy-migrate: .. 
hlist:: :columns: 3 - Adam Lowry - Adomas Paltanavicius - Alexander Artemenko - Alex Favaro - Andrew Bialecki - Andrew Grossman - Andrew Lenards - Andrew Svetlov - Andrey Gladilin - Andronikos Nedos - Antoine Pitrou - Ben Hesketh - Ben Keroack - Benjamin Johnson - Branko Vukelic - Bruno Lopes - Ches Martin - Chris Percious - Chris Withers - Christian Simms - Christophe de Vienne - Christopher Grebs - Christopher Lee - Dan Getelman - David Kang - Dustin J. Mitchell - Emil Kroymann - Eyal Sorek - Florian Apolloner - Fred Lin - Gabriel de Perthuis - Graham Higgins - Ilya Shabalin - James Mills - Jarrod Chesney - Jason Michalski - Jason R. Coombs - Jason Yamada-Hanff - Jay Pipes - Jayson Vantuyl - Jeremy Cantrell - Jeremy Slade - Jeroen Ruigrok van der Werven - Joe Heck - Jonas Baumann - Jonathan Ellis - Jorge Vargas - Joshua Ginsberg - Jude Nagurney - Juliusz Gonera - Kevin Dangoor - Kristaps Rāts - Kristian Kvilekval - Kumar McMillan - Landon J. Fuller - Lev Shamardin - Lorin Hochstein - Luca Barbato - Lukasz Zukowski - Mahmoud Abdelkader - Marica Odagaki - Marius Gedminas - Mark Friedenbach - Mark McLoughlin - Martin Andrews - Mathieu Leduc-Hamel - Michael Bayer - Michael Elsdörfer - Mikael Lepistö - Nathan Wright - Nevare Stark - Nicholas Retallack - Nick Barendt - Patrick Shields - Paul Bonser - Paul Johnston - Pawel Bylina - Pedro Algarvio - Peter Strömberg - Poli García - Pradeep Kumar - Rafał Kos - Robert Forkel - Robert Schiele - Robert Sudwarts - Romy Maxwell - Ryan Wilcox - Sami Dalouche - Sergiu Toarca - Simon Engledew - Stephen Emslie - Sylvain Prat - Toshio Kuratomi - Trey Stout - Vasiliy Astanin - Yeeland Chen - Yuen Ho Wong - asuffield (at) gmail (dot) com If you helped us in the past and miss your name please tell us about your contribution and we will add you to the list. 
sqlalchemy-migrate-0.13.0/doc/source/tools.rst0000664000175000017500000000077313553670475021372 0ustar zuulzuul00000000000000Repository migration (0.4.5 -> 0.5.4) ================================================ .. index:: repository migration :command:`migrate_repository.py` should be used to migrate your repository from a version before 0.4.5 of SQLAlchemy migrate to the current version. .. module:: migrate.versioning.migrate_repository :synopsis: Tool for migrating pre 0.4.5 repositories to current layout Running :command:`migrate_repository.py` is as easy as: :samp:`migrate_repository.py {repository_directory}` sqlalchemy-migrate-0.13.0/doc/source/changelog.rst0000664000175000017500000002662713553670475022157 0ustar zuulzuul000000000000000.7.3 (201x-xx-xx) --------------------------- Changes ****************** - Documentation ****************** - Features ****************** - Fixed Bugs ****************** - #140: getDiffOfModelAgainstModel is not passing excludeTables correctly (patch by Jason Michalski) - #72: Regression against issue #38, migrate drops engine reference (patch by asuffield@gmail.com) - #154: versioning/schema.py imports deprecated sqlalchemy.exceptions (patch by Alex Favaro) - fix deprecation warning using MetaData.reflect instead of reflect=True constructor argument - fix test failure by removing unsupported length argument for Text column 0.7.2 (2011-11-01) --------------------------- Changes ****************** - support for SQLAlchemy 0.5.x has been dropped - Python 2.6 is the minimum supported Python version Documentation ****************** - add :ref:`credits ` for contributors - add :ref:`glossary ` - improve :ref:`advice on testing production changes ` - improve Sphinx markup - refine :ref:`Database Schema Versioning ` texts, add example for adding/dropping columns (#104) - add more developer related information to :ref:`development` section - use sphinxcontrib.issuetracker to link to Google Code issue tracker Features
****************** - improved :pep:`8` compliance (#122) - optionally number versions with timestamps instead of sequences (partly pulled from Pete Keen) - allow descriptions in SQL change script filenames (by Pete Keen) - improved model generation Fixed Bugs ****************** - #83: api test downgrade/upgrade does not work with sql scripts (pulled from Yuen Ho Wong) - #105: passing a unicode string as the migrate repository fails (add regression test) - #113: make_update_script_for_model fails with AttributeError: 'SchemaDiff' object has no attribute 'colDiffs' (patch by Jeremy Cantrell) - #118: upgrade and downgrade functions are reversed when using the command "make_update_script_for_model" (patch by Jeremy Cantrell) - #121: manage.py should use the "if __name__=='__main__'" trick - #123: column creation in make_update_script_for_model and required API change (by Gabriel de Perthuis) - #124: compare_model_to_db gets confused by sqlite_sequence (pulled from Dustin J. Mitchell) - #125: drop column does not work on persistent sqlite databases (pulled from Benoît Allard) - #128: table rename failure with sqlalchemy 0.7.x (patch by Mark McLoughlin) - #129: update documentation and help text (pulled from Yuen Ho Wong) 0.7.1 (2011-05-27) --------------------------- Fixed Bugs ****************** - docs/_build is excluded from source tarball builds - use table.append_column() instead of column._set_parent() in ChangesetColumn.add_to_table() - fix source and issue tracking URLs in documentation 0.7 (2011-05-27) --------------------------- Features ****************** - compatibility with SQLAlchemy 0.7 - add :py:data:`migrate.__version__` Fixed bugs ****************** - fix compatibility issues with SQLAlchemy 0.7 0.6.1 (2011-02-11) --------------------------- Features ****************** - implemented column adding when foreign keys are present for sqlite - implemented columns adding with unique constraints for sqlite - implemented adding unique and foreign key 
constraints to columns for sqlite - remove experimental `alter_metadata` parameter Fixed bugs ****************** - updated tests for Python 2.7 - repository keyword in :py:func:`migrate.versioning.api.version_control` can also be unicode - added if main condition for manage.py script - make :py:func:`migrate.changeset.constraint.ForeignKeyConstraint.autoname` work with SQLAlchemy 0.5 and 0.6 - fixed case sensitivity in setup.py dependencies - moved :py:mod:`migrate.changeset.exceptions` and :py:mod:`migrate.versioning.exceptions` to :py:mod:`migrate.exceptions` - cleared up test output and improved testing of deprecation warnings. - some documentation fixes - #107: fixed syntax error in genmodel.py - #96: fixed bug with column dropping in sqlite - #94: fixed bug that prevented non-unique indexes being created - fixed bug with column dropping involving foreign keys - fixed bug when dropping columns with unique constraints in sqlite - rewrite of the schema diff internals, now supporting column differences in addition to missing columns and tables. - fixed bug where passing an empty list to :py:func:`migrate.versioning.shell.main` failed - #108: Fixed issues with firebird support. 0.6 (11.07.2010) --------------------------- .. _backwards-06: .. warning:: **Backward incompatible changes**: - :py:func:`migrate.versioning.api.test` and schema comparison functions now all accept `url` as first parameter and `repository` as second. - python upgrade/downgrade scripts do not import `migrate_engine` magically, but receive the engine as the only parameter to the function (e.g. ``def upgrade(migrate_engine):``) - :py:meth:`Column.alter ` does not accept `current_name` anymore, it extracts the name from the old column.
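The new-style change script signature mentioned in the warning above can be sketched as follows (a minimal, self-contained illustration; the ``account`` table is hypothetical and a throwaway in-memory SQLite engine stands in for the engine migrate would pass in):

```python
# New-style change script: the engine is received as an explicit
# parameter instead of a magically imported 'migrate_engine'.
from sqlalchemy import Table, Column, Integer, MetaData, create_engine

meta = MetaData()
account = Table('account', meta, Column('id', Integer, primary_key=True))

def upgrade(migrate_engine):
    # Since 0.6, migrate calls this with the engine as the only argument.
    account.create(migrate_engine)

def downgrade(migrate_engine):
    account.drop(migrate_engine)

# Exercising both functions directly against a throwaway engine:
engine = create_engine('sqlite:///:memory:')
upgrade(engine)
downgrade(engine)
```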
Features ************** - added support for :ref:`firebird ` - added option to define custom templates through option ``--templates_path`` and ``--templates_theme``, read more in :ref:`tutorial section ` - use Python logging for output, can be shut down by passing ``--disable_logging`` to :py:func:`migrate.versioning.shell.main` - deprecated `alter_column` comparing of columns. Just use explicit parameter change. - added support for SQLAlchemy 0.6.x by Michael Bayer - Constraint classes have `cascade=True` keyword argument to issue ``DROP CASCADE`` where supported - added :py:class:`~migrate.changeset.constraint.UniqueConstraint`/ :py:class:`~migrate.changeset.constraint.CheckConstraint` and corresponding create/drop methods - API `url` parameter can also be an :py:class:`Engine` instance (this usage is discouraged though sometimes necessary) - code coverage is up to 80% with more than 100 tests - alter, create, drop column / rename table / rename index constructs now accept `alter_metadata` parameter. If True, it will modify Column/Table objects according to changes. Otherwise, everything will be untouched. 
- added `populate_default` bool argument to :py:meth:`Column.create ` which issues corresponding UPDATE statements to set defaults after column creation - :py:meth:`Column.create ` accepts `primary_key_name`, `unique_name` and `index_name` as string values which are used as the constraint name when adding a column Fixed bugs ***************** - :term:`ORM` methods now accept a `connection` parameter commonly used for transactions - `server_defaults` passed to :py:meth:`Column.create ` are now issued correctly - use SQLAlchemy quoting system to avoid name conflicts (#32) - complete refactoring of :py:class:`~migrate.changeset.schema.ColumnDelta` (#23) - partial refactoring of :py:mod:`migrate.changeset` package - fixed bug when :py:meth:`Column.alter `\(server_default='string') was not properly set - constraints passed to :py:meth:`Column.create ` are correctly interpreted (``ALTER TABLE ADD CONSTRAINT`` is issued after ``ALTER TABLE ADD COLUMN``) - script names don't break with a dot in the name Documentation ********************* - :ref:`dialect support ` table was added to documentation - major update to documentation 0.5.4 ----- - fixed preview_sql parameter for downgrade/upgrade. Now it prints SQL if the step is a SQL script, and runs the step with a mocked engine to only print SQL statements if ORM is used.
  [Domen Kozar]
- use entrypoints terminology to specify dotted model names (module.model:User)
  [Domen Kozar]
- added engine_dict and engine_arg_* parameters to all api functions (deprecated echo)
  [Domen Kozar]
- make --echo parameter a bit more forgiving (better Python API support)
  [Domen Kozar]
- apply patch to refactor cmd line parsing for Issue 54 by Domen Kozar

0.5.3
-----

- apply patch for Issue 29 by Jonathan Ellis
- fix Issue 52 by removing needless parameters from object.__new__ calls

0.5.2
-----

- move sphinx and nose dependencies to extras_require and tests_require
- integrate patch for Issue 36 by Kumar McMillan
- fix unit tests
- mark ALTER TABLE ADD COLUMN with FOREIGN KEY as not supported by SQLite

0.5.1.2
-------

- corrected build

0.5.1.1
-------

- add documentation in tarball
- add a MANIFEST.in

0.5.1
-----

- SA 0.5.x support. SQLAlchemy < 0.5.1 not supported anymore.
- use nose instead of py.test for testing
- Added --echo=True option for all commands, which will make the sqlalchemy connection echo SQL statements.
- Better PostgreSQL support, especially for schemas.
- modification to the downgrade command to simplify the calling (old way still works just fine)
- improved support for SQLite
- add support for check constraints (EXPERIMENTAL)
- print statements removed from APIs
- improved sphinx based documentation
- removal of old commented code
- :pep:`8` clean code

0.4.5
-----

- work by Christian Simms to compare metadata against databases
- new repository format
- a repository format migration tool is in migrate/versioning/migrate_repository.py
- support for default SQL scripts
- EXPERIMENTAL support for dumping database to model

0.4.4
-----

- patch by pwannygoodness for Issue #15
- fixed unit tests to work with py.test 0.9.1
- fix for a SQLAlchemy deprecation warning

0.4.3
-----

- patch by Kevin Dangoor to handle database versions as packages and ignore their __init__.py files in version.py
- fixed unit tests and Oracle changeset support by Christian Simms

0.4.2
-----

- package name is sqlalchemy-migrate again to make pypi work
- make import of sqlalchemy's SchemaGenerator work regardless of previous imports

0.4.1
-----

- setuptools patch by Kevin Dangoor
- re-rename module to migrate

0.4.0
-----

- SA 0.4.0 compatibility thanks to Christian Simms
- all unit tests are working now (with sqlalchemy >= 0.3.10)

0.3
---

- SA 0.3.10 compatibility

0.2.3
-----

- Removed lots of SA monkeypatching in Migrate's internals
- SA 0.3.3 compatibility
- Removed logsql (trac issue 75)
- Updated py.test version from 0.8 to 0.9; added a download link to setup.py
- Fixed incorrect "function not defined" error (trac issue 88)
- Fixed SQLite and .sql scripts (trac issue 87)

0.2.2
-----

- Deprecated driver(engine) in favor of engine.name (trac issue 80)
- Deprecated logsql (trac issue 75)
- Comments in .sql scripts don't make things fail silently now (trac issue 74)
- Errors while downgrading (and probably other places) are shown on their own line
- Created mailing list and announcements list, updated documentation accordingly
- Automated tests now require py.test (trac issue 66)
- Documentation fix to .sql script commits (trac issue 72)
- Fixed a pretty major bug involving logengine, dealing with commits/tests (trac issue 64)
- Fixes to the online docs - default DB versioning table name (trac issue 68)
- Fixed the engine name in the scripts created by the command 'migrate script' (trac issue 69)
- Added Evan's email to the online docs

0.2.1
-----

- Created this changelog
- Now requires (and is now compatible with) SA 0.3
- Commits across filesystems now allowed (shutil.move instead of os.rename) (trac issue 62)

sqlalchemy-migrate-0.13.0/MANIFEST.in0000664000175000017500000000026413553670475017164 0ustar zuulzuul00000000000000
include AUTHORS
include ChangeLog
include README
recursive-include docs *
recursive-include migrate *
recursive-include tests *
global-exclude *pyc
recursive-exclude docs/_build *

sqlalchemy-migrate-0.13.0/test_db.cfg0000664000175000017500000000134013553670475017527 0ustar zuulzuul00000000000000
# test_db.cfg
#
# This file contains a list of connection strings which will be used by
# database tests. Tests will be executed once for each string in this file.
# You should be sure that the database used for the test doesn't contain any
# important data. See README for more information.
#
# The string '__tmp__' is substituted for a temporary file in each connection
# string. This is useful for sqlite tests.
sqlite:///__tmp__
postgresql://openstack_citest:openstack_citest@localhost/openstack_citest
mysql://openstack_citest:openstack_citest@localhost/openstack_citest
#oracle://scott:tiger@localhost
#firebird://scott:tiger@localhost//var/lib/firebird/databases/test_migrate
#ibm_db_sa://migrate:migrate@localhost:50000/migrate

sqlalchemy-migrate-0.13.0/setup.cfg0000664000175000017500000000201213553670602017230 0ustar zuulzuul00000000000000
[metadata]
name = sqlalchemy-migrate
summary = Database schema migration for SQLAlchemy
description-file = README.rst
author = OpenStack
author-email = openstack-discuss@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.3
    Programming Language :: Python :: 3.4
    Programming Language :: Python :: 3.5
    Programming Language :: Python :: 3.6

[files]
packages = migrate

[entry_points]
console_scripts =
    migrate = migrate.versioning.shell:main
    migrate-repository = migrate.versioning.migrate_repository:main

[build_sphinx]
all_files = 1
build-dir = doc/build
source-dir = doc/source

[egg_info]
tag_build =
tag_date = 0

sqlalchemy-migrate-0.13.0/requirements.txt0000664000175000017500000000072413553670475020703 0ustar zuulzuul00000000000000
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=1.8
# never put a cap on this, *ever*, sqla versions are handled via
# tox, and if SQLA is capped it will only make it so we aren't testing
# against all the versions we are compatible with.
SQLAlchemy>=0.9.6 decorator six>=1.7.0 sqlparse Tempita>=0.4 sqlalchemy-migrate-0.13.0/.zuul.yaml0000664000175000017500000000165113553670475017370 0ustar zuulzuul00000000000000- project: templates: - docs-on-readthedocs - openstack-python-jobs - openstack-python35-jobs - openstack-python36-jobs vars: rtd_webhook_id: '61274' check: jobs: - sqlalchemy-migrate-tox-py27sa07 - sqlalchemy-migrate-devstack: voting: false gate: jobs: - sqlalchemy-migrate-tox-py27sa07 - job: name: sqlalchemy-migrate-tox-py27sa07 parent: tox description: | Run tests for sqlalchemy-migrate project. Uses tox with the ``py27sa07`` environment. vars: tox_envlist: py27sa07 - job: name: sqlalchemy-migrate-devstack parent: legacy-dsvm-base run: playbooks/sqlalchemy-migrate-devstack-dsvm/run.yaml post-run: playbooks/sqlalchemy-migrate-devstack-dsvm/post.yaml timeout: 10800 required-projects: - openstack/devstack - openstack/devstack-gate - x/sqlalchemy-migrate sqlalchemy-migrate-0.13.0/COPYING0000664000175000017500000000211613553670475016457 0ustar zuulzuul00000000000000The MIT License Copyright (c) 2009 Evan Rosson, Jan Dittberner, Domen Kožar Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. sqlalchemy-migrate-0.13.0/test_db_py3.cfg0000664000175000017500000000134213553670475020324 0ustar zuulzuul00000000000000# test_db.cfg # # This file contains a list of connection strings which will be used by # database tests. Tests will be executed once for each string in this file. # You should be sure that the database used for the test doesn't contain any # important data. See README for more information. # # The string '__tmp__' is substituted for a temporary file in each connection # string. This is useful for sqlite tests. sqlite:///__tmp__ #postgresql://openstack_citest:openstack_citest@localhost/openstack_citest #mysql://openstack_citest:openstack_citest@localhost/openstack_citest #oracle://scott:tiger@localhost #firebird://scott:tiger@localhost//var/lib/firebird/databases/test_migrate #ibm_db_sa://migrate:migrate@localhost:50000/migrate sqlalchemy-migrate-0.13.0/TODO0000664000175000017500000000221413553670475016113 0ustar zuulzuul00000000000000- better SQL scripts support (testing, source viewing) make_update_script_for_model: - calculated differences between models are actually differences between metas - columns are not compared? - even if two "models" are equal, it doesn't yield so - controlledschema.drop() drops whole migrate table, maybe there are some other repositories bound to it! 
Unknown milestone

- update repository migration script
- required_dbs tests
- dot instead of colon in dotted module name should have better deprecation warning
- change "model" to "metadata" for all highlevel api stuff
- bash autocompletion
- clarify ImportError in load_model
- implement debug flag to jump into the error
- sqlautocode support
- serializers (json, yaml, etc)

0.6.1

- transaction support
- verbose output on migration failures
- interactive migration script resolution?
- backend for versioning management

Documentation updates in 0.6.1

- glossary
- add story to changeset tutorial
- write documentation how to test all databases

Transaction support in 0.6.1

- script.run should call engine.transaction()
- API should support engine and connection as well
- tests for transactions

sqlalchemy-migrate-0.13.0/tox.ini0000664000175000017500000001144413553670475016741 0ustar zuulzuul00000000000000
[tox]
minversion = 1.6
skipsdist = True
envlist = py27,py27sa07,py27sa08,py27sa09,py33,py34,py35,py36,pep8

[testenv]
usedevelop = True
whitelist_externals = bash
# Avoid psycopg2 wheel package rename warnings by not using the binary.
install_command = pip install --no-binary psycopg2 {opts} {packages}
setenv = VIRTUAL_ENV={envdir}
deps =
    -r{toxinidir}/requirements.txt
    -r{toxinidir}/test-requirements.txt
commands = bash tools/pretty_tox.sh '{posargs}'

[testenv:py27]
deps =
    sqlalchemy>=0.9
    -r{toxinidir}/test-requirements.txt

[testenv:py27sa07]
basepython = python2.7
deps =
    sqlalchemy>=0.7,<=0.7.99
    -r{toxinidir}/test-requirements.txt

[testenv:py27sa08]
basepython = python2.7
deps =
    sqlalchemy>=0.8,<=0.8.99
    -r{toxinidir}/test-requirements.txt

[testenv:py27sa09]
basepython = python2.7
deps =
    sqlalchemy>=0.9,<=0.9.99
    -r{toxinidir}/test-requirements.txt

[testenv:py33]
deps =
    sqlalchemy>=0.9
    -r{toxinidir}/test-requirements.txt

[testenv:py34]
deps =
    sqlalchemy>=0.9
    -r{toxinidir}/test-requirements.txt

[testenv:py35]
deps =
    sqlalchemy>=0.9
    -r{toxinidir}/test-requirements.txt

[testenv:py36]
deps =
    sqlalchemy>=0.9
    -r{toxinidir}/test-requirements.txt

[testenv:pep8]
commands = flake8

[testenv:docs]
basepython = python3
deps = -r{toxinidir}/doc/requirements.txt
commands = sphinx-build doc/source doc/build/html

[testenv:venv]
commands = {posargs}

[testenv:cover]
setenv = VIRTUAL_ENV={envdir}
commands = python setup.py testr --slowest --testr-args='{posargs}'

[flake8]
# E111 indentation is not a multiple of four
# E113 unexpected indentation
# E121 continuation line indentation is not a multiple of four
# E122 continuation line missing indentation or outdented
# E123 closing bracket does not match indentation of opening bracket's line
# E124 closing bracket does not match visual indentation
# E125 continuation line does not distinguish itself from next logical line
# E126 continuation line over-indented for hanging indent
# E127 continuation line over-indented for visual indent
# E128 continuation line under-indented for visual indent
# E129 visually indented line with same indent as next logical line
# E131 continuation line unaligned for hanging indent
# E202 whitespace before ')'
# E203 whitespace before ','
# E225 missing whitespace around operator
# E226 missing whitespace around arithmetic operator
# E228 missing whitespace around modulo operator
# E231 missing whitespace after ','
# E265 block comment should start with '# '
# H234 assertEquals is deprecated, use assertEqual
# E251 unexpected spaces around keyword / parameter equals
# E261 at least two spaces before inline comment
# E272 multiple spaces before keyword
# E301 expected 1 blank line, found 0
# E302 expected 2 blank lines, found 1
# E303 too many blank lines (3)
# E401 multiple imports on one line
# E501 line too long ( > 79 characters)
# E502 the backslash is redundant between brackets
# E702 multiple statements on one line (semicolon)
# E712 comparison to True should be 'if cond is True:' or 'if cond:'
# F401 '' imported but unused
# F403 'from migrate.exceptions import *' used; unable to detect undefined names
# F811 redefinition of unused '' from line
# F821 undefined name ''
# F841 local variable '' is assigned to but never used
# H101 Use TODO(NAME)
# H201 no 'except:' at least use 'except Exception:'
# H202 assertRaises Exception too broad
# H233 Python 3.x incompatible use of print operator
# H301 one import per line
# H302 import only modules. '' does not import a module
# H306 imports not in alphabetical order
# H401 docstring should not start with a space
# H402 one line docstring needs punctuation.
# H403 multi line docstring end on new line # H404 multi line docstring should start with a summary # H405 multi line docstring summary not separated with an empty line # H501 Do not use locals() for string formatting # W391 blank line at end of file ignore = E111,E113,E121,E122,E123,E124,E125,E126,E127,E128,E129,E131,E202,E203,E225,E226,E228,E231,E251,E261,E265,E272,E301,E302,E303,E401,E501,E502,E702,E712,F401,F403,F811,F821,F841,H101,H201,H202,H233,H234,H301,H302,H306,H401,H402,H403,H404,H405,H501,W391 show-source = true builtins = _ exclude=.venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,tools,build [testenv:bindep] # Do not install any requirements. We want this to be fast and work even if # system dependencies are missing, since it's used to tell you what system # dependencies are missing! This also means that bindep must be installed # separately, outside of the requirements files, and develop mode disabled # explicitly to avoid unnecessarily installing the checked-out repo too (this # further relies on "tox.skipsdist = True" above). usedevelop = False deps = bindep commands = bindep test sqlalchemy-migrate-0.13.0/doc-requirements.txt0000664000175000017500000000005513553670475021453 0ustar zuulzuul00000000000000-r requirements.txt -r test-requirements.txt sqlalchemy-migrate-0.13.0/sqlalchemy_migrate.egg-info/0000775000175000017500000000000013553670602022760 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/sqlalchemy_migrate.egg-info/PKG-INFO0000664000175000017500000000547613553670602024071 0ustar zuulzuul00000000000000Metadata-Version: 1.1 Name: sqlalchemy-migrate Version: 0.13.0 Summary: Database schema migration for SQLAlchemy Home-page: http://www.openstack.org/ Author: OpenStack Author-email: openstack-discuss@lists.openstack.org License: UNKNOWN Description: SQLAlchemy Migrate ================== Fork from http://code.google.com/p/sqlalchemy-migrate/ to get it working with SQLAlchemy 0.8. 
Inspired by Ruby on Rails' migrations, Migrate provides a way to deal with database schema changes in `SQLAlchemy `_ projects. Migrate extends SQLAlchemy to have database changeset handling. It provides a database change repository mechanism which can be used from the command line as well as from inside python code. Help ---- Sphinx documentation is available at the project page `readthedocs.org `_. Users and developers can be found at #openstack-dev on Freenode IRC network and at the public users mailing list `migrate-users `_. New releases and major changes are announced at the public announce mailing list `openstack-dev `_ and at the Python package index `sqlalchemy-migrate `_. Homepage is located at `stackforge `_ You can also clone a current `development version `_ Tests and Bugs -------------- To run automated tests: * install tox: ``pip install -U tox`` * run tox: ``tox`` * to test only a specific Python version: ``tox -e py27`` (Python 2.7) Please report any issues with sqlalchemy-migrate to the issue tracker at `Launchpad issues `_ Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.3 Classifier: Programming Language :: Python :: 3.4 Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: 3.6 sqlalchemy-migrate-0.13.0/sqlalchemy_migrate.egg-info/SOURCES.txt0000664000175000017500000001122213553670602024642 0ustar zuulzuul00000000000000.testr.conf .zuul.yaml AUTHORS COPYING ChangeLog MANIFEST.in README.rst TODO bindep.txt doc-requirements.txt requirements.txt 
setup.cfg setup.py test-requirements.txt test_db.cfg test_db_py3.cfg tox.ini doc/requirements.txt doc/source/Makefile doc/source/api.rst doc/source/changelog.rst doc/source/changeset.rst doc/source/conf.py doc/source/credits.rst doc/source/download.rst doc/source/faq.rst doc/source/glossary.rst doc/source/index.rst doc/source/tools.rst doc/source/versioning.rst doc/source/historical/ProjectDesignDecisionsAutomation.trac doc/source/historical/ProjectDesignDecisionsScriptFormat.trac doc/source/historical/ProjectDesignDecisionsVersioning.trac doc/source/historical/ProjectDetailedDesign.trac doc/source/historical/ProjectGoals.trac doc/source/historical/ProjectProposal.txt doc/source/historical/RepositoryFormat.trac doc/source/historical/RepositoryFormat2.trac doc/source/theme/almodovar.css doc/source/theme/layout.css doc/source/theme/layout.html migrate/__init__.py migrate/exceptions.py migrate/changeset/__init__.py migrate/changeset/ansisql.py migrate/changeset/constraint.py migrate/changeset/schema.py migrate/changeset/util.py migrate/changeset/databases/__init__.py migrate/changeset/databases/firebird.py migrate/changeset/databases/ibmdb2.py migrate/changeset/databases/mysql.py migrate/changeset/databases/oracle.py migrate/changeset/databases/postgres.py migrate/changeset/databases/sqlite.py migrate/changeset/databases/visitor.py migrate/tests/__init__.py migrate/tests/changeset/__init__.py migrate/tests/changeset/test_changeset.py migrate/tests/changeset/test_constraint.py migrate/tests/changeset/databases/__init__.py migrate/tests/changeset/databases/test_ibmdb2.py migrate/tests/fixture/__init__.py migrate/tests/fixture/base.py migrate/tests/fixture/database.py migrate/tests/fixture/models.py migrate/tests/fixture/pathed.py migrate/tests/fixture/shell.py migrate/tests/fixture/warnings.py migrate/tests/integrated/__init__.py migrate/tests/integrated/test_docs.py migrate/tests/versioning/__init__.py migrate/tests/versioning/test_api.py 
migrate/tests/versioning/test_cfgparse.py migrate/tests/versioning/test_database.py migrate/tests/versioning/test_genmodel.py migrate/tests/versioning/test_keyedinstance.py migrate/tests/versioning/test_pathed.py migrate/tests/versioning/test_repository.py migrate/tests/versioning/test_runchangeset.py migrate/tests/versioning/test_schema.py migrate/tests/versioning/test_schemadiff.py migrate/tests/versioning/test_script.py migrate/tests/versioning/test_shell.py migrate/tests/versioning/test_template.py migrate/tests/versioning/test_util.py migrate/tests/versioning/test_version.py migrate/versioning/__init__.py migrate/versioning/api.py migrate/versioning/cfgparse.py migrate/versioning/config.py migrate/versioning/genmodel.py migrate/versioning/migrate_repository.py migrate/versioning/pathed.py migrate/versioning/repository.py migrate/versioning/schema.py migrate/versioning/schemadiff.py migrate/versioning/shell.py migrate/versioning/template.py migrate/versioning/version.py migrate/versioning/script/__init__.py migrate/versioning/script/base.py migrate/versioning/script/py.py migrate/versioning/script/sql.py migrate/versioning/templates/__init__.py migrate/versioning/templates/manage/default.py_tmpl migrate/versioning/templates/manage/pylons.py_tmpl migrate/versioning/templates/repository/__init__.py migrate/versioning/templates/repository/default/README migrate/versioning/templates/repository/default/__init__.py migrate/versioning/templates/repository/default/migrate.cfg migrate/versioning/templates/repository/default/versions/__init__.py migrate/versioning/templates/repository/pylons/README migrate/versioning/templates/repository/pylons/__init__.py migrate/versioning/templates/repository/pylons/migrate.cfg migrate/versioning/templates/repository/pylons/versions/__init__.py migrate/versioning/templates/script/__init__.py migrate/versioning/templates/script/default.py_tmpl migrate/versioning/templates/script/pylons.py_tmpl 
migrate/versioning/templates/sql_script/default.py_tmpl migrate/versioning/templates/sql_script/pylons.py_tmpl migrate/versioning/util/__init__.py migrate/versioning/util/importpath.py migrate/versioning/util/keyedinstance.py playbooks/sqlalchemy-migrate-devstack-dsvm/post.yaml playbooks/sqlalchemy-migrate-devstack-dsvm/run.yaml sqlalchemy_migrate.egg-info/PKG-INFO sqlalchemy_migrate.egg-info/SOURCES.txt sqlalchemy_migrate.egg-info/dependency_links.txt sqlalchemy_migrate.egg-info/entry_points.txt sqlalchemy_migrate.egg-info/not-zip-safe sqlalchemy_migrate.egg-info/pbr.json sqlalchemy_migrate.egg-info/requires.txt sqlalchemy_migrate.egg-info/top_level.txt tools/pretty_tox.sh tools/test-setup.shsqlalchemy-migrate-0.13.0/sqlalchemy_migrate.egg-info/top_level.txt0000664000175000017500000000001013553670602025501 0ustar zuulzuul00000000000000migrate sqlalchemy-migrate-0.13.0/sqlalchemy_migrate.egg-info/pbr.json0000664000175000017500000000005613553670602024437 0ustar zuulzuul00000000000000{"git_version": "5d1f322", "is_release": true}sqlalchemy-migrate-0.13.0/sqlalchemy_migrate.egg-info/not-zip-safe0000664000175000017500000000000113553670602025206 0ustar zuulzuul00000000000000 sqlalchemy-migrate-0.13.0/sqlalchemy_migrate.egg-info/entry_points.txt0000664000175000017500000000017313553670602026257 0ustar zuulzuul00000000000000[console_scripts] migrate = migrate.versioning.shell:main migrate-repository = migrate.versioning.migrate_repository:main sqlalchemy-migrate-0.13.0/sqlalchemy_migrate.egg-info/requires.txt0000664000175000017500000000010613553670602025355 0ustar zuulzuul00000000000000pbr>=1.8 SQLAlchemy>=0.9.6 decorator six>=1.7.0 sqlparse Tempita>=0.4 sqlalchemy-migrate-0.13.0/sqlalchemy_migrate.egg-info/dependency_links.txt0000664000175000017500000000000113553670602027026 0ustar zuulzuul00000000000000 sqlalchemy-migrate-0.13.0/playbooks/0000775000175000017500000000000013553670602017417 5ustar 
zuulzuul00000000000000sqlalchemy-migrate-0.13.0/playbooks/sqlalchemy-migrate-devstack-dsvm/0000775000175000017500000000000013553670602025760 5ustar zuulzuul00000000000000sqlalchemy-migrate-0.13.0/playbooks/sqlalchemy-migrate-devstack-dsvm/run.yaml0000664000175000017500000000312013553670475027454 0ustar zuulzuul00000000000000- hosts: all name: Autoconverted job legacy-sqlalchemy-migrate-devstack-dsvm from old job gate-sqlalchemy-migrate-devstack-dsvm-nv tasks: - name: Ensure legacy workspace directory file: path: '{{ ansible_user_dir }}/workspace' state: directory - shell: cmd: | set -e set -x cat > clonemap.yaml << EOF clonemap: - name: openstack/devstack-gate dest: devstack-gate EOF /usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \ https://opendev.org \ openstack/devstack-gate executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' - shell: cmd: | set -e set -x export PYTHONUNBUFFERED=true export PROJECTS="x/sqlalchemy-migrate $PROJECTS" export DEVSTACK_GATE_TEMPEST=1 export DEVSTACK_GATE_TEMPEST_FULL=1 export BRANCH_OVERRIDE=default if [ "$BRANCH_OVERRIDE" != "default" ] ; then export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE fi function pre_test_hook { cd /opt/stack/new/sqlalchemy-migrate sudo -H pip install . 
} export -f pre_test_hook cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh executable: /bin/bash chdir: '{{ ansible_user_dir }}/workspace' environment: '{{ zuul | zuul_legacy_vars }}' sqlalchemy-migrate-0.13.0/playbooks/sqlalchemy-migrate-devstack-dsvm/post.yaml0000664000175000017500000000063313553670475027643 0ustar zuulzuul00000000000000- hosts: primary tasks: - name: Copy files from {{ ansible_user_dir }}/workspace/ on node synchronize: src: '{{ ansible_user_dir }}/workspace/' dest: '{{ zuul.executor.log_root }}' mode: pull copy_links: true verify_host: true rsync_opts: - --include=/logs/** - --include=*/ - --exclude=* - --prune-empty-dirs sqlalchemy-migrate-0.13.0/setup.py0000664000175000017500000000172513553670475017143 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import setuptools # In python < 2.7.4, a lazy loading of package `pbr` will break # setuptools if some other modules registered functions in `atexit`. 
# solution from: http://bugs.python.org/issue15881#msg170215 try: import multiprocessing # noqa except ImportError: pass setuptools.setup( setup_requires=['pbr>=1.8'], pbr=True) sqlalchemy-migrate-0.13.0/AUTHORS0000664000175000017500000000427013553670602016467 0ustar zuulzuul00000000000000Alex Favaro Andreas Jaeger Anusree Bob Farrell Brant Knudson Chih-Hsuan Yen Chris Withers Corey Bryant Cyril Roelandt Dan Prince David Ripton Domen Kožar Dustin J. Mitchell Eric Harney Gabriel Haikel Guemar Ihar Hrachyshka Jan Dittberner Jan Dittberner Jan Dittberner Jason Michalski Jeremy Stanley Jonathan Herlin Josip Delic Longgeek Matt Riedemann Matt Riedemann Mike Bayer Monty Taylor Nicola Soranzo Pete Keen Peter Conerly Pádraig Brady Pádraig Brady Qin Zhao Rahul Priyadarshi Rick Copeland Roman Podoliaka Roman Podolyaka Sascha Peilicke Sascha Peilicke Sean Dague Sean Dague Sean Mooney Sheng Bo Hou Thomas Goirand Thomas Goirand Thuy Christenson Tony Breeds Victor Stinner Yuval Langer al.yazdi@gmail.com asuffield@gmail.com ches.martin christian.simms chrisw dineshbhor emil.kroymann hudson@fubarite.fubar.si iElectric jan.dittberner markbmc@gmail.com percious17 root@fubarite.fubar.si wyuenho@gmail.com sqlalchemy-migrate-0.13.0/ChangeLog0000664000175000017500000006062513553670602017177 0ustar zuulzuul00000000000000CHANGES ======= 0.13.0 ------ * remove inspect.getargspec deprecation warning * Use engine.connect(); don't use private \_run\_visitor method * Claim support for python 3.6 * Claim support for python 3.5 * Remove test-requirements-py\*.txt files * Add bindep support * Remove py26 tox targets * Import zuul jobs * Fix docs build * OpenDev Migration Patch 0.12.0 ------ * Change title in README.rst * Don't use deprecated / non-functional "force" parameter * Use mysqlclient * Use legacy\_alter\_table ON in sqlite recreate\_table * Remove py26 support * Add .eggs in .gitignore * Import MutableMapping from the correct Python module * Update mailinglist from dev to discuss * Get 
rid of psycopg2 warnings by disabling wheels
* Enforce that pbr used is >= 1.8

0.11.0
------

* Use a modern PBR package
* Prepare for using standard python tests
* Fix spelling mistake
* Set autoincrement to False when modifying to non-Integer datatype
* Raise VersionNotFoundError instead of KeyError
* Fix DeprecationWarning on setuptools >= 11.3
* Update .gitreview for new namespace

0.10.0
------

* Update URLs in documentation
* Add VerNum.\_\_index\_\_() for Python 3 support
* Fixes usage function for Py3
* Unblock migrate (py26 and py3\* testing issues)

0.9.7
-----

* Revert "Revert "uncap pbr and sqla requirements""
* Update flake8 related dependencies
* Revert "uncap pbr and sqla requirements"
* uncap pbr and sqla requirements
* Update tests and reqs for SQLA 1.0
* Ignore stderr output when invoking migrate script in tests
* Add Python 3 classifiers

0.9.6
-----

* Fix ibmdb2 index name handling

0.9.5
-----

* Don't run the test if \_setup() fails
* Correcting minor typo
* Fix .gitignore for .tox and .testrepository
* allow dropping fkeys with sqlite
* Add pretty\_tox setup
* script: strip comments in SQL statements

0.9.4
-----

* Remove svn version tag setting

0.9.3
-----

* Ignore transaction management statements in SQL scripts
* Use native sqlalchemy 0.9 quote attribute with ibmdb2
* Don't add warnings filter on import
* Replace assertNotEquals with assertNotEqual
* Update requirements file matching global requ
* Work toward Python 3.4 support and testing
* pep8: mark all pep8 checks that currently fail as ignored

0.9.2
-----

* SqlScript: execute multiple statements one by one
* Make sure we don't throw away exception on SQL script failure
* Pin testtools to < 0.9.36
* Fix ibmdb2 unique constraint handling for sqlalchemy 0.9
* Fixes the auto-generated manage.py

0.9.1
-----

* Move patch from oslo to drop unique constraints with sqlite
* Port to Python3
* tests: Replace "self.assert\_" by "self.assertTrue"

0.9
---

* turn on testing for sqla 0.9
* Replace AbstractType by TypeEngine
* fix scripttest compat
* Use native quote attribute introduced in sqla 0.9
* Fix genmodel for SQLA 0.9
* Conditionally import ibmdb2/ibm\_db\_sa
* migrate needs subunit >= 0.0.18
* UniqueConstraint named and escaped twice
* Fix 3 files with Windows line endings to Unix line endings
* Eradicate trailing whitespace
* Convert tabs to spaces in a couple of rst files

0.8.4
-----

0.8.5
-----

* uncap SQLA in requirements.txt
* Add DB2 10.5 Support
* Sync with global requirements
* Fix broken development version link in README

0.8.2
-----

* Un-break the version in migrate/\_\_init\_\_.py
* Fix the version number to match the last release

0.8.1
-----

* Remove the tag\_build line from setup.cfg
* Drop setuptools\_git test requirement

0.8
---

* Fix int overflow exception in unittest
* Fix dropping of indexed columns in sqlite/sa08
* Run tests on PostgreSQL and MySQL too
* Update tox requirements
* Stop using the d2to1-based pbr
* decouple index name generation from sqlalchemy version
* Run tests with different SQLAlchemy versions
* Add a workaround for pytz and pip>=1.4
* Add a reqs files for RTFD
* Fix exceptions for SQLAlchemy 0.8
* added bugfixes for 0.8
* Updated to OpenStack Build stuff
* Removed hg and google code references
* Initial changes to import into StackForge
* update changelog
* fix error, Text columns have no width
* fix deprecation warning by using MetaData.reflect
* update credits and changelog
* Import correct exceptions module (Fixes issue 154)
* update changelog and credits
* apply patch for issue #72 by asuffield@gmail.com
* update changelog and credits
* Fix excludeTablesgetDiffOfModelAgainstModel is not passing excludeTables correctly
* start next development iteration
* Added signature for changeset ad06c76fc174
* tag for release 0.7.2

0.7.2
-----

* finalize changelog for 0.7.2
* add credits for contributors
* bump SQLAlchemy dependency to >= 0.6 add build-req.pip with build requirements
* update changelog fix issue numbers (use trac issue prefix for pre 0.3 versions)
* add glossary update documentation meta data, rewrap index.rst
* add more developer related information
* rewrap README
* update sqlalchemy documentation links use explicit code-block markup
* update intersphinx configuration, add sphinxcontrib.issuetracker configuration
* ignore vim swap files and docs/\_static
* document adding/droping columns (fixes issue 104)
* PEP-8 compliant script templates
* add regression test (fixes issue 105)
* fix issues with ConfigParser and existing repositories (fixes issue 115)
* give credits to Benoît Allard
* remove obsolete manage.py\_tmpl (related to issue 121)
* update changelog (add issue 121 bugfix)
* generate if \_\_name\_\_ == "\_\_main\_\_" in manage.py (fixes issue 121)
* update changelog (include #125 fix)
* merge e5bd2821eea8 from https://code.google.com/r/alyazdi-patches/ (fixes issue 125)
* update changelog
* merge fixes by wyenho
* drop SQLAlchemy < 0.6 compatibility code
* ignore PyCharm project files
* fix SQLAlchemy 0.6.x compatibility of issue 128 patch
* update changelog (drop support for SQLAlchemy 0.5.x, Python 2.4 and 2.5)
* fix issue 128: "table rename failure with sqlalchemy 0.7.x"
* changelog updates
* Fix for issue #125, create the table on the same connection as the ALTER and INSERT happen
* fixed issue 129
* fixed issue 83
* remove unused import
* Fix issue 124: ignore sqlite\_sequence in comparing db to model
* Use two models in generated migrations. Test column addition and removal
* Put constraints (positional) before args (keywords)
* Output has changed even without the SQLa patch, update #122 tests
* Fix and test issue 118. Clarify genmodel transformations
* Fix column creation in make\_update\_script\_for\_model
* More concise declarative output
* More pep8 compliant declarative output
* Fix some tests that broke when adding descriptions for sql scripts
* Allow descriptions in sql change script filenames
* Optionally number versions with timestamps instead of sequences
* start next development iteration
* Added signature for changeset fbb2817a1e3f
* Added tag v0.7.1 for changeset fbb2817a1e3f

0.7.1
-----

* Added signature for changeset 35038c66152b
* set migrate.\_\_version\_\_ to 0.7.1
* finalize changelog for 0.7.1
* fix issue tracking and source code URLs
* fix column.create() properly
* start next development iteration
* Release 0.7

0.7
---

* update version number in docs/conf.py
* add migrate.\_\_version\_\_ (Fixes issue 111)
* update changelog and version information
* bump version number to 0.7 to indicate SQLA 0.7 compatibility
* no special treatment for SQLA 0.7 required in migrate.changeset.ansisql
* remove commented code
* fix unit test for adding new columns with foreign keys
* use DatabaseError instead of ProgrammingError because behaviour seems to be database dependent (addresses issue 122)
* table.drop() raises ProgrammingError in SQLAlchemy 0.7 instead of SQLError (addresses #112)
* SQLAlchemy 0.7's column.foreign\_keys is a set and has no \_list (addresses #112)
* use proper encoding instead of True (addresses #112)
* fix one more test (addresses #112)
* use Table.\_columns to remove columns (addresses #112)
* psycopg2 downloads from initrd.org work without workaround
* add Developing with migrations tutorial link
* merge
* add elixir tutorial to docs
* start next iteration
* update version in docs/conf.py
* Added tag v0.6.1 for changeset c2526dce0768

0.6.1
-----

* finalize changelog for 0.6.1
* Bring back alter\_metadata on ColumnDelta: it seems intertwined with a lot of the tests. So, it's a private API now..
Clarify genmodel transformations * Fix column creation in make\_update\_script\_for\_model * More concise declarative output * More pep8 compliant declarative output * Fix some tests that broke when adding descriptions for sql scripts * Allow descriptions in sql change script filenames * Optionally number versions with timestamps instead of sequences * start next development iteration * Added signature for changeset fbb2817a1e3f * Added tag v0.7.1 for changeset fbb2817a1e3f 0.7.1 ----- * Added signature for changeset 35038c66152b * set migrate.\_\_version\_\_ to 0.7.1 * finalize changelog for 0.7.1 * fix issue tracking and source code URLs * fix column.create() properly * start next development iteration * Release 0.7 0.7 --- * update version number in docs/conf.py * add migrate.\_\_version\_\_ (Fixes issue 111) * update changelog and version information * bump version number to 0.7 to indicate SQLA 0.7 compatibility * no special treatment for SQLA 0.7 required in migrate.changeset.ansisql * remove commented code * fix unit test for adding new columns with foreign keys * use DatabaseError instead of ProgrammingError because behaviour seems to be database dependent (addresses issue 122) * table.drop() raises ProgrammingError in SQLAlchemy 0.7 instead of SQLError (addresses #112) * SQLAlchemy 0.7's column.foreign\_keys is a set and has no \_list (addresses #112) * use proper encoding instead of True (addresses #112) * fix one more test (addresses #112) * use Table.\_columns to remove columns (addresses #112) * psycopg2 downloads from initrd.org work without workaround * add Developing with migrations tutorial link * merge * add elixir tutorial to docs * start next iteration * update version in docs/conf.py * Added tag v0.6.1 for changeset c2526dce0768 0.6.1 ----- * finalize changelog for 0.6.1 * Bring back alter\_metadata on ColumnDelta: it seems intertwined with a lot of the tests. So, it's a private API now.. 
* try to get firebird stuff working with 0.6.6 * remove the alter\_metadata feature * work around firebird's insistence that indexes and constraints are dropped before columns that are references by them * fix sqlite column dropper now that the table is only modified after the visitor is run * firebird can only drop named foreign keys * These drop indexes appear to only be for firebird. Once firebird is fixed, they're not needed * Only alter the SA objects after running the visitor, so the visitor may inspect * fix py2.4 and py2.5 * merge * use mirrored copy of kinterbasedb to cope with SourceForge reliability problems * fixes #107 * fixes #106 * fixes #105 * merge * adding faq section to docs * make migrate.changeset.constraint.ForeignKeyConstraint.autoname work with SQLAlchemy 0.5 and 0.6 * use \_index\_identifier instead of \_validate\_identifier if \_validate\_identifier does not exist in migrate/changeset/ansisql.py * use absolute imports of exception classes (fixes tests) * fix generation of foreign key constraint name in migrate.changeset.constraint.ForeignKeyConstraint.autoname * merge * update changelog * implement column type diff'ing * update changelog * fixed #92 * restore missing table header for column diffs * rewrite of schemadiff internals * preserve the original stack strace * don't stop if one db fails, run for all so we can tell how badly things went wrong on Hudson * remove reference to ficticious command * clear out the test db for each test, making tests more isolated * for when logging is just annoying ;-) * silence logger that SA adds * give better feedback when errors occur in \_setup or \_teardown * quit screwing with the testing frameworks.. * update feature matrix * correct change log structure * Fix issue 94 - it was impossible to add a column with a non-unique index * implement column adding with foreign keys on sqlite * Fix bug with column dropping involving foreign keys. 
Bonus: remove\_from\_table now understands foreign keys * fix for issue 96: deleting a column in sqlite shouldn't delete all indexes bonus: remove\_from\_table now removes indexes * another py2.4 fixture * dammit! * attempt at improving the api docs a little * improve docs * hopefully fix test failures * another Py2.4 fix * fix docs * another attempt to get around the initd.org problems * looks like init.org might be back * disable psycopg install while initd.org is offline * silence console output * hopefully make py2.4 compatible * - capture deprecation warnings and assert they re as they should be - re-word alter\_column deprecation warning to make more sense * remove default of dropping on pdb on test error or failure * fix last exception import * move all exception classes to migrate.exceptions * update todo and test-req.pip * use if main conditional in manage.py script * correct case for dependencies in setup.py * fixes issue #88 * merge * fix tests on python2.7 * bump version to 0.6.1 to indicate that trunk is newer than release 0.6 * exclude .hgtags from release tarballs * merge * update TODO * link to mercurial instead of SVN * fix links in doc * update sphinx config * Added signature for changeset 65742e996d94 * Added tag v0.6 for changeset cb01bf174b05 0.6 --- * update changelog * small doc correction; fixes #67 * merge * better document summary of changeset actions * adding connection keyword to ORM methods * use migrate.tests.\* instead of tests.\* in find\_packages argument * use stdout for logging.INFO and lower; for the rest use stderr * fix deprecation warning when using old script synta * update README and fix last bugs * add documentation generation date * add MIT licence file * changed documentation layout * add descriptions of modules in docs * change dialect table from ugly format to list table * restructure changelog and minor modifications to documentation * log database name when running tests; firebug does not support DROP CONSTRAINT 
CASCADE * remove test dependencies information from setup.py * found that bugger. ScriptTest 1.0.1 removes the functonality we need. rollback to 1.0 * fix SA06 compatibility in tests * fix unittests * move to unittest2, update README for testing instructions * update test-req.pip * merge with https://robertanthonyfarrell-0point6-release-patches.googlecode.com/hg/ * move tests/ directory into migrate/tests (much better form) and fix all import lines and other minor issues * note about scripttest==1.0.1 * remove debug output and swearing * make tests use a virtualenv for 'migrate' shell command * change print statements to log.info * add more recent version of kinterbadb due to soureforge not being maintained anymore. Thanks btami\! * add firebird to test\_db.cfg.tmpl; fix bug when dropping a column in firebird: also drop related constraint or index * revert 2688cdb980 * update setup.py * run shell diff tests with correct environment * debugging fix * skip runpy tests on python2.4 * fix SA05 tests for autoincrement diff * fix MySQL failing tests with autoincrement * docs link correction * SA06 tests fix, thanks to Mike Bayer * unified warnings, use compare columns in tests * move warning exceptions to right module * update documenatation * deprecate two columns alter * update postgres url name * fix python2.4 error * update TODO * added 0.6 TODO, all api now uses engine.dispose() to handle pool correctly * add pysqlite for test dep * removing mx.DateTime from test deps because pip crashes * add mx.DateTime to test dep * add mysql driver test deps * change test\_db database name to more verbose name * partly fix SA0.6 tests on postgres * fix docs * fix documentation meta.bind(engine) -> meta.bind = engine; thanks mvt * add .coverage and test\_db.cfg to .hgignore * more import fixes * fixing models import * rename test package to tests (problems with pytest dist) * eliminate unicode usage at help * last but not least SA06 test fixes * more SA06 fixes * SA06 fixes * 
adding test-requirements.pip for running tests * updating MANIFEST.in, fixing virtualenv tests usage, 2 failed tests * apply Emil Kroymann's patch for Issue 75 * apply patch by Jason Newton * remove bad reference to migrate\_engine * Add table declarations for tables with diff * Add support for ALTER TABLE add/drop columns in make\_update\_script\_for\_model * adding sql\_script template customizations, removing unneeded imports, updating docs * merge * remove versioning.base in favor of versioning.config * removing deprecated logger module * fix small bug introduced in last commit, add description to migrate help * we are using Tempita for templates; adding most basic pylons template * add option to customize templates and use multiple themes * applying patch for issue #61 by Michael Bayer * add disable\_logging option * use logging module for output, fixes #26 * add tests for plain API, fixed some small bugs * separating test\_shell and test\_api, replacing shell hacks with ScriptTest * add populate\_default kwarg to column.create, fixes issue #50 * adding .hgignore * convert svn to hg * applying patch for issue #60 from entequak * add support for SA 0.6 by Michael Bayer * removing old changelog * change dev location * update changelog internal links * updated changeset documentation, added alter\_metadata to all schema classes * add not supported exceptions for sqlite constraints * always return delta when using alter constructs * - completely refactored ColumnDelta to extract differences between columns/parameters (also fixes issue #23) - fixed some bugs (passing server\_default) on column.alter - updated tests, specially ColumnDelta and column.alter - introduced alter\_metadata which can preserve altering existing objects if False (defaults to True) - updated documentation * adding basic support for firebird, fixes #55 * finally, tests pass for all supported dialects * add ability to download development version from pypi * fix syntax error * some more PEP8 
love * fix bug when initializing CheckConstraint * updated changeset tests. whole package is finally PEP8. fixed mysql tests&bugs. updated docs where apropriate. changeset test coverage almost at 100% * - refactor migrate.changeset; - visitors are refactored to be more unified - constraint module is refactored, CheckConstraint is added - documentation is partialy updated, dialect support table is added (unfinished) - test\_constraint was updated NOTE: oracle and mysql were not tested, \*may be broken\* * update docs, delete obsolete code in constraints * update documentation * removed magical behavior with importing migrate\_engine, now engine is passed to upgrade/downgrade functions * use sqlalchemy preparer to do SQL quote formatting. this is a raw change, tests are yet to be written * lipstick changes * update tests for schema, refactor a bit * update README, add docs to repository.py and schema.py * fix setup.py * updated migrate.versioning.repository tests and docs, update README * update tests and docs for migrate.versioning.script.\* * change isinstance(obj, (str, unicode)) to isinstance(obj, basestring) as suggested by Piotr Ożarowski * start work on 0.5.5 * add changelog, update util.py docs * rearange tests * update documentation * added tests for versioning.version.py, refactored the module * add tests for low level util.py * make @usedb work correctly * Issue 34; preview\_sql now correctly displays SQL on python and SQL scripts. 
(tests added, docs still missing) * use unittest.TestCase for tests * use entrypoints terminology to parse dotted model class names * remove the duplicate * extras\_require is a bit tricky, use tests\_require instead * fix typechecking * update CHANGELOG * Issue 38; add ability to pass arguments/dict for create\_engine func * refactor api.py a bit, lots of PEP8 love * add asbool function to make api parsing easier * some more PEP8 love all over the files * apply some PEP8 love to template.py * apply PEP8 to version.py, fixed notification of missing test\_db.cfg * apply option parsing patch for Issue 54 by iElectric * start work on 0.5.4 * fix Issue 52 by removing unneeded parameters from object.\_\_new\_\_ calls * update CHANGELOG, set version to 0.5.3 * apply patch for Issue 29 by Jonathan Ellis * add support for ondelete and oncascade to ANSI-SQL foreign key creation * unignore \_build, recreate \_build and ignore everything below it (fixes Issue 49) * update CHANGELOG, set version to 0.5.2 * mark ALTER TABLE ADD FOREIGN KEY as unsupported by SQLite update corresponding test case * integrate unit test fix by Adam Lowry * integrate test case fix by Adam Lowry * whoops, needed to update setup.cfg too for Sphinx build location * add Sphinx Makefile, and use underscore prefixes for some Sphinx stuff to be more Windows-friendly * integrate patch for Issue 36, update CHANGELOG * fix Issue 47 by moving sphinx and nose dependencies to extras\_require * fix upload * prepare release 0.5.1.1 * removed misleading docs/Makefile * updated versioning.rst to reflect reality * apply patch by Toshio Kuratomi to fix some unit tests with Python 2.6 * migrate.versioning.schema and schemadiff PEP-8 clean, added to api.rst * migrate.versioning.repository PEP-8 clean, added to api.rst * migrate.versioning PEP-8 improvements, more documentation * correct all links in rst docs * PEP-8 clean migrate.changeset, updated CHANGELOG, updated api.rst * cleanup in migrate.changeset.ansisql and 
api doc update * revert stupid test case breaking change * make migrate.changeset.databases PEP-8 clean and add it to the API docs * move .. automodule stuff to docs/api.rst * start proper API documentation * make migrate.schema.ansisql PEP8 clean and add some sphinx docstrings * add sphinx to setup dependencies * switch from pudge to sphinx * first sphinx docstrings * first take at sphinx based documentation * reformatted download.rst for sphinx * reformatted index.rst for sphinx * add sphinx build files * updated CHANGELOG * updated project documentation * tagging version 0.5.1 to match sa version * support for SA 0.5.1 * fix for changeset test * integrate patch for supporting CheckConstraints by srittau * Depend on SQLAlchemy >= 0.5 at runtime and nose for setup * apply patch for Issue #43 (better SQLite support) by Florian Apolloner * switch to nosetests * fixed bug in create column where foreign keys were being left out * more migrate deprecation removal fixes * all tests pass except for a couple of mysql related failures in 2.6, SA 0.5rc4 * print statement removal * fixed a number of shell-related failures * hopefully the last of migrate.run * more "run" removal * fixes to postgres, shell. 
removal of "run" module * print statement removal * removed driver deprecation, since that was deprecated in 0.4 * now all databases are running at once * all tests pass with postgres now * only 2 failing tests, the tests that remain failures are mysql related * most of the tests are now working with nose * initial py.test removal * added an echo option for all manage commands * removed dependency on py.test modified downgrade so that migrate.py downgrade x works just like: migrate.py downgrade --version=x * modified altering of columns to support postgres * missed a postgres identifier quoting on renaming * added hook functions which allow the dialects to specify how to indicate identifiers, as this is different in postgres * integrate patch for Issue 33 * add --declarative option to create\_model to generate new declarative style * get test\_changeset working in oracle - just more server\_default issues * add support for SA 0.5 * - integrate patch by Toshio Kuratomi sent to migrate-users 2008/07/30 06:08 (GMT+01:00) - pylint clean migrate/versioning/migrate\_repository.py * \* create bugfix branch for 0.4.4 \* prepare trunk for new development * \* add 0.4.5 changes to CHANGELOG * applied patch for shell test by Toshio Kuramoti * support default for sql scripts * add unit test to make sure we can handle more than 999 revisions * finish implementing repository migration script * make repository format flatter and get rid of commit command * \* applied a slightly modified version of the patch for issue #20 by Christophe de Vienne which uses a logger for 'migrate.versioning' instead of the root logger * almost done issue 12: generate downgrade method for schema migration * fix issue 18 by applying given patch * enhance command update\_db\_from\_model to bump db\_version to latest in repository * improve diff unit tests to make sure they don't drop data when column type changes; make the code pass the tests * make diff unit tests more robust * Change 
make\_update\_script\_for\_model shell command to compare two versions of Python model (issue #12); add shell test for new diff'ing apis * - merged CHANGELOG from bugfix branch * - release 0.4.4 * r1081@denkpolster: jan | 2008-04-04 18:48:20 +0200 - fix for SQLAlchemy deprecation warning when creating version table * r1035@denkpolster: jan | 2008-04-02 14:39:05 +0200 - fix unit tests with py-0.9.1, fixes #17 * code reorg: create new utility method loadModel for db diffing * - patch by pwannygoodness for Issue #15 integrated * - merge CHANGELOG from branch bugfix-0\_4\_2 to trunk - increase trunk and branch bugfix-0\_4\_2 version numbers - rename branch bugfix-0\_4\_2 to bugfix-0\_4\_3 * Execute sqlite-specific code to alter a table inside one transaction so that we can't leave the database in a nasty state * When running schemadiff tests exclude table migrate\_version because it's left around after a previous round of tests * rename model/db sync commands and add new command update\_db\_from\_model * add experimental support for comparing metadata against database (issue #12) * - make test\_shell not assume Python code is compiled - update results in test\_shell so that it works in Postgresql - make Oracle changesets generate valid sql syntax when a modified column's default value changes to NULL * \* integrate patch for Issue #14 by Kevin Dangoor \* merge CHANGELOG from bugfix\_0\_4\_2 branch * \* fixed package name * \* prepare work on 0.4.3 * make import of sqlalchemy's SchemaGenerator work regardless of previous imports * - 0.4.1 changes added to CHANGELOG * - integrate setuptools patch by Kevin Dangoor - bump version to 0.4.2dev (0.4.1 will be released from branch bugfix-0\_4\_0) * - prepare for new development * - prepare for release 0.4.0 * integrated patch by Christian Simms posted at http://groups.google.com/group/migrate-users/browse\_thread/thread/952a2185baf70c4d fix all test cases for sqlalchemy>=0.4 and still works with sqlalchemy>=0.3.10 fixes #9 * 
moved trunk, branches and tags to project root fixes Issue #5 sqlalchemy-migrate-0.13.0/.testr.conf0000664000175000017500000000050013553670475017505 0ustar zuulzuul00000000000000[DEFAULT] test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \ OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \ OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \ ${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION test_id_option=--load-list $IDFILE test_list_option=--list
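The `${VAR:-default}` references in `test_command` use standard shell parameter expansion: each setting (stdout/stderr capture, test timeout, the Python interpreter) falls back to the written default unless the environment overrides it. As a minimal sketch of that expansion rule — `expand_defaults` is an illustrative helper written for this note, not part of testrepository:

```python
import os
import re

def expand_defaults(command, env=None):
    """Expand shell-style ${VAR:-default} references against env
    (os.environ when env is None), mimicking how the shell resolves
    the .testr.conf test_command line."""
    env = os.environ if env is None else env
    pattern = re.compile(r"\$\{(\w+):-([^}]*)\}")
    return pattern.sub(lambda m: env.get(m.group(1), m.group(2)), command)

line = "OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} ${PYTHON:-python} -m subunit.run"
# With PYTHON set and OS_TEST_TIMEOUT unset, only the timeout default applies:
print(expand_defaults(line, env={"PYTHON": "python3"}))
# → OS_TEST_TIMEOUT=60 python3 -m subunit.run
```

So running the suite with `PYTHON=python3` swaps the interpreter while the capture and timeout knobs keep their configured defaults.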