mycli-1.5.2/0000755000076600000240000000000012621400311013222 5ustar amjithstaff00000000000000mycli-1.5.2/AUTHORS0000644000076600000240000000101412617644775014323 0ustar amjithstaff00000000000000Many thanks to the following contributors. Core Developers: ---------------- * Iryna Cherniavska * Thomas Roten * Darik Gamble * Matheus Rosa Contributors: ------------- * Steve Robbins * Daniel West * shoma * Daniel Black * Jonathan Bruno * Heath Naylor * bjarnagin * jbruno * Abirami P * spacewander * Adam Chainz * Johannes Hoff * Jonathan Slenders * Kacper Kwapisz * Martijn Engler * Shoma Suzuki * Tyler Kuipers * Yasuhiro Matsumoto Creator: -------- Amjith Ramanujam mycli-1.5.2/changelog.md0000644000076600000240000001644412621377036015525 0ustar amjithstaff000000000000001.5.2: ====== Bug Fixes: ---------- * Protect against port number being None when no port is specified in command line. 1.5.1: ====== Bug Fixes: ---------- * Cast the value of port read from my.cnf to int. 1.5.0: ====== Features: --------- * Make a config option to enable `audit_log`. (Thanks: [Matheus Rosa]). * Add support for reading .mylogin.cnf to get user credentials. (Thanks: [Thomas Roten]). This feature is only available when `pycrypto` package is installed. * Register the special command `prompt` with the `\R` as alias. (Thanks: [Matheus Rosa]). Users can now change the mysql prompt at runtime using `prompt` command. eg: ``` mycli> prompt \u@\h> Changed prompt format to \u@\h> Time: 0.001s amjith@localhost> ``` * Perform completion refresh in a background thread. Now mycli can handle databases with thousands of tables without blocking. * Add support for `system` command. (Thanks: [Matheus Rosa]). Users can now run a system command from within mycli as follows: ``` amjith@localhost:(none)>system cat tmp.sql select 1; select * from django_migrations; ``` * Caught and hexed binary fields in MySQL. (Thanks: [Daniel West]). Geometric fields stored in a database will be displayed as hexed strings. * Treat enter key as tab when the suggestion menu is open. (Thanks: [Matheus Rosa]) * Add "delete" and "truncate" as destructive commands. (Thanks: [Martijn Engler]). * Change \dt syntax to add an optional table name. (Thanks: [Shoma Suzuki]). `\dt [tablename]` will describe the columns in a table. * Add TRANSACTION related keywords. * Treat DESC and EXPLAIN as DESCRIBE. (Thanks: [spacewander]). Bug Fixes: ---------- * Fix the removal of whitespace from table output. * Add ability to make suggestions for compound join clauses. (Thanks: [Matheus Rosa]). * Fix the incorrect reporting of command time. * Add type validation for port argument. (Thanks [Matheus Rosa]) Internal Changes: ----------------- * Make pycrypto optional and only install it in \*nix systems. (Thanks: [Iryna Cherniavska]). * Add badge for PyPI version to README. (Thanks: [Shoma Suzuki]). * Updated release script with a --dry-run and --confirm-steps option. (Thanks: [Iryna Cherniavska]). * Adds support for PyMySQL 0.6.2 and above. This is useful for debian package builders. (Thanks: [Thomas Roten]). * Disable click warning. 1.4.0: ====== Features: --------- * Add `source` command. This allows running sql statement from a file. eg: ``` mycli> source filename.sql ``` * Added a config option to make the warning before destructive commands optional. (Thanks: [Daniel West](https://github.com/danieljwest)) In the config file ~/.myclirc set `destructive_warning = False` which will disable the warning before running `DROP` commands. 
* Add completion support for CHANGE TO and other master/slave commands. This is still preliminary and it will be enhanced in the future. * Add custom styles to color the menus and toolbars. * Upgrade prompt_toolkit to 0.46. (Thanks: [Jonathan Slenders](https://github.com/jonathanslenders)) Multi-line queries are automatically indented. Bug Fixes: ---------- * Fix keyword completion after the `WHERE` clause. * Add `\g` and `\G` as valid query terminators. Previously in multi-line mode ending a query with a `\G` wouldn't run the query. This is now fixed. 1.3.0: ====== Features: --------- * Add a new special command (\T) to change the table format on the fly. (Thanks: [Jonathan Bruno](https://github.com/brewneaux)) eg: ``` mycli> \T tsv ``` * Add `--defaults-group-suffix` to the command line. This lets the user specify a group to use in the my.cnf files. (Thanks: [Iryna Cherniavska](http://github.com/j-bennet)) In the my.cnf file a user can specify credentials for different databases and invoke mycli with the group name to use the appropriate credentials. eg: ``` # my.cnf [client] user = 'root' socket = '/tmp/mysql.sock' pager = 'less -RXSF' database = 'account' [clientamjith] user = 'amjith' database = 'user_management' $ mycli --defaults-group-suffix=amjith # uses the [clientamjith] section in my.cnf ``` * Add `--defaults-file` option to the command line. This allows specifying a `my.cnf` to use at launch. This also makes it play nice with mysql sandbox. * Make `-p` and `--password` take the password in commandline. This makes mycli a drop in replacement for mysql. 1.2.0: ====== Features: --------- * Add support for wider completion menus in the config file. Add `wider_completion_menu = True` in the config file (~/.myclirc) to enable this feature. Bug Fixes: --------- * Prevent Ctrl-C from quitting mycli while the pager is active. * Refresh auto-completions after the database is changed via a CONNECT command. Internal Changes: ----------------- * Upgrade prompt_toolkit dependency version to 0.45. * Added Travis CI to run the tests automatically. 1.1.1: ====== Bug Fixes: ---------- * Change dictonary comprehension used in mycnf reader to list comprehension to make it compatible with Python 2.6. 1.1.0: ====== Features: --------- * Fuzzy completion is now case-insensitive. (Thanks: [bjarnagin](https://github.com/bjarnagin)) * Added new-line (`\n`) to the list of special characters to use in prompt. (Thanks: [brewneaux](https://github.com/brewneaux)) * Honor the `pager` setting in my.cnf files. (Thanks: [Iryna Cherniavska](http://github.com/j-bennet)) Bug Fixes: ---------- * Fix a crashing bug in completion engine for cross joins. * Make `` value consistent between tabular and vertical output. Internal Changes: ----------------- * Changed pymysql version to be greater than 0.6.6. * Upgrade prompt_toolkit version to 0.42. (Thanks: [Yasuhiro Matsumoto](https://github.com/mattn)) * Removed the explicit dependency on six. 2015/06/10: =========== Features: --------- * Customizable prompt. (Thanks [Steve Robbins](https://github.com/steverobbins)) * Make `\G` formatting to behave more like mysql. Bug Fixes: ---------- * Formatting issue in \G for really long column values. 2015/06/07: =========== Features: --------- * Upgrade prompt_toolkit to 0.38. This improves the performance of pasting long queries. * Add support for reading my.cnf files. * Add editor command \e. * Replace ConfigParser with ConfigObj. * Add \dt to show all tables. * Add fuzzy completion for table names and column names. 
* Automatically reconnect when connection is lost to the database. Bug Fixes: ---------- * Fix a bug with reconnect failure. * Fix the issue with `use` command not changing the prompt. * Fix the issue where `\\r` shortcut was not recognized. 2015/05/24 ========== Features: --------- * Add support for connecting via socket. * Add completion for SQL functions. * Add completion support for SHOW statements. * Made the timing of sql statements human friendly. * Automatically prompt for a password if needed. Bug Fixes: ---------- * Fixed the installation issues with PyMySQL dependency on case-sensitive file systems. [Daniel West]: http://github.com/danieljwest [Iryna Cherniavska]: https://github.com/j-bennet [Kacper Kwapisz]: https://github.com/KKKas [Martijn Engler]: https://github.com/martijnengler [Matheus Rosa]: https://github.com/mdsrosa [Shoma Suzuki]: https://github.com/shoma [spacewander]: https://github.com/spacewander [Thomas Roten]: https://github.com/tsroten mycli-1.5.2/LICENSE.txt0000644000076600000240000000340612557273603015074 0ustar amjithstaff00000000000000Copyright (c) 2015, Amjith Ramanujam All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the {organization} nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ------------------------------------------------------------------------------- This program also bundles with it python-tabulate (https://pypi.python.org/pypi/tabulate) library. This library is licensed under MIT License. 
------------------------------------------------------------------------------- mycli-1.5.2/MANIFEST.in0000644000076600000240000000003112557273603014776 0ustar amjithstaff00000000000000include LICENSE.txt *.md mycli-1.5.2/mycli/0000755000076600000240000000000012621400311014337 5ustar amjithstaff00000000000000mycli-1.5.2/mycli/__init__.py0000644000076600000240000000002612621400251016451 0ustar amjithstaff00000000000000__version__ = '1.5.2' mycli-1.5.2/mycli/clibuffer.py0000644000076600000240000000261512610175477016702 0ustar amjithstaff00000000000000from prompt_toolkit.buffer import Buffer from prompt_toolkit.filters import Condition class CLIBuffer(Buffer): def __init__(self, always_multiline, *args, **kwargs): self.always_multiline = always_multiline @Condition def is_multiline(): doc = self.document return self.always_multiline and not _multiline_exception(doc.text) super(self.__class__, self).__init__(*args, is_multiline=is_multiline, tempfile_suffix='.sql', **kwargs) def _multiline_exception(text): orig = text text = text.strip() # Multi-statement favorite query is a special case. Because there will # be a semicolon separating statements, we can't consider semicolon an # EOL. Let's consider an empty line an EOL instead. if text.startswith('\\fs'): return orig.endswith('\n') return (text.startswith('\\') or # Special Command text.endswith(';') or # Ended with a semi-colon text.endswith('\\g') or # Ended with \g text.endswith('\\G') or # Ended with \G (text == 'exit') or # Exit doesn't need semi-colon (text == 'quit') or # Quit doesn't need semi-colon (text == ':q') or # To all the vim fans out there (text == '') # Just a plain enter without any text ) mycli-1.5.2/mycli/clistyle.py0000644000076600000240000000124012610175477016562 0ustar amjithstaff00000000000000from pygments.token import string_to_tokentype from pygments.style import Style from pygments.util import ClassNotFound from prompt_toolkit.styles import default_style_extensions import pygments.styles def style_factory(name, cli_style): try: style = pygments.styles.get_style_by_name(name) except ClassNotFound: style = pygments.styles.get_style_by_name('native') class CLIStyle(Style): styles = {} styles.update(style.styles) styles.update(default_style_extensions) custom_styles = dict([(string_to_tokentype(x), y) for x, y in cli_style.items()]) styles.update(custom_styles) return CLIStyle mycli-1.5.2/mycli/clitoolbar.py0000644000076600000240000000226412615144005017057 0ustar amjithstaff00000000000000from pygments.token import Token def create_toolbar_tokens_func(get_key_bindings, get_is_refreshing): """ Return a function that generates the toolbar tokens. 
""" assert callable(get_key_bindings) token = Token.Toolbar def get_toolbar_tokens(cli): result = [] result.append((token, ' ')) if cli.buffers['default'].completer.smart_completion: result.append((token.On, '[F2] Smart Completion: ON ')) else: result.append((token.Off, '[F2] Smart Completion: OFF ')) if cli.buffers['default'].always_multiline: result.append((token.On, '[F3] Multiline: ON ')) else: result.append((token.Off, '[F3] Multiline: OFF ')) if cli.buffers['default'].always_multiline: result.append((token, ' (Semi-colon [;] will end the line)')) if get_key_bindings() == 'vi': result.append((token.On, '[F4] Vi-mode')) else: result.append((token.On, '[F4] Emacs-mode')) if get_is_refreshing(): result.append((token, ' Refreshing completions...')) return result return get_toolbar_tokens mycli-1.5.2/mycli/completion_refresher.py0000644000076600000240000001054412615144005021143 0ustar amjithstaff00000000000000import threading from .packages.special.main import COMMANDS try: from collections import OrderedDict except ImportError: from .packages.ordereddict import OrderedDict from .sqlcompleter import SQLCompleter from .sqlexecute import SQLExecute class CompletionRefresher(object): refreshers = OrderedDict() def __init__(self): self._completer_thread = None self._restart_refresh = threading.Event() def refresh(self, executor, callbacks): """ Creates a SQLCompleter object and populates it with the relevant completion suggestions in a background thread. executor - SQLExecute object, used to extract the credentials to connect to the database. callbacks - A function or a list of functions to call after the thread has completed the refresh. The newly created completion object will be passed in as an argument to each callback. """ if self.is_refreshing(): self._restart_refresh.set() return [(None, None, None, 'Auto-completion refresh restarted.')] else: self._completer_thread = threading.Thread(target=self._bg_refresh, args=(executor, callbacks), name='completion_refresh') self._completer_thread.setDaemon(True) self._completer_thread.start() return [(None, None, None, 'Auto-completion refresh started in the background.')] def is_refreshing(self): return self._completer_thread and self._completer_thread.is_alive() def _bg_refresh(self, sqlexecute, callbacks): completer = SQLCompleter(smart_completion=True) # Create a new pgexecute method to popoulate the completions. e = sqlexecute executor = SQLExecute(e.dbname, e.user, e.password, e.host, e.port, e.socket, e.charset) # If callbacks is a single function then push it into a list. if callable(callbacks): callbacks = [callbacks] while 1: for refresher in self.refreshers.values(): refresher(completer, executor) if self._restart_refresh.is_set(): self._restart_refresh.clear() break else: # Break out of while loop if the for loop finishes natually # without hitting the break statement. break # Start over the refresh from the beginning if the for loop hit the # break statement. continue for callback in callbacks: callback(completer) def refresher(name, refreshers=CompletionRefresher.refreshers): """Decorator to add the decorated function to the dictionary of refreshers. 
Any function decorated with a @refresher will be executed as part of the completion refresh routine.""" def wrapper(wrapped): refreshers[name] = wrapped return wrapped return wrapper @refresher('databases') def refresh_databases(completer, executor): completer.extend_database_names(executor.databases()) @refresher('schemata') def refresh_schemata(completer, executor): # schemata - In MySQL Schema is the same as database. But for mycli # schemata will be the name of the current database. completer.extend_schemata(executor.dbname) completer.set_dbname(executor.dbname) @refresher('tables') def refresh_tables(completer, executor): completer.extend_relations(executor.tables(), kind='tables') completer.extend_columns(executor.table_columns(), kind='tables') @refresher('users') def refresh_users(completer, executor): completer.extend_users(executor.users()) # @refresher('views') # def refresh_views(completer, executor): # completer.extend_relations(executor.views(), kind='views') # completer.extend_columns(executor.view_columns(), kind='views') @refresher('functions') def refresh_functions(completer, executor): completer.extend_functions(executor.functions()) @refresher('special_commands') def refresh_special(completer, executor): completer.extend_special_commands(COMMANDS.keys()) @refresher('show_commands') def refresh_show_commands(completer, executor): completer.extend_show_items(executor.show_candidates()) mycli-1.5.2/mycli/config.py0000644000076600000240000001103312615144005016164 0ustar amjithstaff00000000000000import shutil from io import BytesIO, TextIOWrapper import logging import os from os.path import expanduser, exists import struct from configobj import ConfigObj try: from Crypto.Cipher import AES except ImportError: AES = None class CryptoError(Exception): """ Exception to signal about pycrypto not available. """ pass logger = logging.getLogger(__name__) def load_config(usr_cfg, def_cfg=None): cfg = ConfigObj() cfg.merge(ConfigObj(def_cfg, interpolation=False)) cfg.merge(ConfigObj(expanduser(usr_cfg), interpolation=False)) cfg.filename = expanduser(usr_cfg) return cfg def write_default_config(source, destination, overwrite=False): destination = expanduser(destination) if not overwrite and exists(destination): return shutil.copyfile(source, destination) def get_mylogin_cnf_path(): """Return the path to the .mylogin.cnf file or None if doesn't exist.""" app_data = os.getenv('APPDATA') if app_data is None: mylogin_cnf_dir = os.path.expanduser('~') else: mylogin_cnf_dir = os.path.join(app_data, 'MySQL') mylogin_cnf_dir = os.path.abspath(mylogin_cnf_dir) mylogin_cnf_path = os.path.join(mylogin_cnf_dir, '.mylogin.cnf') if exists(mylogin_cnf_path): logger.debug("Found login path file at '{0}'".format(mylogin_cnf_path)) return mylogin_cnf_path return None def open_mylogin_cnf(name): """Open a readable version of .mylogin.cnf. Returns the file contents as a TextIOWrapper object. :param str name: The pathname of the file to be opened. :return: the login path file or None """ try: with open(name, 'rb') as f: plaintext = read_and_decrypt_mylogin_cnf(f) except (OSError, IOError): logger.error('Unable to open login path file.') return None if not isinstance(plaintext, BytesIO): logger.error('Unable to read login path file.') return None return TextIOWrapper(plaintext) def read_and_decrypt_mylogin_cnf(f): """Read and decrypt the contents of .mylogin.cnf. This decryption algorithm mimics the code in MySQL's mysql_config_editor.cc. The login key is 20-bytes of random non-printable ASCII. 
It is written to the actual login path file. It is used to generate the real key used in the AES cipher. :param f: an I/O object opened in binary mode :return: the decrypted login path file :rtype: io.BytesIO or None """ if AES is None: raise CryptoError('pycrypto is not available.') # Number of bytes used to store the length of ciphertext. MAX_CIPHER_STORE_LEN = 4 LOGIN_KEY_LEN = 20 # Move past the unused buffer. buf = f.read(4) if not buf or len(buf) != 4: logger.error('Login path file is blank or incomplete.') return None # Read the login key. key = f.read(LOGIN_KEY_LEN) # Generate the real key. rkey = [0] * 16 for i in range(LOGIN_KEY_LEN): try: rkey[i % 16] ^= ord(key[i:i+1]) except TypeError: # ord() was unable to get the value of the byte. logger.error('Unable to generate login path AES key.') return None rkey = struct.pack('16B', *rkey) # Create a cipher object using the key. aes_cipher = AES.new(rkey, AES.MODE_ECB) # Create a bytes buffer to hold the plaintext. plaintext = BytesIO() while True: # Read the length of the ciphertext. len_buf = f.read(MAX_CIPHER_STORE_LEN) if len(len_buf) < MAX_CIPHER_STORE_LEN: break cipher_len, = struct.unpack("<i", len_buf) # Read cipher_len bytes from the file and decrypt. cipher = f.read(cipher_len) pplain = aes_cipher.decrypt(cipher) # Determine the pad length from the last byte of the decrypted block. pad_len = ord(pplain[-1:]) if pad_len > len(pplain) or len(set(pplain[-pad_len:])) != 1: # Pad length should be less than or equal to the length of the # plaintext. The pad should have a single unique byte. logger.warning('Invalid pad found in login path file.') continue # Get rid of pad. plain = pplain[:-pad_len] plaintext.write(plain) if plaintext.tell() == 0: logger.error('No data successfully decrypted from login path file.') return None plaintext.seek(0) return plaintext mycli-1.5.2/mycli/encodingutils.py0000644000076600000240000000107612511142310017564 0ustar amjithstaff00000000000000import sys PY2 = sys.version_info[0] == 2 PY3 = sys.version_info[0] == 3 def unicode2utf8(arg): """ Only in Python 2. Psycopg2 expects the args as bytes not unicode. In Python 3 the args are expected as unicode. """ if PY2 and isinstance(arg, unicode): return arg.encode('utf-8') return arg def utf8tounicode(arg): """ Only in Python 2. Psycopg2 returns the error message as utf-8. In Python 3 the errors are returned as unicode. """ if PY2 and isinstance(arg, str): return arg.decode('utf-8') return arg mycli-1.5.2/mycli/filters.py0000644000076600000240000000063612610237744016402 0ustar amjithstaff00000000000000from prompt_toolkit.filters import Filter class HasSelectedCompletion(Filter): """Enable when the current buffer has a selected completion.""" def __call__(self, cli): complete_state = cli.current_buffer.complete_state return (complete_state is not None and complete_state.current_completion is not None) def __repr__(self): return "HasSelectedCompletion()" mycli-1.5.2/mycli/key_bindings.py0000644000076600000240000000542012610237744017377 0ustar amjithstaff00000000000000import logging from prompt_toolkit.keys import Keys from prompt_toolkit.key_binding.manager import KeyBindingManager from prompt_toolkit.filters import Condition from .filters import HasSelectedCompletion _logger = logging.getLogger(__name__) def mycli_bindings(get_key_bindings, set_key_bindings): """ Custom key bindings for mycli. """ assert callable(get_key_bindings) assert callable(set_key_bindings) key_binding_manager = KeyBindingManager( enable_open_in_editor=True, enable_system_bindings=True, enable_vi_mode=Condition(lambda cli: get_key_bindings() == 'vi')) @key_binding_manager.registry.add_binding(Keys.F2) def _(event): """ Enable/Disable SmartCompletion Mode.
""" _logger.debug('Detected F2 key.') buf = event.cli.current_buffer buf.completer.smart_completion = not buf.completer.smart_completion @key_binding_manager.registry.add_binding(Keys.F3) def _(event): """ Enable/Disable Multiline Mode. """ _logger.debug('Detected F3 key.') buf = event.cli.current_buffer buf.always_multiline = not buf.always_multiline @key_binding_manager.registry.add_binding(Keys.F4) def _(event): """ Toggle between Vi and Emacs mode. """ _logger.debug('Detected F4 key.') if get_key_bindings() == 'vi': set_key_bindings('emacs') else: set_key_bindings('vi') @key_binding_manager.registry.add_binding(Keys.Tab) def _(event): """ Force autocompletion at cursor. """ _logger.debug('Detected key.') b = event.cli.current_buffer if b.complete_state: b.complete_next() else: event.cli.start_completion(select_first=True) @key_binding_manager.registry.add_binding(Keys.ControlSpace) def _(event): """ Initialize autocompletion at cursor. If the autocompletion menu is not showing, display it with the appropriate completions for the context. If the menu is showing, select the next completion. """ _logger.debug('Detected key.') b = event.cli.current_buffer if b.complete_state: b.complete_next() else: event.cli.start_completion(select_first=False) @key_binding_manager.registry.add_binding(Keys.ControlJ, filter=HasSelectedCompletion()) def _(event): """ Makes the enter key work as the tab key only when showing the menu. """ _logger.debug('Detected key.') event.current_buffer.complete_state = None b = event.cli.current_buffer b.complete_state = None return key_binding_manager mycli-1.5.2/mycli/lexer.py0000644000076600000240000000047012541456046016052 0ustar amjithstaff00000000000000from pygments.lexer import inherit from pygments.lexers.sql import MySqlLexer from pygments.token import Keyword class MyCliLexer(MySqlLexer): """ Extends MySQL lexer to add keywords. """ tokens = { 'root': [ (r'\brepair\b', Keyword), inherit, ], } mycli-1.5.2/mycli/magic.py0000644000076600000240000000273012526476642016023 0ustar amjithstaff00000000000000from .main import MyCli import sql.parse import sql.connection import logging _logger = logging.getLogger(__name__) def load_ipython_extension(ipython): # This is called via the ipython command '%load_ext mycli.magic'. # First, load the sql magic if it isn't already loaded. if not ipython.find_line_magic('sql'): ipython.run_line_magic('load_ext', 'sql') # Register our own magic. 
ipython.register_magic_function(mycli_line_magic, 'line', 'mycli') def mycli_line_magic(line): _logger.debug('mycli magic called: %r', line) parsed = sql.parse.parse(line, {}) conn = sql.connection.Connection.get(parsed['connection']) try: # A corresponding mycli object already exists mycli = conn._mycli _logger.debug('Reusing existing mycli') except AttributeError: mycli = MyCli() u = conn.session.engine.url _logger.debug('New mycli: %r', str(u)) mycli.connect(u.database, u.host, u.username, u.port, u.password) conn._mycli = mycli # For convenience, print the connection alias print('Connected: {}'.format(conn.name)) try: mycli.run_cli() except SystemExit: pass if not mycli.query_history: return q = mycli.query_history[-1] if q.mutating: _logger.debug('Mutating query detected -- ignoring') return if q.successful: ipython = get_ipython() return ipython.run_cell_magic('sql', line, q.query) mycli-1.5.2/mycli/main.py0000755000076600000240000007464212621400226015662 0ustar amjithstaff00000000000000#!/usr/bin/env python from __future__ import unicode_literals from __future__ import print_function import os import sys import traceback import logging import threading from time import time from datetime import datetime from random import choice from io import open import click import sqlparse from prompt_toolkit import CommandLineInterface, Application, AbortAction from prompt_toolkit.enums import DEFAULT_BUFFER from prompt_toolkit.shortcuts import create_default_layout, create_eventloop from prompt_toolkit.document import Document from prompt_toolkit.filters import Always, HasFocus, IsDone from prompt_toolkit.layout.processors import (HighlightMatchingBracketProcessor, ConditionalProcessor) from prompt_toolkit.history import FileHistory from pygments.token import Token from configobj import ConfigObj, ConfigObjError from .packages.tabulate import tabulate, table_formats from .packages.expanded import expanded_table from .packages.special.main import (COMMANDS, NO_QUERY) import mycli.packages.special as special from .sqlcompleter import SQLCompleter from .clitoolbar import create_toolbar_tokens_func from .clistyle import style_factory from .sqlexecute import SQLExecute from .clibuffer import CLIBuffer from .completion_refresher import CompletionRefresher from .config import (write_default_config, load_config, get_mylogin_cnf_path, open_mylogin_cnf, CryptoError) from .key_bindings import mycli_bindings from .encodingutils import utf8tounicode from .lexer import MyCliLexer from .__init__ import __version__ click.disable_unicode_literals_warning = True try: from urlparse import urlparse except ImportError: from urllib.parse import urlparse from pymysql import OperationalError from collections import namedtuple # Query tuples are used for maintaining history Query = namedtuple('Query', ['query', 'successful', 'mutating']) PACKAGE_ROOT = os.path.dirname(__file__) class MyCli(object): default_prompt = '\\t \\u@\\h:\\d> ' defaults_suffix = None # In order of being loaded. Files lower in list override earlier ones. cnf_files = [ '/etc/my.cnf', '/etc/mysql/my.cnf', '/usr/local/etc/my.cnf', os.path.expanduser('~/.my.cnf') ] def __init__(self, sqlexecute=None, prompt=None, logfile=None, defaults_suffix=None, defaults_file=None, login_path=None): self.sqlexecute = sqlexecute self.logfile = logfile self.defaults_suffix = defaults_suffix self.login_path = login_path # self.cnf_files is a class variable that stores the list of mysql # config files to read in at launch. 
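# Later files in the list override values from earlier ones when the configs are merged.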
# If defaults_file is specified then override the class variable with # defaults_file. if defaults_file: self.cnf_files = [defaults_file] default_config = os.path.join(PACKAGE_ROOT, 'myclirc') write_default_config(default_config, '~/.myclirc') # Load config. c = self.config = load_config('~/.myclirc', default_config) self.multi_line = c['main'].as_bool('multi_line') self.destructive_warning = c['main'].as_bool('destructive_warning') self.key_bindings = c['main']['key_bindings'] special.set_timing_enabled(c['main'].as_bool('timing')) self.table_format = c['main']['table_format'] self.syntax_style = c['main']['syntax_style'] self.cli_style = c['colors'] self.wider_completion_menu = c['main'].as_bool('wider_completion_menu') # audit log if self.logfile is None and 'audit_log' in c['main']: try: self.logfile = open(os.path.expanduser(c['main']['audit_log']), 'a') except (IOError, OSError) as e: self.output('Error: Unable to open the audit log file. Your queries will not be logged.', err=True, fg='red') self.logfile = False self.completion_refresher = CompletionRefresher() self.logger = logging.getLogger(__name__) self.initialize_logging() prompt_cnf = self.read_my_cnf_files(self.cnf_files, ['prompt'])['prompt'] self.prompt_format = prompt or prompt_cnf or c['main']['prompt'] or \ self.default_prompt self.query_history = [] # Initialize completer. smart_completion = c['main'].as_bool('smart_completion') self.completer = SQLCompleter(smart_completion) self._completer_lock = threading.Lock() # Register custom special commands. self.register_special_commands() # Load .mylogin.cnf if it exists. mylogin_cnf_path = get_mylogin_cnf_path() if mylogin_cnf_path: try: mylogin_cnf = open_mylogin_cnf(mylogin_cnf_path) if mylogin_cnf_path and mylogin_cnf: # .mylogin.cnf gets read last, even if defaults_file is specified. self.cnf_files.append(mylogin_cnf) elif mylogin_cnf_path and not mylogin_cnf: # There was an error reading the login path file. print('Error: Unable to read login path file.') except CryptoError: click.secho('Warning: .mylogin.cnf was not read: pycrypto ' 'module is not available.') self.cli = None def register_special_commands(self): special.register_special_command(self.change_db, 'use', '\\u', 'Change to a new database.', aliases=('\\u',)) special.register_special_command(self.change_db, 'connect', '\\r', 'Reconnect to the database. Optional database argument.', aliases=('\\r', ), case_sensitive=True) special.register_special_command(self.refresh_completions, 'rehash', '\\#', 'Refresh auto-completions.', arg_type=NO_QUERY, aliases=('\\#',)) special.register_special_command(self.change_table_format, 'tableformat', '\\T', 'Change Table Type.', aliases=('\\T',), case_sensitive=True) special.register_special_command(self.execute_from_file, 'source', '\\. filename', 'Execute commands from file.', aliases=('\\.',)) special.register_special_command(self.change_prompt_format, 'prompt', '\\R', 'Change prompt format.', aliases=('\\R',), case_sensitive=True) def change_table_format(self, arg, **_): if not arg in table_formats(): msg = "Table type %s not yet implemented. 
Allowed types:" % arg for table_type in table_formats(): msg += "\n\t%s" % table_type yield (None, None, None, msg) else: self.table_format = arg yield (None, None, None, "Changed table Type to %s" % self.table_format) def change_db(self, arg, **_): if arg is None: self.sqlexecute.connect() else: self.sqlexecute.connect(database=arg) yield (None, None, None, 'You are now connected to database "%s" as ' 'user "%s"' % (self.sqlexecute.dbname, self.sqlexecute.user)) def execute_from_file(self, arg, **_): if not arg: message = 'Missing required argument, filename.' return [(None, None, None, message)] try: with open(os.path.expanduser(arg), encoding='utf-8') as f: query = f.read() except IOError as e: return [(None, None, None, str(e))] return self.sqlexecute.run(query) def change_prompt_format(self, arg, **_): """ Change the prompt format. """ if not arg: message = 'Missing required argument, format.' return [(None, None, None, message)] self.prompt_format = self.get_prompt(arg) return [(None, None, None, "Changed prompt format to %s" % arg)] def initialize_logging(self): log_file = self.config['main']['log_file'] log_level = self.config['main']['log_level'] level_map = {'CRITICAL': logging.CRITICAL, 'ERROR': logging.ERROR, 'WARNING': logging.WARNING, 'INFO': logging.INFO, 'DEBUG': logging.DEBUG } handler = logging.FileHandler(os.path.expanduser(log_file)) formatter = logging.Formatter( '%(asctime)s (%(process)d/%(threadName)s) ' '%(name)s %(levelname)s - %(message)s') handler.setFormatter(formatter) root_logger = logging.getLogger('mycli') root_logger.addHandler(handler) root_logger.setLevel(level_map[log_level.upper()]) root_logger.debug('Initializing mycli logging.') root_logger.debug('Log file %r.', log_file) def connect_uri(self, uri): uri = urlparse(uri) database = uri.path[1:] # ignore the leading fwd slash self.connect(database, uri.username, uri.password, uri.hostname, uri.port) def read_my_cnf_files(self, files, keys): """ Reads a list of config files and merges them. The last one will win. :param files: list of files to read :param keys: list of keys to retrieve :returns: tuple, with None for missing keys. """ cnf = ConfigObj() for _file in files: try: cnf.merge(ConfigObj(_file, interpolation=False)) except ConfigObjError as e: self.logger.error('Error parsing %r.', _file) self.logger.error('Recovering partially parsed config values.') cnf.merge(e.config) pass sections = ['client'] if self.login_path and self.login_path != 'client': sections.append(self.login_path) if self.defaults_suffix: sections.extend([sect + self.defaults_suffix for sect in sections]) def get(key): result = None for sect in cnf: if sect in sections and key in cnf[sect]: result = cnf[sect][key] return result return dict([(x, get(x)) for x in keys]) def connect(self, database='', user='', passwd='', host='', port='', socket='', charset=''): cnf = {'database': None, 'user': None, 'password': None, 'host': None, 'port': None, 'socket': None, 'default-character-set': None} cnf = self.read_my_cnf_files(self.cnf_files, cnf.keys()) # Fall back to config values only if user did not specify a value. database = database or cnf['database'] if port or host: socket = '' else: socket = socket or cnf['socket'] user = user or cnf['user'] or os.getenv('USER') host = host or cnf['host'] or 'localhost' port = int(port or cnf['port'] or 3306) passwd = passwd or cnf['password'] charset = charset or cnf['default-character-set'] or 'utf8' # Connect to the database. 
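# If the first attempt is rejected with an access-denied error, prompt for the password interactively and retry once.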
try: try: sqlexecute = SQLExecute(database, user, passwd, host, port, socket, charset) except OperationalError as e: if ('Access denied for user' in e.args[1]): passwd = click.prompt('Password', hide_input=True, show_default=False, type=str) sqlexecute = SQLExecute(database, user, passwd, host, port, socket, charset) else: raise e except Exception as e: # Connecting to a database could fail. self.logger.debug('Database connection failed: %r.', e) self.logger.error("traceback: %r", traceback.format_exc()) self.output(str(e), err=True, fg='red') exit(1) self.sqlexecute = sqlexecute def handle_editor_command(self, cli, document): """ Editor command is any query that is prefixed or suffixed by a '\e'. The reason for a while loop is because a user might edit a query multiple times. For eg: "select * from \e" to edit it in vim, then come back to the prompt with the edited query "select * from blah where q = 'abc'\e" to edit it again. :param cli: CommandLineInterface :param document: Document :return: Document """ while special.editor_command(document.text): filename = special.get_filename(document.text) sql, message = special.open_external_editor(filename, sql=document.text) if message: # Something went wrong. Raise an exception and bail. raise RuntimeError(message) cli.current_buffer.document = Document(sql, cursor_position=len(sql)) document = cli.run(False) continue return document def run_cli(self): sqlexecute = self.sqlexecute logger = self.logger original_less_opts = self.adjust_less_opts() self.set_pager_from_config() self.refresh_completions() def set_key_bindings(value): if value not in ('emacs', 'vi'): value = 'emacs' self.key_bindings = value project_root = os.path.dirname(PACKAGE_ROOT) author_file = os.path.join(project_root, 'AUTHORS') sponsor_file = os.path.join(project_root, 'SPONSORS') key_binding_manager = mycli_bindings(get_key_bindings=lambda: self.key_bindings, set_key_bindings=set_key_bindings) print('Version:', __version__) print('Chat: https://gitter.im/dbcli/mycli') print('Mail: https://groups.google.com/forum/#!forum/mycli-users') print('Home: http://mycli.net') print('Thanks to the contributor -', thanks_picker([author_file, sponsor_file])) def prompt_tokens(cli): return [(Token.Prompt, self.get_prompt(self.prompt_format))] get_toolbar_tokens = create_toolbar_tokens_func(lambda: self.key_bindings, self.completion_refresher.is_refreshing) layout = create_default_layout(lexer=MyCliLexer, reserve_space_for_menu=True, multiline=True, get_prompt_tokens=prompt_tokens, get_bottom_toolbar_tokens=get_toolbar_tokens, display_completions_in_columns=self.wider_completion_menu, extra_input_processors=[ ConditionalProcessor( processor=HighlightMatchingBracketProcessor(chars='[](){}'), filter=HasFocus(DEFAULT_BUFFER) & ~IsDone()), ]) with self._completer_lock: buf = CLIBuffer(always_multiline=self.multi_line, completer=self.completer, history=FileHistory(os.path.expanduser('~/.mycli-history')), complete_while_typing=Always()) application = Application(style=style_factory(self.syntax_style, self.cli_style), layout=layout, buffer=buf, key_bindings_registry=key_binding_manager.registry, on_exit=AbortAction.RAISE_EXCEPTION, ignore_case=True) self.cli = CommandLineInterface(application=application, eventloop=create_eventloop()) try: while True: document = self.cli.run() special.set_expanded_output(False) # The reason we check here instead of inside the sqlexecute is # because we want to raise the Exit exception which will be # caught by the try/except block that wraps the # 
sqlexecute.run() statement. if quit_command(document.text): raise EOFError try: document = self.handle_editor_command(self.cli, document) except RuntimeError as e: logger.error("sql: %r, error: %r", document.text, e) logger.error("traceback: %r", traceback.format_exc()) self.output(str(e), err=True, fg='red') continue if self.destructive_warning: destroy = confirm_destructive_query(document.text) if destroy is None: pass # Query was not destructive. Nothing to do here. elif destroy is True: self.output('Your call!') else: self.output('Wise choice!') continue # Keep track of whether or not the query is mutating. In case # of a multi-statement query, the overall query is considered # mutating if any one of the component statements is mutating mutating = False try: logger.debug('sql: %r', document.text) if self.logfile: self.logfile.write('\n# %s\n' % datetime.now()) self.logfile.write(document.text) self.logfile.write('\n') successful = False start = time() res = sqlexecute.run(document.text) successful = True output = [] total = 0 for title, cur, headers, status in res: logger.debug("headers: %r", headers) logger.debug("rows: %r", cur) logger.debug("status: %r", status) threshold = 1000 if (is_select(status) and cur and cur.rowcount > threshold): self.output('The result set has more than %s rows.' % threshold, fg='red') if not click.confirm('Do you want to continue?'): self.output("Aborted!", err=True, fg='red') break output.extend(format_output(title, cur, headers, status, self.table_format)) end = time() total += end - start mutating = mutating or is_mutating(status) except UnicodeDecodeError as e: import pymysql if pymysql.VERSION < ('0', '6', '7'): message = ('You are running an older version of pymysql.\n' 'Please upgrade to 0.6.7 or above to view binary data.\n' 'Try \'pip install -U pymysql\'.') self.output(message) else: raise e except KeyboardInterrupt: # Restart connection to the database sqlexecute.connect() logger.debug("cancelled query, sql: %r", document.text) self.output("cancelled query", err=True, fg='red') except NotImplementedError: self.output('Not Yet Implemented.', fg="yellow") except OperationalError as e: logger.debug("Exception: %r", e) reconnect = True if (e.args[0] in (2003, 2006, 2013)): reconnect = click.prompt('Connection reset. Reconnect (Y/n)', show_default=False, type=bool, default=True) if reconnect: logger.debug('Attempting to reconnect.') try: sqlexecute.connect() logger.debug('Reconnected successfully.') self.output('Reconnected!\nTry the command again.', fg='green') except OperationalError as e: logger.debug('Reconnect failed. e: %r', e) self.output(str(e), err=True, fg='red') continue # If reconnection failed, don't proceed further. else: # If user chooses not to reconnect, don't proceed further. continue else: logger.error("sql: %r, error: %r", document.text, e) logger.error("traceback: %r", traceback.format_exc()) self.output(str(e), err=True, fg='red') except Exception as e: logger.error("sql: %r, error: %r", document.text, e) logger.error("traceback: %r", traceback.format_exc()) self.output(str(e), err=True, fg='red') else: try: self.output_via_pager('\n'.join(output)) except KeyboardInterrupt: pass if special.is_timing_enabled(): self.output('Time: %0.03fs' % total) # Refresh the table names and column names if necessary. 
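# Statements such as CREATE, ALTER, DROP and USE (or \u/connect) invalidate the cached completion metadata, so kick off a background refresh.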
if need_completion_refresh(document.text): self.refresh_completions( reset=need_completion_reset(document.text)) finally: if self.logfile is False: self.output("Warning: This query was not logged.", err=True, fg='red') query = Query(document.text, successful, mutating) self.query_history.append(query) except EOFError: self.output('Goodbye!') finally: # Reset the less opts back to original. logger.debug('Restoring env var LESS to %r.', original_less_opts) os.environ['LESS'] = original_less_opts os.environ['PAGER'] = special.get_original_pager() def output(self, text, **kwargs): if self.logfile: self.logfile.write(utf8tounicode(text)) self.logfile.write('\n') click.secho(text, **kwargs) def output_via_pager(self, text): if self.logfile: self.logfile.write(text) self.logfile.write('\n') click.echo_via_pager(text) def adjust_less_opts(self): less_opts = os.environ.get('LESS', '') self.logger.debug('Original value for LESS env var: %r', less_opts) os.environ['LESS'] = '-SRXF' return less_opts def set_pager_from_config(self): cnf = self.read_my_cnf_files(self.cnf_files, ['pager']) if cnf['pager']: special.set_pager(cnf['pager']) def refresh_completions(self, reset=False): if reset: with self._completer_lock: self.completer.reset_completions() self.completion_refresher.refresh(self.sqlexecute, self._on_completions_refreshed) return [(None, None, None, 'Auto-completion refresh started in the background.')] def _on_completions_refreshed(self, new_completer): self._swap_completer_objects(new_completer) if self.cli: # After refreshing, redraw the CLI to clear the statusbar # "Refreshing completions..." indicator self.cli.request_redraw() def _swap_completer_objects(self, new_completer): """Swap the completer object in cli with the newly created completer. """ with self._completer_lock: self.completer = new_completer # When mycli is first launched we call refresh_completions before # instantiating the cli object. So it is necessary to check if cli # exists before trying the replace the completer object in cli. if self.cli: self.cli.current_buffer.completer = new_completer def get_completions(self, text, cursor_positition): with self._completer_lock: return self.completer.get_completions( Document(text=text, cursor_position=cursor_positition), None) def get_prompt(self, string): sqlexecute = self.sqlexecute string = string.replace('\\u', sqlexecute.user or '(none)') string = string.replace('\\h', sqlexecute.host or '(none)') string = string.replace('\\d', sqlexecute.dbname or '(none)') string = string.replace('\\t', sqlexecute.server_type()[0] or 'mycli') string = string.replace('\\n', "\n") return string @click.command() @click.option('-h', '--host', envvar='MYSQL_HOST', help='Host address of the database.') @click.option('-P', '--port', envvar='MYSQL_TCP_PORT', type=int, help='Port number to use for connection. 
Honors ' '$MYSQL_TCP_PORT') @click.option('-u', '--user', help='User name to connect to the database.') @click.option('-S', '--socket', envvar='MYSQL_UNIX_PORT', help='The socket file to use for connection.') @click.option('-p', '--password', 'password', envvar='MYSQL_PWD', type=str, help='Password to connect to the database') @click.option('--pass', 'password', envvar='MYSQL_PWD', type=str, help='Password to connect to the database') @click.option('-v', '--version', is_flag=True, help='Version of mycli.') @click.option('-D', '--database', 'dbname', help='Database to use.') @click.option('-R', '--prompt', 'prompt', help='Prompt format (Default: "{0}")'.format( MyCli.default_prompt)) @click.option('-l', '--logfile', type=click.File(mode='a', encoding='utf-8'), help='Log every query and its results to a file.') @click.option('--defaults-group-suffix', type=str, help='Read config group with the specified suffix.') @click.option('--defaults-file', type=click.Path(), help='Only read default options from the given file') @click.option('--login-path', type=str, help='Read this path from the login file.') @click.argument('database', default='', nargs=1) def cli(database, user, host, port, socket, password, dbname, version, prompt, logfile, defaults_group_suffix, defaults_file, login_path): if version: print('Version:', __version__) sys.exit(0) mycli = MyCli(prompt=prompt, logfile=logfile, defaults_suffix=defaults_group_suffix, defaults_file=defaults_file, login_path=login_path) # Choose which ever one has a valid value. database = database or dbname if database and '://' in database: mycli.connect_uri(database) else: mycli.connect(database, user, password, host, port, socket) mycli.logger.debug('Launch Params: \n' '\tdatabase: %r' '\tuser: %r' '\thost: %r' '\tport: %r', database, user, host, port) mycli.run_cli() def format_output(title, cur, headers, status, table_format): output = [] if title: # Only print the title if it's not None. output.append(title) if cur: headers = [utf8tounicode(x) for x in headers] if special.is_expanded_output(): output.append(expanded_table(cur, headers)) else: output.append(tabulate(cur, headers, tablefmt=table_format, missingval='')) if status: # Only print the status if it's not None. output.append(status) return output def need_completion_refresh(queries): """Determines if the completion needs a refresh by checking if the sql statement is an alter, create, drop or change db.""" for query in sqlparse.split(queries): try: first_token = query.split()[0] if first_token.lower() in ('alter', 'create', 'use', '\\r', '\\u', 'connect', 'drop'): return True except Exception: return False def need_completion_reset(queries): """Determines if the statement is a database switch such as 'use' or '\\u'. When a database is changed the existing completions must be reset before we start the completion refresh for the new database. 
""" for query in sqlparse.split(queries): try: first_token = query.split()[0] if first_token.lower() in ('use', '\\u'): return True except Exception: return False def is_mutating(status): """Determines if the statement is mutating based on the status.""" if not status: return False mutating = set(['insert', 'update', 'delete', 'alter', 'create', 'drop', 'replace', 'truncate', 'load']) return status.split(None, 1)[0].lower() in mutating def is_select(status): """Returns true if the first word in status is 'select'.""" if not status: return False return status.split(None, 1)[0].lower() == 'select' def confirm_destructive_query(queries): """Checks if the query is destructive and prompts the user to confirm. Returns: None if the query is non-destructive. True if the query is destructive and the user wants to proceed. False if the query is destructive and the user doesn't want to proceed. """ destructive = set(['drop', 'shutdown', 'delete', 'truncate']) queries = queries.strip() for query in sqlparse.split(queries): try: first_token = query.split()[0] if first_token.lower() in destructive: destroy = click.prompt("You're about to run a destructive command.\nDo you want to proceed? (y/n)", type=bool) return destroy except Exception: return False def quit_command(sql): return (sql.strip().lower() == 'exit' or sql.strip().lower() == 'quit' or sql.strip() == '\q' or sql.strip() == ':q') def thanks_picker(files=()): for filename in files: with open(filename) as f: contents = f.readlines() return choice([x.split('*')[1].strip() for x in contents if x.startswith('*')]) if __name__ == "__main__": cli() mycli-1.5.2/mycli/myclirc0000644000076600000240000000557012615144005015743 0ustar amjithstaff00000000000000# vi: ft=dosini [main] # Enables context sensitive auto-completion. If this is disabled the all # possible completions will be listed. smart_completion = True # Multi-line mode allows breaking up the sql statements into multiple lines. If # this is set to True, then the end of the statements must have a semi-colon. # If this is set to False then sql statements can't be split into multiple # lines. End of line (return) is considered as the end of the statement. multi_line = False # Destructive warning mode will alert you before executing a sql statement # that may cause harm to the database such as "drop table", "drop database" # or "shutdown". destructive_warning = True # log_file location. log_file = ~/.mycli.log # Default log level. Possible values: "CRITICAL", "ERROR", "WARNING", "INFO" # and "DEBUG". log_level = INFO # Log every query and its results to a file. Enable this by uncommenting the # line below. # audit_log = ~/.mycli-audit.log # Timing of sql statments and table rendering. timing = True # Table format. Possible values: psql, plain, simple, grid, fancy_grid, pipe, # orgtbl, rst, mediawiki, html, latex, latex_booktabs, tsv. # Recommended: psql, fancy_grid and grid. table_format = psql # Syntax Style. Possible values: manni, igor, xcode, vim, autumn, vs, rrt, # native, perldoc, borland, tango, emacs, friendly, monokai, paraiso-dark, # colorful, murphy, bw, pastie, paraiso-light, trac, default, fruity syntax_style = default # Keybindings: Possible values: emacs, vi. # Emacs mode: Ctrl-A is home, Ctrl-E is end. All emacs keybindings are available in the REPL. # When Vi mode is enabled you can use modal editing features offered by Vi in the REPL. key_bindings = emacs # Enabling this option will show the suggestions in a wider menu. Thus more items are suggested. 
wider_completion_menu = False # MySQL prompt # \t - Product type (Percona, MySQL, Mariadb) # \u - Username # \h - Hostname of the server # \d - Database name # \n - Newline prompt = '\t \u@\h:\d> ' # Custom colors for the completion menu, toolbar, etc. [colors] # Completion menus. Token.Menu.Completions.Completion.Current = 'bg:#00aaaa #000000' Token.Menu.Completions.Completion = 'bg:#008888 #ffffff' Token.Menu.Completions.MultiColumnMeta = 'bg:#aaffff #000000' Token.Menu.Completions.ProgressButton = 'bg:#003333' Token.Menu.Completions.ProgressBar = 'bg:#00aaaa' # Selected text. Token.SelectedText = '#ffffff bg:#6666aa' # Search matches. (reverse-i-search) Token.SearchMatch = '#ffffff bg:#4444aa' Token.SearchMatch.Current = '#ffffff bg:#44aa44' # The bottom toolbar. Token.Toolbar.Off = 'bg:#222222 #888888' Token.Toolbar.On = 'bg:#222222 #ffffff' # Search/arg/system toolbars. Token.Toolbar.Search = 'noinherit bold' Token.Toolbar.Search.Text = 'nobold' Token.Toolbar.System = 'noinherit bold' Token.Toolbar.Arg = 'noinherit bold' Token.Toolbar.Arg.Text = 'nobold' # Favorite queries. [favorite_queries] mycli-1.5.2/mycli/packages/0000755000076600000240000000000012621400311016115 5ustar amjithstaff00000000000000mycli-1.5.2/mycli/packages/__init__.py0000644000076600000240000000000012511142310020214 0ustar amjithstaff00000000000000mycli-1.5.2/mycli/packages/completion_engine.py0000644000076600000240000002672612615144005022212 0ustar amjithstaff00000000000000from __future__ import print_function import sys import sqlparse from sqlparse.sql import Comparison, Identifier, Where from .parseutils import last_word, extract_tables, find_prev_keyword from .special import parse_special_command PY2 = sys.version_info[0] == 2 PY3 = sys.version_info[0] == 3 if PY3: string_types = str else: string_types = basestring def suggest_type(full_text, text_before_cursor): """Takes the full_text that is typed so far and also the text before the cursor to suggest completion type and scope. Returns a tuple with a type of entity ('table', 'column' etc) and a scope. A scope for a column category will be a list of tables. """ word_before_cursor = last_word(text_before_cursor, include='many_punctuations') identifier = None # If we've partially typed a word then word_before_cursor won't be an empty # string. In that case we want to remove the partially typed string before # sending it to the sqlparser. Otherwise the last token will always be the # partially typed string which renders the smart completion useless because # it will always return the list of keywords as completion. 
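# For example, given "SELECT co" only "SELECT " is parsed and "co" is treated as the partially typed word.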
if word_before_cursor: if word_before_cursor[-1] == '(' or word_before_cursor[0] == '\\': parsed = sqlparse.parse(text_before_cursor) else: parsed = sqlparse.parse( text_before_cursor[:-len(word_before_cursor)]) # word_before_cursor may include a schema qualification, like # "schema_name.partial_name" or "schema_name.", so parse it # separately p = sqlparse.parse(word_before_cursor)[0] if p.tokens and isinstance(p.tokens[0], Identifier): identifier = p.tokens[0] else: parsed = sqlparse.parse(text_before_cursor) if len(parsed) > 1: # Multiple statements being edited -- isolate the current one by # cumulatively summing statement lengths to find the one that bounds the # current position current_pos = len(text_before_cursor) stmt_start, stmt_end = 0, 0 for statement in parsed: stmt_len = len(statement.to_unicode()) stmt_start, stmt_end = stmt_end, stmt_end + stmt_len if stmt_end >= current_pos: text_before_cursor = full_text[stmt_start:current_pos] full_text = full_text[stmt_start:] break elif parsed: # A single statement statement = parsed[0] else: # The empty string statement = None # Check for special commands and handle those separately if statement: # Be careful here because trivial whitespace is parsed as a statement, # but the statement won't have a first token tok1 = statement.token_first() if tok1 and tok1.value == '\\': return suggest_special(text_before_cursor) last_token = statement and statement.token_prev(len(statement.tokens)) or '' return suggest_based_on_last_token(last_token, text_before_cursor, full_text, identifier) def suggest_special(text): text = text.lstrip() cmd, arg = parse_special_command(text) if cmd == text: # Trying to complete the special command itself return [{'type': 'special'}] if cmd in ('\\u', '\\r'): return [{'type': 'database'}] if cmd in ('\\T'): return [{'type': 'table_format'}] if cmd in ['\\f', '\\fs', '\\fd']: return [{'type': 'favoritequery'}] if cmd in ['\\dt']: return [ {'type': 'table', 'schema': []}, {'type': 'view', 'schema': []}, {'type': 'schema'}, ] return [{'type': 'keyword'}, {'type': 'special'}] def suggest_based_on_last_token(token, text_before_cursor, full_text, identifier): if isinstance(token, string_types): token_v = token.lower() elif isinstance(token, Comparison): # If 'token' is a Comparison type such as # 'select * FROM abc a JOIN def d ON a.id = d.'. Then calling # token.value on the comparison type will only return the lhs of the # comparison. In this case a.id. So we need to do token.tokens to get # both sides of the comparison and pick the last token out of that # list. token_v = token.tokens[-1].value.lower() elif isinstance(token, Where): # sqlparse groups all tokens from the where clause into a single token # list. This means that token.value may be something like # 'where foo > 5 and '. 
We need to look "inside" token.tokens to handle # suggestions in complicated where clauses correctly prev_keyword, text_before_cursor = find_prev_keyword(text_before_cursor) return suggest_based_on_last_token(prev_keyword, text_before_cursor, full_text, identifier) else: token_v = token.value.lower() if not token: return [{'type': 'keyword'}, {'type': 'special'}] elif token_v.endswith('('): p = sqlparse.parse(text_before_cursor)[0] if p.tokens and isinstance(p.tokens[-1], Where): # Four possibilities: # 1 - Parenthesized clause like "WHERE foo AND (" # Suggest columns/functions # 2 - Function call like "WHERE foo(" # Suggest columns/functions # 3 - Subquery expression like "WHERE EXISTS (" # Suggest keywords, in order to do a subquery # 4 - Subquery OR array comparison like "WHERE foo = ANY(" # Suggest columns/functions AND keywords. (If we wanted to be # really fancy, we could suggest only array-typed columns) column_suggestions = suggest_based_on_last_token('where', text_before_cursor, full_text, identifier) # Check for a subquery expression (cases 3 & 4) where = p.tokens[-1] prev_tok = where.token_prev(len(where.tokens) - 1) if isinstance(prev_tok, Comparison): # e.g. "SELECT foo FROM bar WHERE foo = ANY(" prev_tok = prev_tok.tokens[-1] prev_tok = prev_tok.value.lower() if prev_tok == 'exists': return [{'type': 'keyword'}] else: return column_suggestions # Get the token before the parens prev_tok = p.token_prev(len(p.tokens) - 1) if prev_tok and prev_tok.value and prev_tok.value.lower() == 'using': # tbl1 INNER JOIN tbl2 USING (col1, col2) tables = extract_tables(full_text) # suggest columns that are present in more than one table return [{'type': 'column', 'tables': tables, 'drop_unique': True}] elif p.token_first().value.lower() == 'select': # If the lparen is preceeded by a space chances are we're about to # do a sub-select. 
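# e.g. "SELECT * FROM (" -- a lone open paren after whitespace means a subquery is likely, so suggest keywords.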
if last_word(text_before_cursor, 'all_punctuations').startswith('('): return [{'type': 'keyword'}] elif p.token_first().value.lower() == 'show': return [{'type': 'show'}] # We're probably in a function argument list return [{'type': 'column', 'tables': extract_tables(full_text)}] elif token_v in ('set', 'by', 'distinct'): return [{'type': 'column', 'tables': extract_tables(full_text)}] elif token_v in ('show'): return [{'type': 'show'}] elif token_v in ('to',): p = sqlparse.parse(text_before_cursor)[0] if p.token_first().value.lower() == 'change': return [{'type': 'change'}] else: return [{'type': 'user'}] elif token_v in ('user', 'for'): return [{'type': 'user'}] elif token_v in ('select', 'where', 'having'): # Check for a table alias or schema qualification parent = (identifier and identifier.get_parent_name()) or [] if parent: tables = extract_tables(full_text) tables = [t for t in tables if identifies(parent, *t)] return [{'type': 'column', 'tables': tables}, {'type': 'table', 'schema': parent}, {'type': 'view', 'schema': parent}, {'type': 'function', 'schema': parent}] else: return [{'type': 'column', 'tables': extract_tables(full_text)}, {'type': 'function', 'schema': []}, {'type': 'keyword'}] elif (token_v.endswith('join') and token.is_keyword) or (token_v in ('copy', 'from', 'update', 'into', 'describe', 'truncate', 'desc', 'explain')): schema = (identifier and identifier.get_parent_name()) or [] # Suggest tables from either the currently-selected schema or the # public schema if no schema has been specified suggest = [{'type': 'table', 'schema': schema}] if not schema: # Suggest schemas suggest.insert(0, {'type': 'schema'}) # Only tables can be TRUNCATED, otherwise suggest views if token_v != 'truncate': suggest.append({'type': 'view', 'schema': schema}) return suggest elif token_v in ('table', 'view', 'function'): # E.g. 'DROP FUNCTION ', 'ALTER TABLE ' rel_type = token_v schema = (identifier and identifier.get_parent_name()) or [] if schema: return [{'type': rel_type, 'schema': schema}] else: return [{'type': 'schema'}, {'type': rel_type, 'schema': []}] elif token_v == 'on': tables = extract_tables(full_text) # [(schema, table, alias), ...] parent = (identifier and identifier.get_parent_name()) or [] if parent: # "ON parent." # parent can be either a schema name or table alias tables = [t for t in tables if identifies(parent, *t)] return [{'type': 'column', 'tables': tables}, {'type': 'table', 'schema': parent}, {'type': 'view', 'schema': parent}, {'type': 'function', 'schema': parent}] else: # ON # Use table alias if there is one, otherwise the table name aliases = [t[2] or t[1] for t in tables] suggest = [{'type': 'alias', 'aliases': aliases}] # The lists of 'aliases' could be empty if we're trying to complete # a GRANT query. eg: GRANT SELECT, INSERT ON # In that case we just suggest all tables. 
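            # For example, "SELECT * FROM abc a JOIN def d ON " suggests the
            # aliases 'a' and 'd', while "GRANT SELECT, INSERT ON " yields no
            # tables at all, so the fallback below offers table names instead.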
if not aliases: suggest.append({'type': 'table', 'schema': parent}) return suggest elif token_v in ('use', 'database', 'template', 'connect'): # "\c ", "DROP DATABASE ", # "CREATE DATABASE WITH TEMPLATE " return [{'type': 'database'}] elif token_v == 'tableformat': return [{'type': 'table_format'}] elif token_v.endswith(',') or token_v in ['=', 'and', 'or']: prev_keyword, text_before_cursor = find_prev_keyword(text_before_cursor) if prev_keyword: return suggest_based_on_last_token( prev_keyword, text_before_cursor, full_text, identifier) else: return [] else: return [{'type': 'keyword'}] def identifies(id, schema, table, alias): return id == alias or id == table or ( schema and (id == schema + '.' + table)) mycli-1.5.2/mycli/packages/connection.py0000644000076600000240000000276212610175477020661 0ustar amjithstaff00000000000000"""Connection and cursor wrappers around PyMySQL. This module effectively backports PyMySQL functionality and error handling so that mycli will support Debian's python-pymysql version (0.6.2). """ import pymysql Cursor = pymysql.cursors.Cursor connect = pymysql.connect if pymysql.VERSION[1] == 6 and pymysql.VERSION[2] < 5: class Cursor(pymysql.cursors.Cursor): """Makes Cursor a context manager in PyMySQL < 0.6.5.""" def __enter__(self): return self def __exit__(self, *exc_info): del exc_info self.close() if pymysql.VERSION[1] == 6 and pymysql.VERSION[2] < 3: class Connection(pymysql.connections.Connection): """Adds error handling to Connection in PyMySQL < 0.6.3.""" def __del__(self): if self.socket: try: self.socket.close() except: pass self.socket = None self._rfile = None def connect(*args, **kwargs): """Makes connect() use our custom Connection class. PyMySQL < 0.6.3 uses the *passwd* argument instead of *password*. This function renames that keyword or assigns it the default value of '', which is the same default value PyMySQL gives it. See pymysql.connections.Connection.__init__() for more information about calling this function. """ kwargs['passwd'] = kwargs.pop('password', '') return Connection(*args, **kwargs) mycli-1.5.2/mycli/packages/counter.py0000644000076600000240000001425312610175477020177 0ustar amjithstaff00000000000000from __future__ import print_function from operator import itemgetter from heapq import nlargest from itertools import repeat, ifilter class Counter(dict): '''Dict subclass for counting hashable objects. Sometimes called a bag or multiset. Elements are stored as dictionary keys and their counts are stored as dictionary values. >>> Counter('zyzygy') Counter({'y': 3, 'z': 2, 'g': 1}) ''' def __init__(self, iterable=None, **kwds): '''Create a new, empty Counter object. And if given, count elements from an input iterable. Or, initialize the count from another mapping of elements to their counts. >>> c = Counter() # a new, empty counter >>> c = Counter('gallahad') # a new counter from an iterable >>> c = Counter({'a': 4, 'b': 2}) # a new counter from a mapping >>> c = Counter(a=4, b=2) # a new counter from keyword args ''' self.update(iterable, **kwds) def __missing__(self, key): return 0 def most_common(self, n=None): '''List the n most common elements and their counts from the most common to the least. If n is None, then list all element counts. 
>>> Counter('abracadabra').most_common(3) [('a', 5), ('r', 2), ('b', 2)] ''' if n is None: return sorted(self.iteritems(), key=itemgetter(1), reverse=True) return nlargest(n, self.iteritems(), key=itemgetter(1)) def elements(self): '''Iterator over elements repeating each as many times as its count. >>> c = Counter('ABCABC') >>> sorted(c.elements()) ['A', 'A', 'B', 'B', 'C', 'C'] If an element's count has been set to zero or is a negative number, elements() will ignore it. ''' for elem, count in self.iteritems(): for _ in repeat(None, count): yield elem # Override dict methods where the meaning changes for Counter objects. @classmethod def fromkeys(cls, iterable, v=None): raise NotImplementedError( 'Counter.fromkeys() is undefined. Use Counter(iterable) instead.') def update(self, iterable=None, **kwds): '''Like dict.update() but add counts instead of replacing them. Source can be an iterable, a dictionary, or another Counter instance. >>> c = Counter('which') >>> c.update('witch') # add elements from another iterable >>> d = Counter('watch') >>> c.update(d) # add elements from another counter >>> c['h'] # four 'h' in which, witch, and watch 4 ''' if iterable is not None: if hasattr(iterable, 'iteritems'): if self: self_get = self.get for elem, count in iterable.iteritems(): self[elem] = self_get(elem, 0) + count else: dict.update(self, iterable) # fast path when counter is empty else: self_get = self.get for elem in iterable: self[elem] = self_get(elem, 0) + 1 if kwds: self.update(kwds) def copy(self): 'Like dict.copy() but returns a Counter instance instead of a dict.' return Counter(self) def __delitem__(self, elem): 'Like dict.__delitem__() but does not raise KeyError for missing values.' if elem in self: dict.__delitem__(self, elem) def __repr__(self): if not self: return '%s()' % self.__class__.__name__ items = ', '.join(map('%r: %r'.__mod__, self.most_common())) return '%s({%s})' % (self.__class__.__name__, items) # Multiset-style mathematical operations discussed in: # Knuth TAOCP Volume II section 4.6.3 exercise 19 # and at http://en.wikipedia.org/wiki/Multiset # # Outputs guaranteed to only include positive counts. # # To strip negative and zero counts, add-in an empty counter: # c += Counter() def __add__(self, other): '''Add counts from two counters. >>> Counter('abbb') + Counter('bcc') Counter({'b': 4, 'c': 2, 'a': 1}) ''' if not isinstance(other, Counter): return NotImplemented result = Counter() for elem in set(self) | set(other): newcount = self[elem] + other[elem] if newcount > 0: result[elem] = newcount return result def __sub__(self, other): ''' Subtract count, but keep only results with positive counts. >>> Counter('abbbc') - Counter('bccd') Counter({'b': 2, 'a': 1}) ''' if not isinstance(other, Counter): return NotImplemented result = Counter() for elem in set(self) | set(other): newcount = self[elem] - other[elem] if newcount > 0: result[elem] = newcount return result def __or__(self, other): '''Union is the maximum of value in either of the input counters. >>> Counter('abbb') | Counter('bcc') Counter({'b': 3, 'c': 2, 'a': 1}) ''' if not isinstance(other, Counter): return NotImplemented _max = max result = Counter() for elem in set(self) | set(other): newcount = _max(self[elem], other[elem]) if newcount > 0: result[elem] = newcount return result def __and__(self, other): ''' Intersection is the minimum of corresponding counts. 
>>> Counter('abbb') & Counter('bcc') Counter({'b': 1}) ''' if not isinstance(other, Counter): return NotImplemented _min = min result = Counter() if len(self) < len(other): self, other = other, self for elem in ifilter(self.__contains__, other): newcount = _min(self[elem], other[elem]) if newcount > 0: result[elem] = newcount return result if __name__ == '__main__': import doctest print(doctest.testmod()) mycli-1.5.2/mycli/packages/expanded.py0000644000076600000240000000267212611411643020277 0ustar amjithstaff00000000000000from .tabulate import _text_type import binascii def pad(field, total, char=u" "): return field + (char * (total - len(field))) def get_separator(num, header_len, data_len): sep = u"***************************[ %d. row ]***************************\n" % (num + 1) return sep def format_field(value): # Returns the field as a text type, otherwise will hexify the string try: if isinstance(value, bytes): return _text_type(value, "ascii") else: return _text_type(value) except UnicodeDecodeError: return _text_type('0x' + binascii.hexlify(value).decode('ascii')) def expanded_table(rows, headers): header_len = max([len(x) for x in headers]) max_row_len = 0 results = [] padded_headers = [pad(x, header_len) + u" |" for x in headers] header_len += 2 for row in rows: row = [format_field(x) for x in row] row_len = max([len(x) for x in row]) row_result = [] if row_len > max_row_len: max_row_len = row_len for header, value in zip(padded_headers, row): if value is None: value = '' row_result.append(u"%s %s" % (header, value)) results.append('\n'.join(row_result)) output = [] for i, result in enumerate(results): output.append(get_separator(i, header_len, max_row_len)) output.append(result) output.append('\n') return ''.join(output) mycli-1.5.2/mycli/packages/literals/0000755000076600000240000000000012621400311017734 5ustar amjithstaff00000000000000mycli-1.5.2/mycli/packages/literals/__init__.py0000644000076600000240000000000212611065035022046 0ustar amjithstaff00000000000000 mycli-1.5.2/mycli/packages/literals/main.py0000644000076600000240000000057612611065167021261 0ustar amjithstaff00000000000000import os import json root = os.path.dirname(__file__) literal_file = os.path.join(root, 'literals.json') with open(literal_file) as f: literals = json.load(f) def get_literals(literal_type): """Where `literal_type` is one of 'keywords', 'functions', 'show', 'change_items' returns a tuple of literal values of that type""" return tuple(literals[literal_type]) mycli-1.5.2/mycli/packages/ordereddict.py0000644000076600000240000001017512615144005020773 0ustar amjithstaff00000000000000# Copyright (c) 2009 Raymond Hettinger # # Permission is hereby granted, free of charge, to any person # obtaining a copy of this software and associated documentation files # (the "Software"), to deal in the Software without restriction, # including without limitation the rights to use, copy, modify, merge, # publish, distribute, sublicense, and/or sell copies of the Software, # and to permit persons to whom the Software is furnished to do so, # subject to the following conditions: # # The above copyright notice and this permission notice shall be # included in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES # OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND # NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT # HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR # OTHER DEALINGS IN THE SOFTWARE. from UserDict import DictMixin class OrderedDict(dict, DictMixin): def __init__(self, *args, **kwds): if len(args) > 1: raise TypeError('expected at most 1 arguments, got %d' % len(args)) try: self.__end except AttributeError: self.clear() self.update(*args, **kwds) def clear(self): self.__end = end = [] end += [None, end, end] # sentinel node for doubly linked list self.__map = {} # key --> [key, prev, next] dict.clear(self) def __setitem__(self, key, value): if key not in self: end = self.__end curr = end[1] curr[2] = end[1] = self.__map[key] = [key, curr, end] dict.__setitem__(self, key, value) def __delitem__(self, key): dict.__delitem__(self, key) key, prev, next = self.__map.pop(key) prev[2] = next next[1] = prev def __iter__(self): end = self.__end curr = end[2] while curr is not end: yield curr[0] curr = curr[2] def __reversed__(self): end = self.__end curr = end[1] while curr is not end: yield curr[0] curr = curr[1] def popitem(self, last=True): if not self: raise KeyError('dictionary is empty') if last: key = reversed(self).next() else: key = iter(self).next() value = self.pop(key) return key, value def __reduce__(self): items = [[k, self[k]] for k in self] tmp = self.__map, self.__end del self.__map, self.__end inst_dict = vars(self).copy() self.__map, self.__end = tmp if inst_dict: return (self.__class__, (items,), inst_dict) return self.__class__, (items,) def keys(self): return list(self) setdefault = DictMixin.setdefault update = DictMixin.update pop = DictMixin.pop values = DictMixin.values items = DictMixin.items iterkeys = DictMixin.iterkeys itervalues = DictMixin.itervalues iteritems = DictMixin.iteritems def __repr__(self): if not self: return '%s()' % (self.__class__.__name__,) return '%s(%r)' % (self.__class__.__name__, self.items()) def copy(self): return self.__class__(self) @classmethod def fromkeys(cls, iterable, value=None): d = cls() for key in iterable: d[key] = value return d def __eq__(self, other): if isinstance(other, OrderedDict): if len(self) != len(other): return False for p, q in zip(self.items(), other.items()): if p != q: return False return True return dict.__eq__(self, other) def __ne__(self, other): return not self == other mycli-1.5.2/mycli/packages/parseutils.py0000644000076600000240000001601012610175477020704 0ustar amjithstaff00000000000000from __future__ import print_function import re import sqlparse from sqlparse.sql import IdentifierList, Identifier, Function from sqlparse.tokens import Keyword, DML, Punctuation cleanup_regex = { # This matches only alphanumerics and underscores. 'alphanum_underscore': re.compile(r'(\w+)$'), # This matches everything except spaces, parens, colon, and comma 'many_punctuations': re.compile(r'([^():,\s]+)$'), # This matches everything except spaces, parens, colon, comma, and period 'most_punctuations': re.compile(r'([^\.():,\s]+)$'), # This matches everything except a space. 'all_punctuations': re.compile('([^\s]+)$'), } def last_word(text, include='alphanum_underscore'): """ Find the last word in a sentence. 
>>> last_word('abc') 'abc' >>> last_word(' abc') 'abc' >>> last_word('') '' >>> last_word(' ') '' >>> last_word('abc ') '' >>> last_word('abc def') 'def' >>> last_word('abc def ') '' >>> last_word('abc def;') '' >>> last_word('bac $def') 'def' >>> last_word('bac $def', include='most_punctuations') '$def' >>> last_word('bac \def', include='most_punctuations') '\\\\def' >>> last_word('bac \def;', include='most_punctuations') '\\\\def;' >>> last_word('bac::def', include='most_punctuations') 'def' """ if not text: # Empty string return '' if text[-1].isspace(): return '' else: regex = cleanup_regex[include] matches = regex.search(text) if matches: return matches.group(0) else: return '' # This code is borrowed from sqlparse example script. # def is_subselect(parsed): if not parsed.is_group(): return False for item in parsed.tokens: if item.ttype is DML and item.value.upper() in ('SELECT', 'INSERT', 'UPDATE', 'CREATE', 'DELETE'): return True return False def extract_from_part(parsed, stop_at_punctuation=True): tbl_prefix_seen = False for item in parsed.tokens: if tbl_prefix_seen: if is_subselect(item): for x in extract_from_part(item, stop_at_punctuation): yield x elif stop_at_punctuation and item.ttype is Punctuation: raise StopIteration # An incomplete nested select won't be recognized correctly as a # sub-select. eg: 'SELECT * FROM (SELECT id FROM user'. This causes # the second FROM to trigger this elif condition resulting in a # StopIteration. So we need to ignore the keyword if the keyword # FROM. # Also 'SELECT * FROM abc JOIN def' will trigger this elif # condition. So we need to ignore the keyword JOIN and its variants # INNER JOIN, FULL OUTER JOIN, etc. elif item.ttype is Keyword and ( not item.value.upper() == 'FROM') and ( not item.value.upper().endswith('JOIN')): raise StopIteration else: yield item elif ((item.ttype is Keyword or item.ttype is Keyword.DML) and item.value.upper() in ('COPY', 'FROM', 'INTO', 'UPDATE', 'TABLE', 'JOIN',)): tbl_prefix_seen = True # 'SELECT a, FROM abc' will detect FROM as part of the column list. # So this check here is necessary. elif isinstance(item, IdentifierList): for identifier in item.get_identifiers(): if (identifier.ttype is Keyword and identifier.value.upper() == 'FROM'): tbl_prefix_seen = True break def extract_table_identifiers(token_stream): """yields tuples of (schema_name, table_name, table_alias)""" for item in token_stream: if isinstance(item, IdentifierList): for identifier in item.get_identifiers(): # Sometimes Keywords (such as FROM ) are classified as # identifiers which don't have the get_real_name() method. try: schema_name = identifier.get_parent_name() real_name = identifier.get_real_name() except AttributeError: continue if real_name: yield (schema_name, real_name, identifier.get_alias()) elif isinstance(item, Identifier): real_name = item.get_real_name() schema_name = item.get_parent_name() if real_name: yield (schema_name, real_name, item.get_alias()) else: name = item.get_name() yield (None, name, item.get_alias() or name) elif isinstance(item, Function): yield (None, item.get_name(), item.get_name()) # extract_tables is inspired from examples in the sqlparse lib. def extract_tables(sql): """Extract the table names from an SQL statment. Returns a list of (schema, table, alias) tuples """ parsed = sqlparse.parse(sql) if not parsed: return [] # INSERT statements must stop looking for tables at the sign of first # Punctuation. 
eg: INSERT INTO abc (col1, col2) VALUES (1, 2) # abc is the table name, but if we don't stop at the first lparen, then # we'll identify abc, col1 and col2 as table names. insert_stmt = parsed[0].token_first().value.lower() == 'insert' stream = extract_from_part(parsed[0], stop_at_punctuation=insert_stmt) return list(extract_table_identifiers(stream)) def find_prev_keyword(sql): """ Find the last sql keyword in an SQL statement Returns the value of the last keyword, and the text of the query with everything after the last keyword stripped """ if not sql.strip(): return None, '' parsed = sqlparse.parse(sql)[0] flattened = list(parsed.flatten()) logical_operators = ('AND', 'OR', 'NOT', 'BETWEEN') for t in reversed(flattened): if t.value == '(' or (t.is_keyword and ( t.value.upper() not in logical_operators)): # Find the location of token t in the original parsed statement # We can't use parsed.token_index(t) because t may be a child token # inside a TokenList, in which case token_index thows an error # Minimal example: # p = sqlparse.parse('select * from foo where bar') # t = list(p.flatten())[-3] # The "Where" token # p.token_index(t) # Throws ValueError: not in list idx = flattened.index(t) # Combine the string values of all tokens in the original list # up to and including the target keyword token t, to produce a # query string with everything after the keyword token removed text = ''.join(tok.value for tok in flattened[:idx+1]) return t, text return None, '' if __name__ == '__main__': sql = 'select * from (select t. from tabl t' print (extract_tables(sql)) mycli-1.5.2/mycli/packages/special/0000755000076600000240000000000012621400311017535 5ustar amjithstaff00000000000000mycli-1.5.2/mycli/packages/special/__init__.py0000644000076600000240000000036512541202625021663 0ustar amjithstaff00000000000000__all__ = [] def export(defn): """Decorator to explicitly mark functions that are exposed in a lib.""" globals()[defn.__name__] = defn __all__.append(defn.__name__) return defn from . import dbcommands from . import iocommands mycli-1.5.2/mycli/packages/special/dbcommands.py0000644000076600000240000000167112610175477022247 0ustar amjithstaff00000000000000import logging from .main import special_command, RAW_QUERY, PARSED_QUERY log = logging.getLogger(__name__) @special_command('\\dt', '\\dt [table]', 'List or describe tables.', arg_type=PARSED_QUERY, case_sensitive=True) def list_tables(cur, arg=None, arg_type=PARSED_QUERY): if arg: query = 'SHOW FIELDS FROM {0}'.format(arg) else: query = 'SHOW TABLES' log.debug(query) cur.execute(query) if cur.description: headers = [x[0] for x in cur.description] return [(None, cur, headers, '')] else: return [(None, None, None, '')] @special_command('\\l', '\\l', 'List databases.', arg_type=RAW_QUERY, case_sensitive=True) def list_databases(cur, **_): query = 'SHOW DATABASES' log.debug(query) cur.execute(query) if cur.description: headers = [x[0] for x in cur.description] return [(None, cur, headers, '')] else: return [(None, None, None, '')] mycli-1.5.2/mycli/packages/special/favoritequeries.py0000644000076600000240000000406612542276136023354 0ustar amjithstaff00000000000000# -*- coding: utf-8 -*- from __future__ import unicode_literals class FavoriteQueries(object): section_name = 'favorite_queries' usage = ''' Favorite Queries are a way to save frequently used queries with a short name. Examples: # Save a new favorite query. > \\fs simple select * from abc where a is not Null; # List all favorite queries. 
> \\f ╒════════╤═══════════════════════════════════════╕ │ Name │ Query │ ╞════════╪═══════════════════════════════════════╡ │ simple │ SELECT * FROM abc where a is not NULL │ ╘════════╧═══════════════════════════════════════╛ # Run a favorite query. > \\f simple ╒════════╤════════╕ │ a │ b │ ╞════════╪════════╡ │ 日本語 │ 日本語 │ ╘════════╧════════╛ # Delete a favorite query. > \\fd simple simple: Deleted ''' def __init__(self, config): self.config = config def list(self): return self.config.get(self.section_name, []) def get(self, name): return self.config.get(self.section_name, {}).get(name, None) def save(self, name, query): if self.section_name not in self.config: self.config[self.section_name] = {} self.config[self.section_name][name] = query self.config.write() def delete(self, name): try: del self.config[self.section_name][name] except KeyError: return '%s: Not Found.' % name self.config.write() return '%s: Deleted' % name from ...config import load_config favoritequeries = FavoriteQueries(load_config('~/.myclirc')) mycli-1.5.2/mycli/packages/special/iocommands.py0000644000076600000240000001475412611507207022266 0ustar amjithstaff00000000000000import os import re import logging import subprocess from io import open import click import sqlparse from . import export from .main import special_command, NO_QUERY, PARSED_QUERY from .favoritequeries import favoritequeries from .utils import handle_cd_command TIMING_ENABLED = False use_expanded_output = False ORIGINAL_PAGER = os.environ.get('PAGER', '') @export def set_timing_enabled(val): global TIMING_ENABLED TIMING_ENABLED = val @export def get_original_pager(): return ORIGINAL_PAGER @export @special_command('pager', '\\P [command]', 'Set PAGER. Print the query results via PAGER', arg_type=PARSED_QUERY, aliases=('\\P', ), case_sensitive=True) def set_pager(arg, **_): if not arg: if not ORIGINAL_PAGER: os.environ.pop('PAGER', None) msg = 'Reset pager.' else: os.environ['PAGER'] = ORIGINAL_PAGER msg = 'Reset pager back to default. Default: %s' % ORIGINAL_PAGER else: os.environ['PAGER'] = arg msg = 'PAGER set to %s.' % arg return [(None, None, None, msg)] @special_command('\\timing', '\\t', 'Toggle timing of commands.', arg_type=NO_QUERY, aliases=('\\t', ), case_sensitive=True) def toggle_timing(): global TIMING_ENABLED TIMING_ENABLED = not TIMING_ENABLED message = "Timing is " message += "on." if TIMING_ENABLED else "off." return [(None, None, None, message)] @export def is_timing_enabled(): return TIMING_ENABLED @export def set_expanded_output(val): global use_expanded_output use_expanded_output = val @export def is_expanded_output(): return use_expanded_output def quit(*args): raise NotImplementedError def stub(*args): raise NotImplementedError _logger = logging.getLogger(__name__) @export def editor_command(command): """ Is this an external editor command? :param command: string """ # It is possible to have `\e filename` or `SELECT * FROM \e`. So we check # for both conditions. return command.strip().endswith('\\e') or command.strip().startswith('\\e') @export def get_filename(sql): if sql.strip().startswith('\\e'): command, _, filename = sql.partition(' ') return filename.strip() or None @export def open_external_editor(filename=None, sql=''): """ Open external editor, wait for the user to type in his query, return the query. :return: list with one tuple, query as first element. """ sql = sql.strip() # The reason we can't simply do .strip('\e') is that it strips characters, # not a substring. So it'll strip "e" in the end of the sql also! 
# Ex: "select * from style\e" -> "select * from styl". pattern = re.compile('(^\\\e|\\\e$)') while pattern.search(sql): sql = pattern.sub('', sql) message = None filename = filename.strip().split(' ', 1)[0] if filename else None MARKER = '# Type your query above this line.\n' # Populate the editor buffer with the partial sql (if available) and a # placeholder comment. query = click.edit(sql + '\n\n' + MARKER, filename=filename, extension='.sql') if filename: try: with open(filename, encoding='utf-8') as f: query = f.read() except IOError: message = 'Error reading file: %s.' % filename if query is not None: query = query.split(MARKER, 1)[0].rstrip('\n') else: # Don't return None for the caller to deal with. # Empty string is ok. query = sql return (query, message) @special_command('\\f', '\\f [name]', 'List or execute favorite queries.', arg_type=PARSED_QUERY, case_sensitive=True) def execute_favorite_query(cur, arg): """Returns (title, rows, headers, status)""" if arg == '': for result in list_favorite_queries(): yield result query = favoritequeries.get(arg) if query is None: message = "No favorite query: %s" % (arg) yield (None, None, None, message) else: for sql in sqlparse.split(query): sql = sql.rstrip(';') title = '> %s' % (sql) cur.execute(sql) if cur.description: headers = [x[0] for x in cur.description] yield (title, cur, headers, None) else: yield (title, None, None, None) def list_favorite_queries(): """List of all favorite queries. Returns (title, rows, headers, status)""" headers = ["Name", "Query"] rows = [(r, favoritequeries.get(r)) for r in favoritequeries.list()] if not rows: status = '\nNo favorite queries found.' + favoritequeries.usage else: status = '' return [('', rows, headers, status)] @special_command('\\fs', '\\fs name query', 'Save a favorite query.') def save_favorite_query(arg, **_): """Save a new favorite query. Returns (title, rows, headers, status)""" usage = 'Syntax: \\fs name query.\n\n' + favoritequeries.usage if not arg: return [(None, None, None, usage)] name, _, query = arg.partition(' ') # If either name or query is missing then print the usage and complain. if (not name) or (not query): return [(None, None, None, usage + 'Err: Both name and query are required.')] favoritequeries.save(name, query) return [(None, None, None, "Saved.")] @special_command('\\fd', '\\fd [name]', 'Delete a favorite query.') def delete_favorite_query(arg, **_): """Delete an existing favorite query. """ usage = 'Syntax: \\fd name.\n\n' + favoritequeries.usage if not arg: return [(None, None, None, usage)] status = favoritequeries.delete(arg) return [(None, None, None, status)] @special_command('system', 'system [command]', 'Execute a system commmand.') def execute_system_command(arg, **_): """ Execute a system command. """ usage = "Syntax: system [command].\n" if not arg: return [(None, None, None, usage)] try: command = arg.strip() if command.startswith('cd'): ok, error_message = handle_cd_command(arg) if not ok: return [(None, None, None, error_message)] return [(None, None, None, '')] args = arg.split(' ') process = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE) output, error = process.communicate() response = output if not error else error return [(None, None, None, response)] except OSError as e: return [(None, None, None, 'OSError: %s' % e.strerror)] mycli-1.5.2/mycli/packages/special/main.py0000644000076600000240000000741412554076162021063 0ustar amjithstaff00000000000000import logging from collections import namedtuple from . 
import export log = logging.getLogger(__name__) NO_QUERY = 0 PARSED_QUERY = 1 RAW_QUERY = 2 SpecialCommand = namedtuple('SpecialCommand', ['handler', 'command', 'shortcut', 'description', 'arg_type', 'hidden', 'case_sensitive']) COMMANDS = {} @export class CommandNotFound(Exception): pass @export def parse_special_command(sql): command, _, arg = sql.partition(' ') return (command, arg.strip()) @export def special_command(command, shortcut, description, arg_type=PARSED_QUERY, hidden=False, case_sensitive=False, aliases=()): def wrapper(wrapped): register_special_command(wrapped, command, shortcut, description, arg_type, hidden, case_sensitive, aliases) return wrapped return wrapper @export def register_special_command(handler, command, shortcut, description, arg_type=PARSED_QUERY, hidden=False, case_sensitive=False, aliases=()): cmd = command.lower() if not case_sensitive else command COMMANDS[cmd] = SpecialCommand(handler, command, shortcut, description, arg_type, hidden, case_sensitive) for alias in aliases: cmd = alias.lower() if not case_sensitive else alias COMMANDS[cmd] = SpecialCommand(handler, command, shortcut, description, arg_type, case_sensitive=case_sensitive, hidden=True) @export def execute(cur, sql): """Execute a special command and return the results. If the special command is not supported a KeyError will be raised. """ command, arg = parse_special_command(sql) if (command not in COMMANDS) and (command.lower() not in COMMANDS): raise CommandNotFound try: special_cmd = COMMANDS[command] except KeyError: special_cmd = COMMANDS[command.lower()] if special_cmd.case_sensitive: raise CommandNotFound('Command not found: %s' % command) # "help is a special case. We want built-in help, not # mycli help here. if command == 'help' and arg: return show_keyword_help(cur=cur, arg=arg) if special_cmd.arg_type == NO_QUERY: return special_cmd.handler() elif special_cmd.arg_type == PARSED_QUERY: return special_cmd.handler(cur=cur, arg=arg) elif special_cmd.arg_type == RAW_QUERY: return special_cmd.handler(cur=cur, query=sql) @special_command('help', '\\?', 'Show this help.', arg_type=NO_QUERY, aliases=('\\?', '?')) def show_help(): # All the parameters are ignored. headers = ['Command', 'Shortcut', 'Description'] result = [] for _, value in sorted(COMMANDS.items()): if not value.hidden: result.append((value.command, value.shortcut, value.description)) return [(None, result, headers, None)] def show_keyword_help(cur, arg): """ Call the built-in "show ", to display help for an SQL keyword. :param cur: cursor :param arg: string :return: list """ keyword = arg.strip('"').strip("'") query = "help '{0}'".format(keyword) log.debug(query) cur.execute(query) if cur.description and cur.rowcount > 0: headers = [x[0] for x in cur.description] return [(None, cur, headers, '')] else: return [(None, None, None, 'No help found for {0}.'.format(keyword))] @special_command('exit', '\\q', 'Exit.', arg_type=NO_QUERY, aliases=('\\q', )) @special_command('quit', '\\q', 'Quit.', arg_type=NO_QUERY) @special_command('\\e', '\\e', 'Edit command with editor. 
(uses $EDITOR)', arg_type=NO_QUERY, case_sensitive=True) @special_command('\\G', '\\G', 'Display results vertically.', arg_type=NO_QUERY, case_sensitive=True) def stub(): raise NotImplementedError mycli-1.5.2/mycli/packages/special/utils.py0000644000076600000240000000072212611507207021263 0ustar amjithstaff00000000000000import os import subprocess def handle_cd_command(arg): """Handles a `cd` shell command by calling python's os.chdir.""" CD_CMD = 'cd' tokens = arg.split(CD_CMD + ' ') directory = tokens[-1] if len(tokens) > 1 else None if not directory: return False, "No folder name was provided." try: os.chdir(directory) subprocess.call(['pwd']) return True, None except OSError as e: return False, e.strerror mycli-1.5.2/mycli/packages/tabulate.py0000644000076600000240000011233212615144005020302 0ustar amjithstaff00000000000000# -*- coding: utf-8 -*- """Pretty-print tabular data.""" from __future__ import print_function from __future__ import unicode_literals from collections import namedtuple from decimal import Decimal from platform import python_version_tuple from wcwidth import wcswidth import re import binascii if python_version_tuple()[0] < "3": from itertools import izip_longest from functools import partial _none_type = type(None) _int_type = int _long_type = long _float_type = float _text_type = unicode _binary_type = str def _is_file(f): return isinstance(f, file) else: from itertools import zip_longest as izip_longest from functools import reduce, partial _none_type = type(None) _int_type = int _long_type = int _float_type = float _text_type = str _binary_type = bytes import io def _is_file(f): return isinstance(f, io.IOBase) __all__ = ["tabulate", "tabulate_formats", "simple_separated_format"] __version__ = "0.7.4" MIN_PADDING = 2 Line = namedtuple("Line", ["begin", "hline", "sep", "end"]) DataRow = namedtuple("DataRow", ["begin", "sep", "end"]) # A table structure is suppposed to be: # # --- lineabove --------- # headerrow # --- linebelowheader --- # datarow # --- linebewteenrows --- # ... (more datarows) ... # --- linebewteenrows --- # last datarow # --- linebelow --------- # # TableFormat's line* elements can be # # - either None, if the element is not used, # - or a Line tuple, # - or a function: [col_widths], [col_alignments] -> string. # # TableFormat's *row elements can be # # - either None, if the element is not used, # - or a DataRow tuple, # - or a function: [cell_values], [col_widths], [col_alignments] -> string. # # padding (an integer) is the amount of white space around data values. # # with_header_hide: # # - either None, to display all table elements unconditionally, # - or a list of elements not to be displayed if the table has column headers. 
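# For illustration (the name "dots" below is only an example), a TableFormat
# instance can also be passed directly to tabulate() as `tablefmt`, bypassing
# the _table_formats lookup:
#
#     dots = TableFormat(lineabove=None,
#                        linebelowheader=Line("", ".", "  ", ""),
#                        linebetweenrows=None, linebelow=None,
#                        headerrow=DataRow("", "  ", ""),
#                        datarow=DataRow("", "  ", ""),
#                        padding=0, with_header_hide=None)
#     print(tabulate([["spam", 42]], ["item", "qty"], tablefmt=dots))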
# TableFormat = namedtuple("TableFormat", ["lineabove", "linebelowheader", "linebetweenrows", "linebelow", "headerrow", "datarow", "padding", "with_header_hide"]) def _pipe_segment_with_colons(align, colwidth): """Return a segment of a horizontal line with optional colons which indicate column's alignment (as in `pipe` output format).""" w = colwidth if align in ["right", "decimal"]: return ('-' * (w - 1)) + ":" elif align == "center": return ":" + ('-' * (w - 2)) + ":" elif align == "left": return ":" + ('-' * (w - 1)) else: return '-' * w def _pipe_line_with_colons(colwidths, colaligns): """Return a horizontal line with optional colons to indicate column's alignment (as in `pipe` output format).""" segments = [_pipe_segment_with_colons(a, w) for a, w in zip(colaligns, colwidths)] return "|" + "|".join(segments) + "|" def _mediawiki_row_with_attrs(separator, cell_values, colwidths, colaligns): alignment = { "left": '', "right": 'align="right"| ', "center": 'align="center"| ', "decimal": 'align="right"| ' } # hard-coded padding _around_ align attribute and value together # rather than padding parameter which affects only the value values_with_attrs = [' ' + alignment.get(a, '') + c + ' ' for c, a in zip(cell_values, colaligns)] colsep = separator*2 return (separator + colsep.join(values_with_attrs)).rstrip() def _html_row_with_attrs(celltag, cell_values, colwidths, colaligns): alignment = { "left": '', "right": ' style="text-align: right;"', "center": ' style="text-align: center;"', "decimal": ' style="text-align: right;"' } values_with_attrs = ["<{0}{1}>{2}".format(celltag, alignment.get(a, ''), c) for c, a in zip(cell_values, colaligns)] return "" + "".join(values_with_attrs).rstrip() + "" def _latex_line_begin_tabular(colwidths, colaligns, booktabs=False): alignment = { "left": "l", "right": "r", "center": "c", "decimal": "r" } tabular_columns_fmt = "".join([alignment.get(a, "l") for a in colaligns]) return "\n".join(["\\begin{tabular}{" + tabular_columns_fmt + "}", "\\toprule" if booktabs else "\hline"]) LATEX_ESCAPE_RULES = {r"&": r"\&", r"%": r"\%", r"$": r"\$", r"#": r"\#", r"_": r"\_", r"^": r"\^{}", r"{": r"\{", r"}": r"\}", r"~": r"\textasciitilde{}", "\\": r"\textbackslash{}", r"<": r"\ensuremath{<}", r">": r"\ensuremath{>}"} def _latex_row(cell_values, colwidths, colaligns): def escape_char(c): return LATEX_ESCAPE_RULES.get(c, c) escaped_values = ["".join(map(escape_char, cell)) for cell in cell_values] rowfmt = DataRow("", "&", "\\\\") return _build_simple_row(escaped_values, rowfmt) _table_formats = {"simple": TableFormat(lineabove=Line("", "-", " ", ""), linebelowheader=Line("", "-", " ", ""), linebetweenrows=None, linebelow=Line("", "-", " ", ""), headerrow=DataRow("", " ", ""), datarow=DataRow("", " ", ""), padding=0, with_header_hide=["lineabove", "linebelow"]), "plain": TableFormat(lineabove=None, linebelowheader=None, linebetweenrows=None, linebelow=None, headerrow=DataRow("", " ", ""), datarow=DataRow("", " ", ""), padding=0, with_header_hide=None), "grid": TableFormat(lineabove=Line("+", "-", "+", "+"), linebelowheader=Line("+", "=", "+", "+"), linebetweenrows=Line("+", "-", "+", "+"), linebelow=Line("+", "-", "+", "+"), headerrow=DataRow("|", "|", "|"), datarow=DataRow("|", "|", "|"), padding=1, with_header_hide=None), "fancy_grid": TableFormat(lineabove=Line("╒", "═", "╤", "╕"), linebelowheader=Line("╞", "═", "╪", "╡"), linebetweenrows=Line("├", "─", "┼", "┤"), linebelow=Line("╘", "═", "╧", "╛"), headerrow=DataRow("│", "│", "│"), datarow=DataRow("│", "│", "│"), 
padding=1, with_header_hide=None), "pipe": TableFormat(lineabove=_pipe_line_with_colons, linebelowheader=_pipe_line_with_colons, linebetweenrows=None, linebelow=None, headerrow=DataRow("|", "|", "|"), datarow=DataRow("|", "|", "|"), padding=1, with_header_hide=["lineabove"]), "orgtbl": TableFormat(lineabove=None, linebelowheader=Line("|", "-", "+", "|"), linebetweenrows=None, linebelow=None, headerrow=DataRow("|", "|", "|"), datarow=DataRow("|", "|", "|"), padding=1, with_header_hide=None), "psql": TableFormat(lineabove=Line("+", "-", "+", "+"), linebelowheader=Line("|", "-", "+", "|"), linebetweenrows=None, linebelow=Line("+", "-", "+", "+"), headerrow=DataRow("|", "|", "|"), datarow=DataRow("|", "|", "|"), padding=1, with_header_hide=None), "rst": TableFormat(lineabove=Line("", "=", " ", ""), linebelowheader=Line("", "=", " ", ""), linebetweenrows=None, linebelow=Line("", "=", " ", ""), headerrow=DataRow("", " ", ""), datarow=DataRow("", " ", ""), padding=0, with_header_hide=None), "mediawiki": TableFormat(lineabove=Line("{| class=\"wikitable\" style=\"text-align: left;\"", "", "", "\n|+ \n|-"), linebelowheader=Line("|-", "", "", ""), linebetweenrows=Line("|-", "", "", ""), linebelow=Line("|}", "", "", ""), headerrow=partial(_mediawiki_row_with_attrs, "!"), datarow=partial(_mediawiki_row_with_attrs, "|"), padding=0, with_header_hide=None), "html": TableFormat(lineabove=Line("", "", "", ""), linebelowheader=None, linebetweenrows=None, linebelow=Line("
", "", "", ""), headerrow=partial(_html_row_with_attrs, "th"), datarow=partial(_html_row_with_attrs, "td"), padding=0, with_header_hide=None), "latex": TableFormat(lineabove=_latex_line_begin_tabular, linebelowheader=Line("\\hline", "", "", ""), linebetweenrows=None, linebelow=Line("\\hline\n\\end{tabular}", "", "", ""), headerrow=_latex_row, datarow=_latex_row, padding=1, with_header_hide=None), "latex_booktabs": TableFormat(lineabove=partial(_latex_line_begin_tabular, booktabs=True), linebelowheader=Line("\\midrule", "", "", ""), linebetweenrows=None, linebelow=Line("\\bottomrule\n\\end{tabular}", "", "", ""), headerrow=_latex_row, datarow=_latex_row, padding=1, with_header_hide=None), "tsv": TableFormat(lineabove=None, linebelowheader=None, linebetweenrows=None, linebelow=None, headerrow=DataRow("", "\t", ""), datarow=DataRow("", "\t", ""), padding=0, with_header_hide=None)} tabulate_formats = list(sorted(_table_formats.keys())) _invisible_codes = re.compile(r"\x1b\[\d*m|\x1b\[\d*\;\d*\;\d*m") # ANSI color codes _invisible_codes_bytes = re.compile(b"\x1b\[\d*m|\x1b\[\d*\;\d*\;\d*m") # ANSI color codes def simple_separated_format(separator): """Construct a simple TableFormat with columns separated by a separator. >>> tsv = simple_separated_format("\\t") ; \ tabulate([["foo", 1], ["spam", 23]], tablefmt=tsv) == 'foo \\t 1\\nspam\\t23' True """ return TableFormat(None, None, None, None, headerrow=DataRow('', separator, ''), datarow=DataRow('', separator, ''), padding=0, with_header_hide=None) def _isconvertible(conv, string): try: n = conv(string) return True except (ValueError, TypeError): return False def _isnumber(string): """ >>> _isnumber("123.45") True >>> _isnumber("123") True >>> _isnumber("spam") False """ return _isconvertible(float, string) def _isint(string): """ >>> _isint("123") True >>> _isint("123.45") False """ return type(string) is _int_type or type(string) is _long_type or \ (isinstance(string, _binary_type) or isinstance(string, _text_type)) and \ _isconvertible(int, string) def _type(string, has_invisible=True): """The least generic type (type(None), int, float, str, unicode). >>> _type(None) is type(None) True >>> _type("foo") is type("") True >>> _type("1") is type(1) True >>> _type('\x1b[31m42\x1b[0m') is type(42) True >>> _type('\x1b[31m42\x1b[0m') is type(42) True """ if has_invisible and \ (isinstance(string, _text_type) or isinstance(string, _binary_type)): string = _strip_invisible(string) if string is None: return _none_type if isinstance(string, (bool, Decimal,)): return _text_type elif hasattr(string, "isoformat"): # datetime.datetime, date, and time return _text_type elif _isint(string): return int elif _isnumber(string): return float elif isinstance(string, _binary_type): return _binary_type else: return _text_type def _afterpoint(string): """Symbols after a decimal point, -1 if the string lacks the decimal point. >>> _afterpoint("123.45") 2 >>> _afterpoint("1001") -1 >>> _afterpoint("eggs") -1 >>> _afterpoint("123e45") 2 """ if _isnumber(string): if _isint(string): return -1 else: pos = string.rfind(".") pos = string.lower().rfind("e") if pos < 0 else pos if pos >= 0: return len(string) - pos - 1 else: return -1 # no point else: return -1 # not a number def _padleft(width, s, has_invisible=True): """Flush right. >>> _padleft(6, '\u044f\u0439\u0446\u0430') == ' \u044f\u0439\u0446\u0430' True """ lwidth = width - wcswidth(_strip_invisible(s) if has_invisible else s) return ' ' * lwidth + s def _padright(width, s, has_invisible=True): """Flush left. 
>>> _padright(6, '\u044f\u0439\u0446\u0430') == '\u044f\u0439\u0446\u0430 ' True """ rwidth = width - wcswidth(_strip_invisible(s) if has_invisible else s) return s + ' ' * rwidth def _padboth(width, s, has_invisible=True): """Center string. >>> _padboth(6, '\u044f\u0439\u0446\u0430') == ' \u044f\u0439\u0446\u0430 ' True """ xwidth = width - wcswidth(_strip_invisible(s) if has_invisible else s) lwidth = xwidth // 2 rwidth = 0 if xwidth <= 0 else lwidth + xwidth % 2 return ' ' * lwidth + s + ' ' * rwidth def _strip_invisible(s): "Remove invisible ANSI color codes." if isinstance(s, _text_type): return re.sub(_invisible_codes, "", s) else: # a bytestring return re.sub(_invisible_codes_bytes, "", s) def _visible_width(s): """Visible width of a printed string. ANSI color codes are removed. >>> _visible_width('\x1b[31mhello\x1b[0m'), _visible_width("world") (5, 5) """ if isinstance(s, _text_type) or isinstance(s, _binary_type): return wcswidth(_strip_invisible(s)) else: return wcswidth(_text_type(s)) def _align_column(strings, alignment, minwidth=0, has_invisible=True): """[string] -> [padded_string] >>> list(map(str,_align_column(["12.345", "-1234.5", "1.23", "1234.5", "1e+234", "1.0e234"], "decimal"))) [' 12.345 ', '-1234.5 ', ' 1.23 ', ' 1234.5 ', ' 1e+234 ', ' 1.0e234'] >>> list(map(str,_align_column(['123.4', '56.7890'], None))) ['123.4', '56.7890'] """ if alignment == "right": padfn = _padleft elif alignment == "center": padfn = _padboth elif alignment == "decimal": decimals = [_afterpoint(s) for s in strings] maxdecimals = max(decimals) strings = [s + (maxdecimals - decs) * " " for s, decs in zip(strings, decimals)] padfn = _padleft elif not alignment: return strings else: padfn = _padright if has_invisible: width_fn = _visible_width else: width_fn = wcswidth maxwidth = max(max(map(width_fn, strings)), minwidth) padded_strings = [padfn(maxwidth, s, has_invisible) for s in strings] return padded_strings def _more_generic(type1, type2): types = { _none_type: 0, int: 1, float: 2, _binary_type: 3, _text_type: 4 } invtypes = { 4: _text_type, 3: _binary_type, 2: float, 1: int, 0: _none_type } moregeneric = max(types.get(type1, 4), types.get(type2, 4)) return invtypes[moregeneric] def _column_type(strings, has_invisible=True): """The least generic type all column values are convertible to. >>> _column_type(["1", "2"]) is _int_type True >>> _column_type(["1", "2.3"]) is _float_type True >>> _column_type(["1", "2.3", "four"]) is _text_type True >>> _column_type(["four", '\u043f\u044f\u0442\u044c']) is _text_type True >>> _column_type([None, "brux"]) is _text_type True >>> _column_type([1, 2, None]) is _int_type True >>> import datetime as dt >>> _column_type([dt.datetime(1991,2,19), dt.time(17,35)]) is _text_type True """ types = [_type(s, has_invisible) for s in strings ] return reduce(_more_generic, types, int) def _format(val, valtype, floatfmt, missingval=""): """Format a value accoding to its type. 
Unicode is supported: >>> hrow = ['\u0431\u0443\u043a\u0432\u0430', '\u0446\u0438\u0444\u0440\u0430'] ; \ tbl = [['\u0430\u0437', 2], ['\u0431\u0443\u043a\u0438', 4]] ; \ good_result = '\\u0431\\u0443\\u043a\\u0432\\u0430 \\u0446\\u0438\\u0444\\u0440\\u0430\\n------- -------\\n\\u0430\\u0437 2\\n\\u0431\\u0443\\u043a\\u0438 4' ; \ tabulate(tbl, headers=hrow) == good_result True """ if val is None: return missingval if valtype in [int, _text_type]: return "{0}".format(val) elif valtype is _binary_type: try: return _text_type(val, "ascii") except UnicodeDecodeError: return _text_type('0x' + binascii.hexlify(val).decode('ascii')) except TypeError: return _text_type(val) elif valtype is float: return format(float(val), floatfmt) else: return "{0}".format(val) def _align_header(header, alignment, width): if alignment == "left": return _padright(width, header) elif alignment == "center": return _padboth(width, header) elif not alignment: return "{0}".format(header) else: return _padleft(width, header) def _normalize_tabular_data(tabular_data, headers): """Transform a supported data type to a list of lists, and a list of headers. Supported tabular data types: * list-of-lists or another iterable of iterables * list of named tuples (usually used with headers="keys") * list of dicts (usually used with headers="keys") * list of OrderedDicts (usually used with headers="keys") * 2D NumPy arrays * NumPy record arrays (usually used with headers="keys") * dict of iterables (usually used with headers="keys") * pandas.DataFrame (usually used with headers="keys") The first row can be used as headers if headers="firstrow", column indices can be used as headers if headers="keys". """ if hasattr(tabular_data, "keys") and hasattr(tabular_data, "values"): # dict-like and pandas.DataFrame? 
if hasattr(tabular_data.values, "__call__"): # likely a conventional dict keys = tabular_data.keys() rows = list(izip_longest(*tabular_data.values())) # columns have to be transposed elif hasattr(tabular_data, "index"): # values is a property, has .index => it's likely a pandas.DataFrame (pandas 0.11.0) keys = tabular_data.keys() vals = tabular_data.values # values matrix doesn't need to be transposed names = tabular_data.index rows = [[v]+list(row) for v,row in zip(names, vals)] else: raise ValueError("tabular data doesn't appear to be a dict or a DataFrame") if headers == "keys": headers = list(map(_text_type,keys)) # headers should be strings else: # it's a usual an iterable of iterables, or a NumPy array rows = list(tabular_data) if (headers == "keys" and hasattr(tabular_data, "dtype") and getattr(tabular_data.dtype, "names")): # numpy record array headers = tabular_data.dtype.names elif (headers == "keys" and len(rows) > 0 and isinstance(rows[0], tuple) and hasattr(rows[0], "_fields")): # namedtuple headers = list(map(_text_type, rows[0]._fields)) elif (len(rows) > 0 and isinstance(rows[0], dict)): # dict or OrderedDict uniq_keys = set() # implements hashed lookup keys = [] # storage for set if headers == "firstrow": firstdict = rows[0] if len(rows) > 0 else {} keys.extend(firstdict.keys()) uniq_keys.update(keys) rows = rows[1:] for row in rows: for k in row.keys(): #Save unique items in input order if k not in uniq_keys: keys.append(k) uniq_keys.add(k) if headers == 'keys': headers = keys elif isinstance(headers, dict): # a dict of headers for a list of dicts headers = [headers.get(k, k) for k in keys] headers = list(map(_text_type, headers)) elif headers == "firstrow": if len(rows) > 0: headers = [firstdict.get(k, k) for k in keys] headers = list(map(_text_type, headers)) else: headers = [] elif headers: raise ValueError('headers for a list of dicts is not a dict or a keyword') rows = [[row.get(k) for k in keys] for row in rows] elif headers == "keys" and len(rows) > 0: # keys are column indices headers = list(map(_text_type, range(len(rows[0])))) # take headers from the first row if necessary if headers == "firstrow" and len(rows) > 0: headers = list(map(_text_type, rows[0])) # headers should be strings rows = rows[1:] headers = list(map(_text_type,headers)) rows = list(map(list,rows)) # pad with empty headers for initial columns if necessary if headers and len(rows) > 0: nhs = len(headers) ncols = len(rows[0]) if nhs < ncols: headers = [""]*(ncols - nhs) + headers return rows, headers def table_formats(): return _table_formats.keys() def tabulate(tabular_data, headers=[], tablefmt="simple", floatfmt="g", numalign="decimal", stralign="left", missingval=""): """Format a fixed width table for pretty printing. >>> print(tabulate([[1, 2.34], [-56, "8.999"], ["2", "10001"]])) --- --------- 1 2.34 -56 8.999 2 10001 --- --------- The first required argument (`tabular_data`) can be a list-of-lists (or another iterable of iterables), a list of named tuples, a dictionary of iterables, an iterable of dictionaries, a two-dimensional NumPy array, NumPy record array, or a Pandas' dataframe. Table headers ------------- To print nice column headers, supply the second argument (`headers`): - `headers` can be an explicit list of column headers - if `headers="firstrow"`, then the first row of data is used - if `headers="keys"`, then dictionary keys or column indices are used Otherwise a headerless table is produced. 
If the number of headers is less than the number of columns, they are supposed to be names of the last columns. This is consistent with the plain-text format of R and Pandas' dataframes. >>> print(tabulate([["sex","age"],["Alice","F",24],["Bob","M",19]], ... headers="firstrow")) sex age ----- ----- ----- Alice F 24 Bob M 19 Column alignment ---------------- `tabulate` tries to detect column types automatically, and aligns the values properly. By default it aligns decimal points of the numbers (or flushes integer numbers to the right), and flushes everything else to the left. Possible column alignments (`numalign`, `stralign`) are: "right", "center", "left", "decimal" (only for `numalign`), and None (to disable alignment). Table formats ------------- `floatfmt` is a format specification used for columns which contain numeric data with a decimal point. `None` values are replaced with a `missingval` string: >>> print(tabulate([["spam", 1, None], ... ["eggs", 42, 3.14], ... ["other", None, 2.7]], missingval="?")) ----- -- ---- spam 1 ? eggs 42 3.14 other ? 2.7 ----- -- ---- Various plain-text table formats (`tablefmt`) are supported: 'plain', 'simple', 'grid', 'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', and 'latex_booktabs'. Variable `tabulate_formats` contains the list of currently supported formats. "plain" format doesn't use any pseudographics to draw tables, it separates columns with a double space: >>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], ... ["strings", "numbers"], "plain")) strings numbers spam 41.9999 eggs 451 >>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], tablefmt="plain")) spam 41.9999 eggs 451 "simple" format is like Pandoc simple_tables: >>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], ... ["strings", "numbers"], "simple")) strings numbers --------- --------- spam 41.9999 eggs 451 >>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], tablefmt="simple")) ---- -------- spam 41.9999 eggs 451 ---- -------- "grid" is similar to tables produced by Emacs table.el package or Pandoc grid_tables: >>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], ... ["strings", "numbers"], "grid")) +-----------+-----------+ | strings | numbers | +===========+===========+ | spam | 41.9999 | +-----------+-----------+ | eggs | 451 | +-----------+-----------+ >>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], tablefmt="grid")) +------+----------+ | spam | 41.9999 | +------+----------+ | eggs | 451 | +------+----------+ "fancy_grid" draws a grid using box-drawing characters: >>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], ... ["strings", "numbers"], "fancy_grid")) ╒═══════════╤═══════════╕ │ strings │ numbers │ ╞═══════════╪═══════════╡ │ spam │ 41.9999 │ ├───────────┼───────────┤ │ eggs │ 451 │ ╘═══════════╧═══════════╛ "pipe" is like tables in PHP Markdown Extra extension or Pandoc pipe_tables: >>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], ... ["strings", "numbers"], "pipe")) | strings | numbers | |:----------|----------:| | spam | 41.9999 | | eggs | 451 | >>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], tablefmt="pipe")) |:-----|---------:| | spam | 41.9999 | | eggs | 451 | "orgtbl" is like tables in Emacs org-mode and orgtbl-mode. They are slightly different from "pipe" format by not using colons to define column alignment, and using a "+" sign to indicate line intersections: >>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], ... 
["strings", "numbers"], "orgtbl")) | strings | numbers | |-----------+-----------| | spam | 41.9999 | | eggs | 451 | >>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], tablefmt="orgtbl")) | spam | 41.9999 | | eggs | 451 | "rst" is like a simple table format from reStructuredText; please note that reStructuredText accepts also "grid" tables: >>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], ... ["strings", "numbers"], "rst")) ========= ========= strings numbers ========= ========= spam 41.9999 eggs 451 ========= ========= >>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], tablefmt="rst")) ==== ======== spam 41.9999 eggs 451 ==== ======== "mediawiki" produces a table markup used in Wikipedia and on other MediaWiki-based sites: >>> print(tabulate([["strings", "numbers"], ["spam", 41.9999], ["eggs", "451.0"]], ... headers="firstrow", tablefmt="mediawiki")) {| class="wikitable" style="text-align: left;" |+ |- ! strings !! align="right"| numbers |- | spam || align="right"| 41.9999 |- | eggs || align="right"| 451 |} "html" produces HTML markup: >>> print(tabulate([["strings", "numbers"], ["spam", 41.9999], ["eggs", "451.0"]], ... headers="firstrow", tablefmt="html"))
<table>
<tr><th>strings  </th><th style="text-align: right;">  numbers</th></tr>
<tr><td>spam     </td><td style="text-align: right;">  41.9999</td></tr>
<tr><td>eggs     </td><td style="text-align: right;">      451</td></tr>
</table>
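Number formatting can be controlled independently of the table format:
`floatfmt` takes a format specification (such as ".4f") that is applied
to every column of floats. A small illustration, shown with the default
"simple" format:

>>> print(tabulate([["pi", 3.141593], ["e", 2.718282]], floatfmt=".4f"))
--  ------
pi  3.1416
e   2.7183
--  ------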
"latex" produces a tabular environment of LaTeX document markup: >>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], tablefmt="latex")) \\begin{tabular}{lr} \\hline spam & 41.9999 \\\\ eggs & 451 \\\\ \\hline \\end{tabular} "latex_booktabs" produces a tabular environment of LaTeX document markup using the booktabs.sty package: >>> print(tabulate([["spam", 41.9999], ["eggs", "451.0"]], tablefmt="latex_booktabs")) \\begin{tabular}{lr} \\toprule spam & 41.9999 \\\\ eggs & 451 \\\\ \\bottomrule \end{tabular} """ if tabular_data is None: tabular_data = [] list_of_lists, headers = _normalize_tabular_data(tabular_data, headers) # format rows and columns, convert numeric values to strings cols = list(zip(*list_of_lists)) coltypes = list(map(_column_type, cols)) cols = [[_format(v, ct, floatfmt, missingval) for v in c] for c,ct in zip(cols, coltypes)] # optimization: look for ANSI control codes once, # enable smart width functions only if a control code is found plain_text = '\n'.join(['\t'.join(map(_text_type, headers))] + \ ['\t'.join(map(_text_type, row)) for row in cols]) has_invisible = re.search(_invisible_codes, plain_text) if has_invisible: width_fn = _visible_width else: width_fn = wcswidth # align columns aligns = [numalign if ct in [int,float] else stralign for ct in coltypes] minwidths = [width_fn(h) + MIN_PADDING for h in headers] if headers else [0]*len(cols) cols = [_align_column(c, a, minw, has_invisible) for c, a, minw in zip(cols, aligns, minwidths)] if headers: # align headers and add headers t_cols = cols or [['']] * len(headers) t_aligns = aligns or [stralign] * len(headers) minwidths = [max(minw, width_fn(c[0])) for minw, c in zip(minwidths, t_cols)] headers = [_align_header(h, a, minw) for h, a, minw in zip(headers, t_aligns, minwidths)] rows = list(zip(*cols)) else: minwidths = [width_fn(c[0]) for c in cols] rows = list(zip(*cols)) if not isinstance(tablefmt, TableFormat): tablefmt = _table_formats.get(tablefmt, _table_formats["simple"]) return _format_table(tablefmt, headers, rows, minwidths, aligns) def _build_simple_row(padded_cells, rowfmt): "Format row according to DataRow format without padding." begin, sep, end = rowfmt return (begin + sep.join(padded_cells) + end).rstrip() def _build_row(padded_cells, colwidths, colaligns, rowfmt): "Return a string which represents a row of data cells." if not rowfmt: return None if hasattr(rowfmt, "__call__"): return rowfmt(padded_cells, colwidths, colaligns) else: return _build_simple_row(padded_cells, rowfmt) def _build_line(colwidths, colaligns, linefmt): "Return a string which represents a horizontal line." 
if not linefmt: return None if hasattr(linefmt, "__call__"): return linefmt(colwidths, colaligns) else: begin, fill, sep, end = linefmt cells = [fill*w for w in colwidths] return _build_simple_row(cells, (begin, sep, end)) def _pad_row(cells, padding): if cells: pad = " "*padding padded_cells = [pad + cell + pad for cell in cells] return padded_cells else: return cells def _format_table(fmt, headers, rows, colwidths, colaligns): """Produce a plain-text representation of the table.""" lines = [] hidden = fmt.with_header_hide if (headers and fmt.with_header_hide) else [] pad = fmt.padding headerrow = fmt.headerrow padded_widths = [(w + 2*pad) for w in colwidths] padded_headers = _pad_row(headers, pad) padded_rows = [_pad_row(row, pad) for row in rows] if fmt.lineabove and "lineabove" not in hidden: lines.append(_build_line(padded_widths, colaligns, fmt.lineabove)) if padded_headers: lines.append(_build_row(padded_headers, padded_widths, colaligns, headerrow)) if fmt.linebelowheader and "linebelowheader" not in hidden: lines.append(_build_line(padded_widths, colaligns, fmt.linebelowheader)) if padded_rows and fmt.linebetweenrows and "linebetweenrows" not in hidden: # initial rows with a line below for row in padded_rows[:-1]: lines.append(_build_row(row, padded_widths, colaligns, fmt.datarow)) lines.append(_build_line(padded_widths, colaligns, fmt.linebetweenrows)) # the last row without a line below lines.append(_build_row(padded_rows[-1], padded_widths, colaligns, fmt.datarow)) else: for row in padded_rows: lines.append(_build_row(row, padded_widths, colaligns, fmt.datarow)) if fmt.linebelow and "linebelow" not in hidden: lines.append(_build_line(padded_widths, colaligns, fmt.linebelow)) return "\n".join(lines) def _main(): """\ Usage: tabulate [options] [FILE ...] Pretty-print tabular data. See also https://bitbucket.org/astanin/python-tabulate FILE a filename of the file with tabular data; if "-" or missing, read data from stdin. 
Options: -h, --help show this message -1, --header use the first row of data as a table header -s REGEXP, --sep REGEXP use a custom column separator (default: whitespace) -f FMT, --format FMT set output table format; supported formats: plain, simple, grid, fancy_grid, pipe, orgtbl, rst, mediawiki, html, latex, latex_booktabs, tsv (default: simple) """ import getopt import sys import textwrap usage = textwrap.dedent(_main.__doc__) try: opts, args = getopt.getopt(sys.argv[1:], "h1f:s:", ["help", "header", "format", "separator"]) except getopt.GetoptError as e: print(e) print(usage) sys.exit(2) headers = [] tablefmt = "simple" sep = r"\s+" for opt, value in opts: if opt in ["-1", "--header"]: headers = "firstrow" elif opt in ["-f", "--format"]: if value not in tabulate_formats: print("%s is not a supported table format" % value) print(usage) sys.exit(3) tablefmt = value elif opt in ["-s", "--sep"]: sep = value elif opt in ["-h", "--help"]: print(usage) sys.exit(0) files = [sys.stdin] if not args else args for f in files: if f == "-": f = sys.stdin if _is_file(f): _pprint_file(f, headers=headers, tablefmt=tablefmt, sep=sep) else: with open(f) as fobj: _pprint_file(fobj) def _pprint_file(fobject, headers, tablefmt, sep): rows = fobject.readlines() table = [re.split(sep, r.rstrip()) for r in rows] print(tabulate(table, headers, tablefmt)) if __name__ == "__main__": _main() mycli-1.5.2/mycli/sqlcompleter.py0000644000076600000240000004163312615144005017442 0ustar amjithstaff00000000000000from __future__ import print_function from __future__ import unicode_literals import logging from prompt_toolkit.completion import Completer, Completion from .packages.completion_engine import suggest_type from .packages.parseutils import last_word from .packages.special.favoritequeries import favoritequeries from re import compile, escape from .packages.tabulate import table_formats try: from collections import Counter except ImportError: # python 2.6 from .packages.counter import Counter _logger = logging.getLogger(__name__) class SQLCompleter(Completer): keywords = ['ACCESS', 'ADD', 'ALL', 'ALTER TABLE', 'AND', 'ANY', 'AS', 'ASC', 'AUDIT', 'BEFORE', 'BEGIN', 'BETWEEN', 'BINARY', 'BY', 'CASE', 'CHANGE MASTER TO', 'CHAR', 'CHECK', 'CLUSTER', 'COLUMN', 'COMMENT', 'COMPRESS', 'COMMIT', 'CONNECT', 'COPY', 'CREATE', 'CURRENT', 'DATABASE', 'DATE', 'DECIMAL', 'DEFAULT', 'DELETE FROM', 'DELIMITER', 'DESC', 'DESCRIBE', 'DISTINCT', 'DROP', 'ELSE', 'ENCODING', 'END', 'ESCAPE', 'EXCLUSIVE', 'EXISTS', 'EXTENSION', 'FILE', 'FLOAT', 'FOR', 'FORMAT', 'FORCE_QUOTE', 'FORCE_NOT_NULL', 'FREEZE', 'FROM', 'FULL', 'FUNCTION', 'GRANT', 'GROUP BY', 'HAVING', 'HEADER', 'HOST', 'IDENTIFIED', 'IMMEDIATE', 'IN', 'INCREMENT', 'INDEX', 'INITIAL', 'INSERT INTO', 'INTEGER', 'INTERSECT', 'INTO', 'INTERVAL', 'IS', 'JOIN', 'LEFT', 'LEVEL', 'LIKE', 'LIMIT', 'LOCK', 'LOG', 'LOGS', 'LONG', 'MASTER', 'MINUS', 'MODE', 'MODIFY', 'NOAUDIT', 'NOCOMPRESS', 'NOT', 'NOWAIT', 'NULL', 'NUMBER', 'OIDS', 'OF', 'OFFLINE', 'ON', 'ONLINE', 'OPTION', 'OR', 'ORDER BY', 'OUTER', 'OWNER', 'PASSWORD', 'PCTFREE', 'PORT', 'PRIMARY', 'PRIOR', 'PRIVILEGES', 'PROCESSLIST', 'PURGE', 'QUOTE', 'RAW', 'RENAME', 'REPAIR', 'RESOURCE', 'RESET', 'REVOKE', 'RIGHT', 'ROLLBACK', 'ROW', 'ROWID', 'ROWNUM', 'ROWS', 'SELECT', 'SESSION', 'SET', 'SHARE', 'SHOW', 'SIZE', 'SLAVE', 'SLAVES', 'SMALLINT', 'START', 'STOP', 'SUCCESSFUL', 'SYNONYM', 'SYSDATE', 'TABLE', 'TEMPLATE', 'THEN', 'TO', 'TRANSACTION', 'TRIGGER', 'TRUNCATE', 'UID', 'UNION', 'UNIQUE', 'UPDATE', 'USE', 'USER', 'USING', 
'VALIDATE', 'VALUES', 'VARCHAR', 'VARCHAR2', 'VIEW', 'WHEN', 'WHENEVER', 'WHERE', 'WITH'] functions = ['AVG', 'COUNT', 'DISTINCT', 'FIRST', 'FORMAT', 'LAST', 'LCASE', 'LEN', 'MAX', 'MIN', 'MID', 'NOW', 'ROUND', 'SUM', 'TOP', 'UCASE'] show_items = [] change_items = ['MASTER_BIND', 'MASTER_HOST', 'MASTER_USER', 'MASTER_PASSWORD', 'MASTER_PORT', 'MASTER_CONNECT_RETRY', 'MASTER_HEARTBEAT_PERIOD', 'MASTER_LOG_FILE', 'MASTER_LOG_POS', 'RELAY_LOG_FILE', 'RELAY_LOG_POS', 'MASTER_SSL', 'MASTER_SSL_CA', 'MASTER_SSL_CAPATH', 'MASTER_SSL_CERT', 'MASTER_SSL_KEY', 'MASTER_SSL_CIPHER', 'MASTER_SSL_VERIFY_SERVER_CERT', 'IGNORE_SERVER_IDS'] users = [] def __init__(self, smart_completion=True): super(self.__class__, self).__init__() self.smart_completion = smart_completion self.reserved_words = set() for x in self.keywords: self.reserved_words.update(x.split()) self.name_pattern = compile("^[_a-z][_a-z0-9\$]*$") self.special_commands = [] self.table_formats = table_formats() self.reset_completions() def escape_name(self, name): if name and ((not self.name_pattern.match(name)) or (name.upper() in self.reserved_words) or (name.upper() in self.functions)): name = '`%s`' % name return name def unescape_name(self, name): """ Unquote a string.""" if name and name[0] == '"' and name[-1] == '"': name = name[1:-1] return name def escaped_names(self, names): return [self.escape_name(name) for name in names] def extend_special_commands(self, special_commands): # Special commands are not part of all_completions since they can only # be at the beginning of a line. self.special_commands.extend(special_commands) def extend_database_names(self, databases): self.databases.extend(databases) def extend_keywords(self, additional_keywords): self.keywords.extend(additional_keywords) self.all_completions.update(additional_keywords) def extend_show_items(self, show_items): for show_item in show_items: self.show_items.extend(show_item) self.all_completions.update(show_item) def extend_change_items(self, change_items): for change_item in change_items: self.change_items.extend(change_item) self.all_completions.update(change_item) def extend_users(self, users): for user in users: self.users.extend(user) self.all_completions.update(user) def extend_schemata(self, schema): if schema is None: return metadata = self.dbmetadata['tables'] metadata[schema] = {} # dbmetadata.values() are the 'tables' and 'functions' dicts for metadata in self.dbmetadata.values(): metadata[schema] = {} self.all_completions.update(schema) def extend_relations(self, data, kind): """ extend metadata for tables or views :param data: list of (rel_name, ) tuples :param kind: either 'tables' or 'views' :return: """ # 'data' is a generator object. It can throw an exception while being # consumed. This could happen if the user has launched the app without # specifying a database name. This exception must be handled to prevent # crashing. try: data = [self.escaped_names(d) for d in data] except Exception: data = [] # dbmetadata['tables'][$schema_name][$table_name] should be a list of # column names. 
Default to an asterisk metadata = self.dbmetadata[kind] for relname in data: try: metadata[self.dbname][relname[0]] = ['*'] except KeyError: _logger.error('%r %r listed in unrecognized schema %r', kind, relname[0], self.dbname) self.all_completions.add(relname[0]) def extend_columns(self, column_data, kind): """ extend column metadata :param column_data: list of (rel_name, column_name) tuples :param kind: either 'tables' or 'views' :return: """ # 'column_data' is a generator object. It can throw an exception while # being consumed. This could happen if the user has launched the app # without specifying a database name. This exception must be handled to # prevent crashing. try: column_data = [self.escaped_names(d) for d in column_data] except Exception: column_data = [] metadata = self.dbmetadata[kind] for relname, column in column_data: metadata[self.dbname][relname].append(column) self.all_completions.add(column) def extend_functions(self, func_data): # 'func_data' is a generator object. It can throw an exception while # being consumed. This could happen if the user has launched the app # without specifying a database name. This exception must be handled to # prevent crashing. try: func_data = [self.escaped_names(d) for d in func_data] except Exception: func_data = [] # dbmetadata['functions'][$schema_name][$function_name] should return # function metadata. metadata = self.dbmetadata['functions'] for func in func_data: metadata[self.dbname][func[0]] = None self.all_completions.add(func[0]) def set_dbname(self, dbname): self.dbname = dbname def reset_completions(self): self.databases = [] self.dbname = '' self.dbmetadata = {'tables': {}, 'views': {}, 'functions': {}} self.all_completions = set(self.keywords + self.functions) @staticmethod def find_matches(text, collection, start_only=False, fuzzy=True): """Find completion matches for the given text. Given the user's input text and a collection of available completions, find completions matching the last word of the text. If `start_only` is True, the text will match an available completion only at the beginning. Otherwise, a completion is considered a match if the text appears anywhere within it. yields prompt_toolkit Completion instances for any matches found in the collection of available completions. """ text = last_word(text, include='most_punctuations').lower() completions = [] if fuzzy: regex = '.*?'.join(map(escape, text)) pat = compile('(%s)' % regex) for item in sorted(collection): r = pat.search(item.lower()) if r: completions.append((len(r.group()), r.start(), item)) else: match_end_limit = len(text) if start_only else None for item in sorted(collection): match_point = item.lower().find(text, 0, match_end_limit) if match_point >= 0: completions.append((len(text), match_point, item)) return (Completion(z, -len(text)) for x, y, z in sorted(completions)) def get_completions(self, document, complete_event, smart_completion=None): word_before_cursor = document.get_word_before_cursor(WORD=True) if smart_completion is None: smart_completion = self.smart_completion # If smart_completion is off then match any word that starts with # 'word_before_cursor'. 
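        # (That is plain prefix matching: with start_only=True and
        # fuzzy=False, find_matches only returns items that begin with
        # the typed word.)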
if not smart_completion: return self.find_matches(word_before_cursor, self.all_completions, start_only=True, fuzzy=False) completions = [] suggestions = suggest_type(document.text, document.text_before_cursor) for suggestion in suggestions: _logger.debug('Suggestion type: %r', suggestion['type']) if suggestion['type'] == 'column': tables = suggestion['tables'] _logger.debug("Completion column scope: %r", tables) scoped_cols = self.populate_scoped_cols(tables) if suggestion.get('drop_unique'): # drop_unique is used for 'tb11 JOIN tbl2 USING (...' # which should suggest only columns that appear in more than # one table scoped_cols = [col for (col, count) in Counter(scoped_cols).items() if count > 1 and col != '*'] cols = self.find_matches(word_before_cursor, scoped_cols) completions.extend(cols) elif suggestion['type'] == 'function': # suggest user-defined functions using substring matching funcs = self.populate_schema_objects(suggestion['schema'], 'functions') user_funcs = self.find_matches(word_before_cursor, funcs) completions.extend(user_funcs) # suggest hardcoded functions using startswith matching only if # there is no schema qualifier. If a schema qualifier is # present it probably denotes a table. # eg: SELECT * FROM users u WHERE u. if not suggestion['schema']: predefined_funcs = self.find_matches(word_before_cursor, self.functions, start_only=True, fuzzy=False) completions.extend(predefined_funcs) elif suggestion['type'] == 'table': tables = self.populate_schema_objects(suggestion['schema'], 'tables') tables = self.find_matches(word_before_cursor, tables) completions.extend(tables) elif suggestion['type'] == 'view': views = self.populate_schema_objects(suggestion['schema'], 'views') views = self.find_matches(word_before_cursor, views) completions.extend(views) elif suggestion['type'] == 'alias': aliases = suggestion['aliases'] aliases = self.find_matches(word_before_cursor, aliases) completions.extend(aliases) elif suggestion['type'] == 'database': dbs = self.find_matches(word_before_cursor, self.databases) completions.extend(dbs) elif suggestion['type'] == 'keyword': keywords = self.find_matches(word_before_cursor, self.keywords, start_only=True, fuzzy=False) completions.extend(keywords) elif suggestion['type'] == 'show': show_items = self.find_matches(word_before_cursor, self.show_items, start_only=False, fuzzy=True) completions.extend(show_items) elif suggestion['type'] == 'change': change_items = self.find_matches(word_before_cursor, self.change_items, start_only=False, fuzzy=True) completions.extend(change_items) elif suggestion['type'] == 'user': users = self.find_matches(word_before_cursor, self.users, start_only=False, fuzzy=True) completions.extend(users) elif suggestion['type'] == 'special': special = self.find_matches(word_before_cursor, self.special_commands, start_only=True, fuzzy=False) completions.extend(special) elif suggestion['type'] == 'favoritequery': queries = self.find_matches(word_before_cursor, favoritequeries.list(), start_only=False, fuzzy=True) completions.extend(queries) elif suggestion['type'] == 'table_format': formats = self.find_matches(word_before_cursor, self.table_formats, start_only=True, fuzzy=False) completions.extend(formats) return completions def populate_scoped_cols(self, scoped_tbls): """ Find all columns in a set of scoped_tables :param scoped_tbls: list of (schema, table, alias) tuples :return: list of column names """ columns = [] meta = self.dbmetadata for tbl in scoped_tbls: # A fully qualified schema.relname reference or 
default_schema # DO NOT escape schema names. schema = tbl[0] or self.dbname relname = tbl[1] escaped_relname = self.escape_name(tbl[1]) # We don't know if schema.relname is a table or view. Since # tables and views cannot share the same name, we can check one # at a time try: columns.extend(meta['tables'][schema][relname]) # Table exists, so don't bother checking for a view continue except KeyError: try: columns.extend(meta['tables'][schema][escaped_relname]) # Table exists, so don't bother checking for a view continue except KeyError: pass try: columns.extend(meta['views'][schema][relname]) except KeyError: pass return columns def populate_schema_objects(self, schema, obj_type): """Returns list of tables or functions for a (optional) schema""" metadata = self.dbmetadata[obj_type] schema = schema or self.dbname try: objects = metadata[schema].keys() except KeyError: # schema doesn't exist objects = [] return objects mycli-1.5.2/mycli/sqlexecute.py0000644000076600000240000001736512610175477017133 0ustar amjithstaff00000000000000import logging import pymysql import sqlparse from .packages import connection, special _logger = logging.getLogger(__name__) class SQLExecute(object): databases_query = '''SHOW DATABASES''' tables_query = '''SHOW TABLES''' version_query = '''SELECT @@VERSION''' version_comment_query = '''SELECT @@VERSION_COMMENT''' show_candidates_query = '''SELECT name from mysql.help_topic WHERE name like "SHOW %"''' users_query = '''SELECT CONCAT("'", user, "'@'",host,"'") FROM mysql.user''' functions_query = '''SELECT ROUTINE_NAME FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE="FUNCTION" AND ROUTINE_SCHEMA = "%s"''' table_columns_query = '''select TABLE_NAME, COLUMN_NAME from information_schema.columns where table_schema = '%s' order by table_name,ordinal_position''' def __init__(self, database, user, password, host, port, socket, charset): self.dbname = database self.user = user self.password = password self.host = host self.port = port self.socket = socket self.charset = charset self._server_type = None self.connect() def connect(self, database=None, user=None, password=None, host=None, port=None, socket=None, charset=None): db = (database or self.dbname) user = (user or self.user) password = (password or self.password) host = (host or self.host) port = (port or self.port) socket = (socket or self.socket) charset = (charset or self.charset) _logger.debug('Connection DB Params: \n' '\tdatabase: %r' '\tuser: %r' '\thost: %r' '\tport: %r' '\tsocket: %r' '\tcharset: %r', database, user, host, port, socket, charset) conn = connection.connect(database=db, user=user, password=password, host=host, port=port, unix_socket=socket, use_unicode=True, charset=charset, autocommit=True, client_flag=pymysql.constants.CLIENT.INTERACTIVE, cursorclass=connection.Cursor) if hasattr(self, 'conn'): self.conn.close() self.conn = conn # Update them after the connection is made to ensure that it was a # successful connection. self.dbname = db self.user = user self.password = password self.host = host self.port = port self.socket = socket self.charset = charset def run(self, statement): """Execute the sql in the database and return the results. The results are a list of tuples. Each tuple has 4 values (title, rows, headers, status). """ # Remove spaces and EOL statement = statement.strip() if not statement: # Empty string yield (None, None, None, None) # Split the sql into separate queries and run each one. # Unless it's saving a favorite query, in which case we # want to save them all together. 
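            # e.g. a line like "\fs top10 SELECT * FROM users LIMIT 10" is
            # kept as a single component so the whole text is saved under
            # one name, while ordinary input such as "SELECT 1; SELECT 2"
            # is split into separate statements below.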
if statement.startswith('\\fs'): components = [statement] else: components = sqlparse.split(statement) for sql in components: # Remove spaces, eol and semi-colons. sql = sql.rstrip(';') # \G is treated specially since we have to set the expanded output # and then proceed to execute the sql as normal. if sql.endswith('\\G'): special.set_expanded_output(True) yield self.execute_normal_sql(sql.rsplit('\\G', 1)[0]) else: try: # Special command _logger.debug('Trying a dbspecial command. sql: %r', sql) cur = self.conn.cursor() for result in special.execute(cur, sql): yield result except special.CommandNotFound: # Regular SQL yield self.execute_normal_sql(sql) def execute_normal_sql(self, split_sql): _logger.debug('Regular sql statement. sql: %r', split_sql) cur = self.conn.cursor() num_rows = cur.execute(split_sql) title = None if num_rows == 1: status = '%d row in set' % num_rows else: status = '%d rows in set' % num_rows with self.conn.cursor() as temp_cursor: temp_cursor.execute('SELECT row_count()') n = temp_cursor.fetchone()[0] if n < 0: pass elif n == 1: status = 'Query OK, %d row affected' % n else: status = 'Query OK, %d rows affected' % n # cur.description will be None for operations that do not return # rows. if cur.description: headers = [x[0] for x in cur.description] return (title, cur, headers, status) # cur.statusmessage) else: _logger.debug('No rows in result.') return (title, None, None, status) # cur.statusmessage) def tables(self): """Yields table names""" with self.conn.cursor() as cur: _logger.debug('Tables Query. sql: %r', self.tables_query) cur.execute(self.tables_query) for row in cur: yield row def table_columns(self): """Yields column names""" with self.conn.cursor() as cur: _logger.debug('Columns Query. sql: %r', self.table_columns_query) cur.execute(self.table_columns_query % self.dbname) for row in cur: yield row def databases(self): with self.conn.cursor() as cur: _logger.debug('Databases Query. sql: %r', self.databases_query) cur.execute(self.databases_query) return [x[0] for x in cur.fetchall()] def functions(self): """Yields tuples of (schema_name, function_name)""" with self.conn.cursor() as cur: _logger.debug('Functions Query. sql: %r', self.functions_query) cur.execute(self.functions_query % self.dbname) for row in cur: yield row def show_candidates(self): with self.conn.cursor() as cur: _logger.debug('Show Query. sql: %r', self.show_candidates_query) try: cur.execute(self.show_candidates_query) except pymysql.OperationalError as e: _logger.error('No show completions due to %r', e) yield '' else: for row in cur: yield (row[0].split(None, 1)[-1], ) def users(self): with self.conn.cursor() as cur: _logger.debug('Users Query. sql: %r', self.users_query) try: cur.execute(self.users_query) except pymysql.OperationalError as e: _logger.error('No user completions due to %r', e) yield '' else: for row in cur: yield row def server_type(self): if self._server_type: return self._server_type with self.conn.cursor() as cur: _logger.debug('Version Query. sql: %r', self.version_query) cur.execute(self.version_query) version = cur.fetchone()[0] _logger.debug('Version Comment. 
sql: %r', self.version_comment_query) cur.execute(self.version_comment_query) version_comment = cur.fetchone()[0].lower() if 'mariadb' in version_comment: product_type = 'mariadb' elif 'percona' in version_comment: product_type = 'percona' else: product_type = 'mysql' self._server_type = (product_type, version) return self._server_type mycli-1.5.2/mycli.egg-info/0000755000076600000240000000000012621400311016031 5ustar amjithstaff00000000000000mycli-1.5.2/mycli.egg-info/dependency_links.txt0000644000076600000240000000000112621400311022077 0ustar amjithstaff00000000000000 mycli-1.5.2/mycli.egg-info/entry_points.txt0000644000076600000240000000011012621400311021317 0ustar amjithstaff00000000000000 [console_scripts] mycli=mycli.main:cli mycli-1.5.2/mycli.egg-info/pbr.json0000644000076600000240000000005712621400311017511 0ustar amjithstaff00000000000000{"is_release": false, "git_version": "18d8f55"}mycli-1.5.2/mycli.egg-info/PKG-INFO0000644000076600000240000000174312621400311017133 0ustar amjithstaff00000000000000Metadata-Version: 1.1 Name: mycli Version: 1.5.2 Summary: CLI for MySQL Database. With auto-completion and syntax highlighting. Home-page: http://mycli.net Author: Amjith Ramanujam Author-email: amjith[dot]r[at]gmail.com License: LICENSE.txt Description: CLI for MySQL Database. With auto-completion and syntax highlighting. Platform: UNKNOWN Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: BSD License Classifier: Operating System :: Unix Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.3 Classifier: Programming Language :: Python :: 3.4 Classifier: Programming Language :: SQL Classifier: Topic :: Database Classifier: Topic :: Database :: Front-Ends Classifier: Topic :: Software Development Classifier: Topic :: Software Development :: Libraries :: Python Modules mycli-1.5.2/mycli.egg-info/requires.txt0000644000076600000240000000017312621400311020432 0ustar amjithstaff00000000000000click >= 4.1 Pygments >= 2.0 prompt_toolkit==0.46 PyMySQL >= 0.6.2 sqlparse >= 0.1.16 configobj >= 5.0.6 pycrypto >= 2.6.1 mycli-1.5.2/mycli.egg-info/SOURCES.txt0000644000076600000240000000211712621400311017716 0ustar amjithstaff00000000000000LICENSE.txt MANIFEST.in README.md changelog.md new_changelog.md setup.py mycli/__init__.py mycli/clibuffer.py mycli/clistyle.py mycli/clitoolbar.py mycli/completion_refresher.py mycli/config.py mycli/encodingutils.py mycli/filters.py mycli/key_bindings.py mycli/lexer.py mycli/magic.py mycli/main.py mycli/myclirc mycli/sqlcompleter.py mycli/sqlexecute.py mycli.egg-info/PKG-INFO mycli.egg-info/SOURCES.txt mycli.egg-info/dependency_links.txt mycli.egg-info/entry_points.txt mycli.egg-info/pbr.json mycli.egg-info/requires.txt mycli.egg-info/top_level.txt mycli/../AUTHORS mycli/../SPONSORS mycli/packages/__init__.py mycli/packages/completion_engine.py mycli/packages/connection.py mycli/packages/counter.py mycli/packages/expanded.py mycli/packages/ordereddict.py mycli/packages/parseutils.py mycli/packages/tabulate.py mycli/packages/literals/__init__.py mycli/packages/literals/main.py mycli/packages/special/__init__.py mycli/packages/special/dbcommands.py mycli/packages/special/favoritequeries.py mycli/packages/special/iocommands.py mycli/packages/special/main.py 
mycli/packages/special/utils.pymycli-1.5.2/mycli.egg-info/top_level.txt0000644000076600000240000000000612621400311020557 0ustar amjithstaff00000000000000mycli mycli-1.5.2/new_changelog.md0000644000076600000240000000466412616156525016401 0ustar amjithstaff000000000000001.5.0: ====== Features: --------- * Make a config option to enable `audit_log`. (Thanks: [Matheus Rosa]). * Add support for reading .mylogin.cnf to get user credentials. (Thanks: [Thomas Roten]). This feature is only available when `pycrypto` package is installed. * Register the special command `prompt` with the `\R` as alias. (Thanks: [Matheus Rosa]). Users can now change the mysql prompt at runtime using `prompt` command. eg: ``` mycli> prompt \u@\h> Changed prompt format to \u@\h> Time: 0.001s amjith@localhost> ``` * Perform completion refresh in a background thread. Now mycli can handle databases with thousands of tables without blocking. * Add support for `system` command. (Thanks: [Matheus Rosa]). Users can now run a system command from within mycli as follows: ``` amjith@localhost:(none)>system cat tmp.sql select 1; select * from django_migrations; ``` * Caught and hexed binary fields in MySQL. (Thanks: [Daniel West]). Geometric fields stored in a database will be displayed as hexed strings. * Treat enter key as tab when the suggestion menu is open. (Thanks: [Matheus Rosa]) * Add "delete" and "truncate" as destructive commands. (Thanks: [Martijn Engler]). * Change \dt syntax to add an optional table name. (Thanks: [shoma]). `\dt [tablename]` will describe the columns in a table. * Add TRANSACTION related keywords. * Treat DESC and EXPLAIN as DESCRIBE. (Thanks: [spacewander]). Bug Fixes: ---------- * Fix the removal of whitespace from table output. (Thanks: [Amjith Ramanujam]). * Add ability to make suggestions for compound join clauses. (Thanks: [Matheus Rosa]). * Fix the incorrect reporting of command time. Internal Changes: ----------------- * Make pycrypto optional and only install it in \*nix systems. (Thanks: [Iryna Cherniavska]). * Add badge for PyPI version to README. (Thanks: [Shoma Suzuki]). * Updated release script with a --dry-run and --confirm-steps option. (Thanks: [Iryna Cherniavska]). * Adds support for PyMySQL 0.6.2 and above. This is useful for debian package builders. (Thanks: [Thomas Roten]). * Disable click warning. [Daniel West]: http://github.com/danieljwest [Iryna Cherniavska]: https://github.com/j-bennet [Kacper Kwapisz]: https://github.com/KKKas [Martijn Engler]: https://github.com/martijnengler [Matheus Rosa]: https://github.com/mdsrosa [Shoma Suzuki]: https://github.com/shoma [spacewander]: https://github.com/spacewander [Thomas Roten]: https://github.com/tsroten mycli-1.5.2/PKG-INFO0000644000076600000240000000174312621400311014324 0ustar amjithstaff00000000000000Metadata-Version: 1.1 Name: mycli Version: 1.5.2 Summary: CLI for MySQL Database. With auto-completion and syntax highlighting. Home-page: http://mycli.net Author: Amjith Ramanujam Author-email: amjith[dot]r[at]gmail.com License: LICENSE.txt Description: CLI for MySQL Database. With auto-completion and syntax highlighting. 
Platform: UNKNOWN Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: BSD License Classifier: Operating System :: Unix Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.3 Classifier: Programming Language :: Python :: 3.4 Classifier: Programming Language :: SQL Classifier: Topic :: Database Classifier: Topic :: Database :: Front-Ends Classifier: Topic :: Software Development Classifier: Topic :: Software Development :: Libraries :: Python Modules mycli-1.5.2/README.md0000644000076600000240000001264012610237744014524 0ustar amjithstaff00000000000000# mycli [![Build Status](https://travis-ci.org/dbcli/mycli.svg?branch=master)](https://travis-ci.org/dbcli/mycli) [![PyPI](https://img.shields.io/pypi/v/mycli.svg?style=plastic)](https://pypi.python.org/pypi/mycli) [![Join the chat at https://gitter.im/dbcli/mycli](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/dbcli/mycli?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) A command line client for MySQL that can do auto-completion and syntax highlighting. HomePage: [http://mycli.net](http://mycli.net) Debian Packages via [PackageCloud.io](https://packagecloud.io/amjith/mycli). ![Completion](screenshots/tables.png) ![CompletionGif](screenshots/main.gif) Postgres Equivalent: [http://pgcli.com](http://pgcli.com) Quick Start ----------- If you already know how to install python packages, then you can install it via pip: You might need sudo on linux. ``` $ pip install mycli ``` or ``` $ brew update && brew install mycli # Only on OS X ``` Check the [detailed install instructions](#detailed-install-instructions) for debian packages or getting started with pip. ### Usage $ mycli --help Usage: mycli [OPTIONS] [DATABASE] Options: -h, --host TEXT Host address of the database. -P, --port TEXT Port number to use for connection. Honors $MYSQL_TCP_PORT -u, --user TEXT User name to connect to the database. -S, --socket TEXT The socket file to use for connection. -p, --password Force password prompt. --pass TEXT Password to connect to the database -v, --version Version of mycli. -D, --database TEXT Database to use. -R, --prompt TEXT Prompt format (Default: "\t \u@\h:\d> ") -l, --logfile FILENAME Log every query and its results to a file. --defaults-group-suffix TEXT Read config group with the specified suffix. --defaults-file PATH Only read default options from the given file --login-path TEXT Read this path from the login file. --help Show this message and exit. ### Examples $ mycli local_database $ mycli -h localhost -u root app_db $ mycli mysql://amjith@localhost:3306/django_poll Features -------- `mycli` is written using [prompt_toolkit](https://github.com/jonathanslenders/python-prompt-toolkit/). * Auto-completion as you type for SQL keywords as well as tables and columns in the database. * Syntax highlighting using Pygments. * Smart-completion (enabled by default) will suggest context-sensitive completion. - `SELECT * FROM ` will only show table names. - `SELECT * FROM users WHERE ` will only show column names. * Config file is automatically created at ``~/.myclirc`` at first launch. * Pretty prints tabular data. Contributions: -------------- If you're interested in contributing to this project, first of all I would like to extend my heartfelt gratitude. 
I've written a small doc to describe how to get this running in a development setup. https://github.com/dbcli/mycli/blob/master/DEVELOP.rst Please feel free to reach out to me if you need help. My email: amjith.r@gmail.com Twitter: [@amjithr](http://twitter.com/amjithr) ## Detailed Install Instructions: ### Debian/Ubuntu Package: The debian package for `mycli` is hosted on [packagecloud.io](https://packagecloud.io/amjith/mycli). Add the gpg key for packagecloud for package verification. ``` curl https://packagecloud.io/gpg.key | apt-key add - ``` Install a package called apt-transport-https to make it possible for apt to fetch packages over https. ``` apt-get install -y apt-transport-https ``` Add the mycli package repo to the apt source. ``` echo "deb https://packagecloud.io/amjith/mycli/ubuntu/ trusty main" | sudo tee -a /etc/apt/sources.list ``` Update the apt sources and install mycli. ``` $ sudo apt-get update $ sudo apt-get install mycli ``` Now `mycli` can be upgraded easily by using ``sudo apt-get upgrade mycli``. ### RHEL, Centos, Fedora: I haven't built an RPM package for mycli yet. So please use `pip` to install `mycli`. You can install pip on your system using: ``` $ sudo yum install python-pip ``` Once that is installed, you can install mycli as follows: ``` $ sudo pip install mycli ``` ### Thanks: This project was funded through kickstarter. My thanks to the [backers](http://mycli.net/sponsors) who supported the project. A special thanks to [Jonathan Slenders](https://twitter.com/jonathan_s) for creating [Python Prompt Toolkit](http://github.com/jonathanslenders/python-prompt-toolkit), which is quite literally the backbone library, that made this app possible. Jonathan has also provided valuable feedback and support during the development of this app. [Click](http://click.pocoo.org/3/) is used for command line option parsing and printing error messages. Thanks to [PyMysql](http://www.pymysql.org/) for a pure python adapter to MySQL database. [Tabulate](https://pypi.python.org/pypi/tabulate) library is used for pretty printing the output of tables. ### Compatibility Tests have been run on OS X and Linux. THIS HAS NOT BEEN TESTED IN WINDOWS, but the libraries used in this app are Windows compatible. This means it should work without any modifications. If you're unable to run it on Windows, please file a bug. I will try my best to fix it. mycli-1.5.2/setup.cfg0000644000076600000240000000007312621400311015043 0ustar amjithstaff00000000000000[egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 mycli-1.5.2/setup.py0000644000076600000240000000416412615144005014751 0ustar amjithstaff00000000000000import re import ast import platform from setuptools import setup, find_packages _version_re = re.compile(r'__version__\s+=\s+(.*)') with open('mycli/__init__.py', 'rb') as f: version = str(ast.literal_eval(_version_re.search( f.read().decode('utf-8')).group(1))) description = 'CLI for MySQL Database. With auto-completion and syntax highlighting.' install_requirements = [ 'click >= 4.1', 'Pygments >= 2.0', # Pygments has to be Capitalcased. WTF? 'prompt_toolkit==0.46', 'PyMySQL >= 0.6.2', 'sqlparse >= 0.1.16', 'configobj >= 5.0.6', ] # pycrypto is a hard package to install on Windows, so we make it an optional # dependency. When it's installed, we can read mylogin.cnf, when it is not # available, we skip reading mylogin.cnf and print a warning message. 
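# (On Windows mycli still installs and runs; as the note above says, it
# simply skips reading ~/.mylogin.cnf and prints a warning instead.)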
if platform.system() != 'Windows': install_requirements.append('pycrypto >= 2.6.1') setup( name='mycli', author='Amjith Ramanujam', author_email='amjith[dot]r[at]gmail.com', version=version, license='LICENSE.txt', url='http://mycli.net', packages=find_packages(), package_data={'mycli': ['myclirc', '../AUTHORS', '../SPONSORS']}, description=description, long_description=description, install_requires=install_requirements, entry_points=''' [console_scripts] mycli=mycli.main:cli ''', classifiers=[ 'Intended Audience :: Developers', 'License :: OSI Approved :: BSD License', 'Operating System :: Unix', 'Programming Language :: Python', 'Programming Language :: Python :: 2.6', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.3', 'Programming Language :: Python :: 3.4', 'Programming Language :: SQL', 'Topic :: Database', 'Topic :: Database :: Front-Ends', 'Topic :: Software Development', 'Topic :: Software Development :: Libraries :: Python Modules', ], ) mycli-1.5.2/SPONSORS0000644000076600000240000000074412576714606014470 0ustar amjithstaff00000000000000Many thanks to the following Kickstarter backers. * Tech Blue Software * jweiland.net # Silver Sponsors * Whitane Tech * Open Query Pty Ltd * Prathap Ramamurthy * Lincoln Loop # Sponsors * Nathan Taggart * Iryna Cherniavska * Sudaraka Wijesinghe * www.mysqlfanboy.com * Steve Robbins * Norbert Spichtig * orpharion bestheneme * Daniel Black * Anonymous * Magnus udd * Anonymous * Lewis Peckover * Cyrille Tabary * Heath Naylor * Ted Pennings * Chris Anderton * Jonathan Slenders